Building on the notion of algorithmic bias, we posit that creating user segments such as personas from data may over- or under-represent certain segments (FAIRNESS), fail to properly represent the diversity of the user population (DIVERSITY), or produce inconsistent results when hyperparameters are changed (CONSISTENCY).
Using data on 363M video views collected from a global news and media organization, we compare personas created from this data using different algorithms.
Results indicate that the algorithms fall into two groups: those that generate personas with low diversity–high fairness and those that generate personas with high diversity–low fairness.
The algorithms that rank high on diversity tend to rank low on fairness (Spearman’s correlation: -0.83). The algorithm that best balances diversity, fairness, and consistency is Spectral Embedding.
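The reported diversity–fairness trade-off can be illustrated with a rank correlation. The sketch below computes Spearman's rho from scratch for two rankings of algorithms; the six algorithm ranks are hypothetical, chosen only to demonstrate the computation, and do not reproduce the paper's underlying data.

```python
# Spearman's rank correlation between two rankings without ties:
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), where d_i is the
# difference between the two ranks of item i.

def spearman_rho(rank_x, rank_y):
    n = len(rank_x)
    d_sq = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical ranks of six algorithms on each criterion (1 = best).
diversity_rank = [1, 2, 3, 4, 5, 6]
fairness_rank = [4, 6, 5, 3, 2, 1]

print(round(spearman_rho(diversity_rank, fairness_rank), 2))  # -0.83
```

A strong negative rho indicates that algorithms ranking near the top on one criterion tend to rank near the bottom on the other, which is the pattern the abstract reports.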
The results imply that the choice of algorithm is a crucial step in data-driven user segmentation, because the algorithm fundamentally impacts the demographic attributes of the generated personas and thus influences how decision makers view the user population.
The results have implications for algorithmic bias in user segmentation and for creating user segments that consider not only commercial segmentation criteria but also criteria derived from ethical discussions in the computing community.
Salminen, J. O., Chhirang, K., Jung, S.G., Thirumuruganathan, S., Guan, K. W., and Jansen, B. J. (2022) Big Data, Small Personas: How Algorithms Shape the Demographic Representation of Data-Driven User Segments. Big Data. 10(4), 313–336. https://doi.org/10.1089/big.2021.0177