Rethinking Personas for Fairness: Algorithmic Transparency and Accountability in Data-Driven Personas

Algorithmic fairness criteria for machine learning models are attracting widespread research interest. These criteria are equally relevant to data-driven personas, which rely on online user data and opaque algorithmic processes.

Overall, while technology offers promising opportunities for persona design practice, several ethical concerns must be addressed to meet ethical standards and earn end-user trust.


In this research, led by Joni Salminen, we outline the key ethical concerns in data-driven persona generation and provide design implications to overcome these ethical concerns. 

Good practices for data-driven persona development include (a) creating personas from outliers as well, not only from majority groups, (b) using data to demonstrate diversity within a persona, (c) explaining the methods and their limitations as a form of transparency, and (d) triangulating the persona information to increase truthfulness.
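As a rough illustration of practices (a) and (b), the sketch below splits users into a majority group and outliers by distance from the feature centroid, and summarizes each group with both its mean and its spread. This is a minimal, hypothetical example using a simple distance threshold; it is not the segmentation method used in the paper, and the function names are our own.

```python
import numpy as np

def segment_users(features, z_threshold=2.0):
    """Split users into a majority group and outliers by distance from
    the feature centroid. Hypothetical helper for illustration only,
    not the paper's actual method."""
    features = np.asarray(features, dtype=float)
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    cutoff = dists.mean() + z_threshold * dists.std()
    outlier_mask = dists > cutoff
    return features[~outlier_mask], features[outlier_mask]

def persona_summary(group, label):
    """Summarize a user group as a persona-like record, reporting the
    spread alongside the mean to show diversity within the persona."""
    return {
        "label": label,
        "size": len(group),
        "mean": group.mean(axis=0).round(2).tolist(),
        "std": group.std(axis=0).round(2).tolist(),  # diversity signal
    }

# Usage: 50 similar users plus one clearly atypical user.
data = [[1.0, 1.0]] * 50 + [[50.0, 50.0]]
majority, outliers = segment_users(data)
personas = [
    persona_summary(majority, "Majority persona"),
    persona_summary(outliers, "Outlier persona"),
]
```

Creating a persona for the outlier group (rather than discarding it as noise) is what keeps minority user behaviors represented in the persona set.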

Salminen, J., Jung, S.G., Chowdury, S.A., and Jansen, B. J. (2020) Rethinking Personas for Fairness: Algorithmic Transparency and Accountability in Data-Driven Personas. 22nd International Conference on Human-Computer Interaction (HCII2020). Copenhagen, Denmark, 19-24 July 2020. 82-100.