Algorithmically generated personas can help organizations understand their social media audiences.
However, when personas are created algorithmically from social media user data, they may contain toxic quotes that negatively affect content creators’ perceptions of the personas.
To address this issue, we have implemented toxicity detection in an algorithmic persona generation system that can use tens of millions of social media interactions and user comments for persona creation. In the system’s user interface, we provide a feature that lets content creators turn toxic quotes on or off, depending on their preferences.
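To make the toggle concrete, a minimal sketch of how such filtering could work is shown below. The function name, the 0.5 threshold, and the pre-computed toxicity scores are illustrative assumptions; the paper does not specify the detector or implementation used in the actual system.

```python
def filter_persona_quotes(quotes, show_toxic=False, threshold=0.5):
    """Return the quotes to display for a persona.

    `quotes` is a list of (text, toxicity_score) pairs, where the score is
    assumed to come from a separate toxicity classifier (an illustrative
    stand-in, not the system's actual detector). When `show_toxic` is False,
    quotes scoring at or above `threshold` are hidden from the persona.
    """
    if show_toxic:
        return [text for text, _ in quotes]
    return [text for text, score in quotes if score < threshold]


# Example: the on/off feature in the persona UI simply flips `show_toxic`.
quotes = [
    ("Great article, learned a lot!", 0.02),
    ("You people are idiots.", 0.91),
]
print(filter_persona_quotes(quotes, show_toxic=False))  # toxic quote hidden
print(filter_persona_quotes(quotes, show_toxic=True))   # all quotes shown
```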
To investigate the feasibility of this feature, we conducted a study with 50 professionals in the online publishing domain.
The results show varied reactions, including hate-filter critics, hate-filter advocates, and those in between. Although personal preferences play a role, the usefulness of toxicity filtering appears to be driven primarily by the work task, specifically the type and topic of stories the content creator seeks to create. We identify six use cases where a toxicity filter is beneficial.
For system development, the results imply that it is better to give content creators the option to view or hide toxic comments than to make this decision on their behalf.
We also discuss the ethical implications of removing toxic quotes from algorithmically generated personas, including the potential of biasing the user representation.
Salminen, J., Jung, S. G., and Jansen, B. J. (2022) Intentionally Biasing User Representation?: Investigating the Pros and Cons of Removing Toxic Quotes from Social Media Personas. NordiCHI 2022, 10-12 October 2022, Aarhus University, Denmark. Article No. 10.