Mirror - Algorithmic Biases
Data Visualization, Machine learning, Critical Design
Valeria Gracia Lopez
Mirror is an online platform dedicated to showing people how machines and the internet interpret our behavior, and how we might appear to them in a world surrounded by biases.
Biases in machine learning also caught our attention: machines learn from repetition, and the information they are fed comes from human knowledge. What could these biases physically look like?
This project should make people aware not only of how we judge and create stereotypes in society – and of how the internet has been „trained“ to look a certain way – but especially of how machines learn from our behavior: what we read, write, and say. The internet and the people who control it do not know who you are, yet they can misinterpret your actions online.
How can a person's data consumption be visualized in a way that creates critical awareness?
Our algorithms are full of stereotypes, genderization, and generalizations. Machines are trained on human data, and our language – the way we perceive an image or a word – is mostly not free from creating stereotypes. Machines and artificial intelligence are not free beings that can rationalize and differentiate; we might be able to train them to recognize nuances, but they remain what they are: a mirror of society.
Researched art and AI/ML, and how algorithms and stereotypes relate
Defined stereotypes and collected news articles that people would plausibly read
Built a plug-in that transforms your consumer data into visualizations
MERGING DATA WITH YOUR FACE
The news platform contains an algorithm that interprets your profile picture based on our data set. While you explore and read articles, the algorithm mashes up your profile picture with stereotypical faces based on your browser history.
It reflects stereotypical personas according to the keywords in each article.
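A minimal sketch of how such a keyword matching step could work: article keywords are scored against a small, hand-made stereotype dataset, and the best match decides which stereotypical face the profile picture drifts towards. The stereotype labels and keywords below are illustrative placeholders, not the project's actual dataset.

```python
# Hypothetical stereotype dataset: label -> associated keywords.
# These entries are invented examples, not the real training data.
STEREOTYPE_KEYWORDS = {
    "tech bro": {"startup", "crypto", "disruption", "gadgets"},
    "eco activist": {"climate", "vegan", "recycling", "protest"},
    "finance shark": {"stocks", "profit", "merger", "hedge"},
}

def match_stereotype(article_keywords):
    """Return the (label, score) of the stereotype that shares
    the most keywords with the article being read."""
    scores = {
        label: len(keywords & set(article_keywords))
        for label, keywords in STEREOTYPE_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

label, score = match_stereotype(["climate", "protest", "politics"])
print(label, score)  # eco activist 2
```

In the actual platform this match would then steer the image mash-up; here it only returns a label and an overlap score.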
Stereotypes help humans categorize their surroundings and define their identity through differences and similarities. Accordingly, we create datasets for algorithms based on human biases. For our platform, we tried to define the most familiar stereotypes of Western culture.
To keep the process transparent, a data visualization of how the algorithm works is shown. This interaction should clarify how we consume and create data and how machines can interpret it, encouraging users to reflect on the whole process as they feed the algorithm with information about their alleged interests.
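The transparency view could be sketched like this: keyword hits per stereotype are accumulated over a browsing session and rendered as simple bars, so the user can see what the algorithm currently "believes" about them. The stereotype names and counts are invented for the example.

```python
from collections import Counter

def render_profile(hits: Counter, width: int = 20) -> str:
    """Render accumulated per-stereotype keyword hits as text bars,
    scaled so the bars sum to roughly `width` characters."""
    total = sum(hits.values()) or 1
    lines = []
    for label, count in hits.most_common():
        bar = "#" * round(width * count / total)
        lines.append(f"{label:<14} {bar} {count}")
    return "\n".join(lines)

# A hypothetical browsing session's accumulated keyword hits.
session = Counter({"eco activist": 5, "tech bro": 2, "finance shark": 1})
print(render_profile(session))
```

A production version would draw this as an interactive visualization rather than text bars, but the aggregation logic would be the same.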