
Mirror - Algorithmic Biases

Data Visualization, Machine Learning, Critical Design

Year:
2019

Team:
Valeria Gracia Lopez
Lisa Collmer
Giang Pham

Tasks:
Conception, Prototyping

Mirror is an online platform dedicated to showing people how machines and the internet interpret our behavior, and how we might appear to them in a world where biases surround us.

The topic of bias in machine learning caught our attention: machines learn from repetition, and the information they are fed comes from human knowledge. What might these biases physically look like?

This project aims to make people aware not only of how we judge and create stereotypes in society, and of how the internet has been "trained" to look a certain way, but above all of how machines learn from our behavior: what we read, write, or say. The internet and the people who control it do not know who you are, yet they can misinterpret your actions online.

How can a person's data consumption be visualized in a way that creates critical awareness?

PROBLEM

Our algorithms are full of stereotypes, gendered assumptions, and generalizations. Machines are trained on human data, and our language, the way we perceive an image or a word, is rarely free of stereotyping. Machines and artificial intelligence are not autonomous beings that can reason and differentiate; we may be able to train them to recognize nuances, but they remain what they are: a mirror of society.

PROCESS

1. Researched art and AI/ML: how do algorithms and stereotypes relate?

2. Defined stereotypes and collected news articles that people would plausibly read.

3. Built a browser plug-in that transforms your consumption data into visualizations (sketched below).
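The plug-in's code is not published here, but a minimal sketch conveys the tracking half of the idea: a content script that records the headline of every article link a user clicks, so the visualization layer can read the click history later. The selector "a.article-link" and the storage key "headlines" are assumptions for illustration, not the project's actual code.

```typescript
// content-script.ts — hypothetical sketch of the plug-in's tracking layer.
// Records clicked article headlines into the extension's local storage.
document.addEventListener("click", (event) => {
  // Assumed markup: every article teaser is wrapped in <a class="article-link">.
  const link = (event.target as HTMLElement).closest("a.article-link");
  if (!link) return;

  const headline = link.textContent?.trim() ?? "";
  // Append to the stored click history (key name is an assumption).
  chrome.storage.local.get({ headlines: [] as string[] }, ({ headlines }) => {
    chrome.storage.local.set({ headlines: [...headlines, headline] });
  });
});
```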

APPROACH

Taking a critical design approach, we created a prototype of a fake news website filled with real news, where people can click and read the articles they find most interesting and watch their profile picture change according to what they read. Our dataset relates articles with clickbait headlines to pictures of stereotypes. At the start, users are informed by our terms of use, which spell out how transparently we collect data. Cookies must be accepted before continuing!
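As a rough illustration of how clicked headlines could be related to stereotype pictures, the sketch below accumulates a score per stereotype from keyword matches. The labels and keyword lists are invented examples, not the project's real data set.

```typescript
// Hypothetical keyword lists per stereotype (illustrative only).
const STEREOTYPE_KEYWORDS: Record<string, string[]> = {
  "tech enthusiast": ["startup", "crypto", "gadget"],
  "fitness junkie": ["workout", "protein", "marathon"],
  "conspiracy theorist": ["cover-up", "hoax", "secret"],
};

// Count keyword hits in the clicked headlines and return a score per stereotype.
function scoreHeadlines(headlines: string[]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const headline of headlines) {
    const text = headline.toLowerCase();
    for (const [label, keywords] of Object.entries(STEREOTYPE_KEYWORDS)) {
      for (const keyword of keywords) {
        if (text.includes(keyword)) {
          scores.set(label, (scores.get(label) ?? 0) + 1);
        }
      }
    }
  }
  return scores;
}
```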

MERGING DATA WITH YOUR FACE

The news platform contains an algorithm that interprets your profile picture based on our data set. Throughout the process of exploring and reading articles, the algorithm mashes up your profile picture with stereotypical faces based on your browser history.

It reflects stereotypical personas according to the keywords in the articles you read.
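The project's actual morphing pipeline is not documented here; a minimal canvas sketch, assuming a simple cross-fade rather than true face morphing, conveys the effect. The weight parameter would grow with a stereotype's score from the reading history.

```typescript
// Cross-fade the user's profile picture with a stereotype face on a canvas.
// weight: 0 = untouched profile picture, 1 = fully the stereotype face.
function blendFaces(
  canvas: HTMLCanvasElement,
  userFace: HTMLImageElement,
  stereotypeFace: HTMLImageElement,
  weight: number,
): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = 1;
  ctx.drawImage(userFace, 0, 0, canvas.width, canvas.height);

  // The more matching articles were read, the stronger the overlay.
  ctx.globalAlpha = Math.min(Math.max(weight, 0), 1);
  ctx.drawImage(stereotypeFace, 0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = 1;
}
```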


STEREOTYPES

Stereotypes help humans categorize their surroundings and define their identity through difference and similarity. As a result, the datasets we create for algorithms are built on human biases. For our platform, we tried to define the stereotypes most familiar in Western culture.
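One plausible shape for such a data set, extending the keyword map sketched above with the face image used for morphing (the entries are invented examples, not the project's actual stereotype definitions):

```typescript
// Hypothetical data-set entry tying a stereotype to keywords and a face image.
interface StereotypeEntry {
  label: string;      // name of the stereotype persona
  keywords: string[]; // article keywords that feed this persona's score
  faceImage: string;  // path to the face blended into the profile picture
}

const dataset: StereotypeEntry[] = [
  { label: "tech enthusiast", keywords: ["startup", "crypto"], faceImage: "faces/tech.jpg" },
  { label: "fitness junkie", keywords: ["workout", "protein"], faceImage: "faces/fitness.jpg" },
];
```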


TRANSPARENCY

To keep the process transparent, a data visualization of how the algorithm works is shown. This interaction should clarify how we consume and create data, and how machines can interpret it. It encourages users to reflect on the whole process as they feed the algorithm information about their supposed interests.
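One way such a transparency view could be built is to render the accumulated scores from scoreHeadlines above as simple labelled bars. This is a DOM sketch under assumed naming, not the project's actual visualization.

```typescript
// Render one labelled bar per stereotype, scaled to the highest score.
function renderScores(container: HTMLElement, scores: Map<string, number>): void {
  container.innerHTML = "";
  const max = Math.max(1, ...Array.from(scores.values()));

  for (const [label, score] of scores) {
    const row = document.createElement("div");
    row.textContent = `${label} (${score}) `;

    const bar = document.createElement("span");
    bar.style.display = "inline-block";
    bar.style.height = "1em";
    bar.style.background = "#444";
    bar.style.width = `${(score / max) * 200}px`;

    row.appendChild(bar);
    container.appendChild(row);
  }
}
```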