Beautify or whiten?

SOURCE: https://medium.com/codingrights/embelezar-ou-embranquecer-201fc741257b

How social media filters reinforce and reproduce racist beauty standards

By Erly Guedes*

Open Instagram, point your phone camera at your face, choose a filter and record a video or take a selfie. Have you noticed how this habit of instantly editing our own image has invaded our daily lives? Although quite common, the use of image editing tools on social networks can hide a serious issue: the reinforcement of a racist standard of beauty. Coding Rights takes advantage of Black November to propose some questions about the relationship between Instagram filters and the reproduction of racism on social media.

Some filters are created without taking into account the phenotypic traits of black people, and they radically alter those characteristics, always to bring them closer to the traits of white people: they thin the nose, shorten the lips, shift the skin color toward orange or gray, and change the eye color to green or blue.

This is because technology is not neutral. Instagram filters are built within a specific social context and by specific people, and are therefore shaped by the value judgments present in society. They are like mirrors that reflect the worldviews of their developers, and certain prejudices (known as biases) can influence how they behave on black faces. Thus, the imposition of beauty standards that take whiteness as an aesthetic ideal in social media filters is bound up with the racism at work in Brazilian society. In other words: does the filter serve to beautify or to whiten?

By continuously devaluing the bodily marks of black people, which must be hidden and retouched even digitally, society and its many technological devices, such as social networks and their filters, permanently formulate and reproduce hegemonic models of beauty based on whiteness. A recurring example is the nose being thinned when a filter is applied.

Instagram’s “Perfect” filter, which alters the texture of the skin, also modifies black facial features: it thins the nose, elongates the face and changes the proportions of the features, in addition to giving the skin an orange tone.

Currently, the most widely used platform for developing Instagram filters is Spark AR. Created by Facebook, the platform uses computer vision to create augmented reality effects, offering a gallery of templates for developers to build filters from, as well as allowing effects to be created from scratch.

This artificial intelligence tool relies on image recognition algorithms trained on a database of thousands of photos. The more images the program processes, the better it recognizes them. But what happens if this database is mostly composed of images of white people? And when the images of black people that it does contain reproduce stereotypes? In a society that has never come to terms with its colonial history, AI tools like this one learn racist patterns from the images they collect.
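To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not Spark AR’s actual pipeline, and every number in it is invented: a naive “skin detector” is calibrated on a simulated photo database that is 95% light skin tones, and its detection rate then collapses for darker tones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "training database": skin-tone luminance values (0-255),
# 95% light tones, mirroring an unbalanced collection of photos.
light = rng.normal(200, 15, 950)  # light skin tones, overrepresented
dark = rng.normal(80, 15, 50)     # dark skin tones, underrepresented
training = np.concatenate([light, dark])

# A naive detector: accept a tone if it falls within two standard
# deviations of the mean tone seen during training.
mu, sigma = training.mean(), training.std()
def detected(tone):
    return abs(tone - mu) <= 2 * sigma

# Evaluate on balanced test samples of each group.
test_light = rng.normal(200, 15, 1000)
test_dark = rng.normal(80, 15, 1000)
rate = lambda tones: np.mean([detected(t) for t in tones])
print(f"detection rate, light skin: {rate(test_light):.0%}")
print(f"detection rate, dark skin:  {rate(test_dark):.0%}")
```

The script prints close to 100% for light tones and close to 0% for dark ones. No rule in the code mentions race; the bias comes entirely from the composition of the training data.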

However, the problem is not just with Spark AR. Behind the process of creating the filters are, for the most part, white people with white worldviews who ignore the specificities of black people. And guess what: we black women exist! A filter that applies blush, for example, needs to account for different skin tones. But what we see today is developers opting for shades that only work on white skin. In other words, the universe of Instagram filters emulates the aesthetic racism that dominates the beauty and makeup industry, which leaves out a multitude of people with dark skin by failing to develop and market products suited to black skin tones.
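The blush example can also be made concrete. The sketch below is hypothetical, with invented RGB values: one fixed light-pink shade is alpha-blended over two base skin tones, and on the darker tone the overlay mostly lightens the skin instead of tinting it, whitening rather than beautifying.

```python
import numpy as np

# Hypothetical illustration: one fixed blush shade blended over
# two different base skin tones (RGB, 0-255).
blush = np.array([255, 182, 193])      # a light pink chosen for white skin
light_skin = np.array([235, 200, 180])
dark_skin = np.array([90, 60, 45])

def apply_blush(skin, alpha=0.35):
    """Naive overlay: same shade and opacity regardless of base tone."""
    return (1 - alpha) * skin + alpha * blush

for name, skin in [("light", light_skin), ("dark", dark_skin)]:
    out = apply_blush(skin)
    # How much did the overlay brighten the skin overall?
    lift = out.mean() - skin.mean()
    print(f"{name} skin: {skin} -> {out.round()} (brightness lift: +{lift:.0f})")
```

On the light tone the blend barely changes overall brightness (a lift of about +2); on the dark tone it lightens the skin by roughly +51, because the shade was chosen for white skin and applied uniformly.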

Instagram’s “Golden Hour” filter lightens the skin color and enlarges the eyes, in addition to thinning the nose.

Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League, points out that the lack of diversity is one reason for algorithmic bias in AI systems. In her TED talk, Joy raises three essential points for analyzing racist practices both in how platforms are used and in how they are designed: who codes matters, how we code matters, and why we code matters. Inspired by these points, we ask: who are the people involved in developing social media filters? What are the filters’ features based on and how are they developed, given that they carry racist biases? And finally, why were these filters built in the first place?

Apparently, there is no concern with including all people, regardless of race, in the process of building these filters. The lack of diversity is one of the reasons for this: cis, straight, white, middle- or upper-class men are the majority of people working in technology in Brazil, according to a survey by Pretalab with the consultancy ThoughtWorks.

Is it possible to beautify without whitening?

The issues raised here are far from definitive and do not allow for a closed conclusion. They serve, above all, to show a little of how racism is also present in technological devices; after all, it is structural. And there are many creative projects thinking through these questions and shaking up those structures.

As researcher Tarcízio Silva explains, the supposed neutrality of digital technologies creates a double opacity in how these tools operate: racial discrimination on platforms stems, on the one hand, from the non-recognition of racial inequality and, on the other, from the insistence on “cloaking” the social dimensions of technology. In this sense, including black people in the teams that create and develop filters, thereby sharpening the perception of the needs and complexities of the population as a whole, is one more step toward questioning and dismantling the racial stereotypes present in social media filters.

Data bias is another aggravating factor. There is also a series of efforts to make digital image banks more inclusive and diverse; these banks are often a source for the advertising market, for content production, for institutional materials and even for journalism.

Here in Brazil, Joyce Soares and Igor Muniz developed a prototype of a filter that does not hide or retouch black features and ancestry. The first step was building a dataset with enough images of black people to create a metric capable of covering many shades of black skin; videos of friends’ faces came into play. They then used programming to arrive at a base tone for a responsive filter, one that identifies the skin color of whoever is using the effect and adapts to it.
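The prototype’s code is not reproduced here, so the sketch below is only a loose, hypothetical reconstruction of the idea as described: estimate the wearer’s base tone from the face pixels, then derive the effect color relative to that tone, so the filter adapts to the skin instead of imposing one fixed shade.

```python
import numpy as np

def responsive_blush(face_pixels, alpha=0.35):
    """Tone-aware blush sketch: derive the effect from the wearer's
    own base tone rather than from one fixed shade."""
    # Estimate the base tone as the median color of the (assumed) cheek region.
    base = np.median(face_pixels.reshape(-1, 3), axis=0)
    # Derive a blush by warming the base tone: boost red, damp green/blue.
    # Multiplicative tinting roughly preserves the wearer's own luminance.
    tint = np.array([1.25, 0.85, 0.90])
    blush = np.clip(base * tint, 0, 255)
    return (1 - alpha) * face_pixels + alpha * blush

# Usage on synthetic "cheek" patches of two skin tones (invented values).
rng = np.random.default_rng(1)
for name, tone in [("light", [235, 200, 180]), ("dark", [90, 60, 45])]:
    patch = np.clip(rng.normal(tone, 8, (16, 16, 3)), 0, 255)
    out = responsive_blush(patch)
    print(f"{name}: base {np.median(patch.reshape(-1, 3), axis=0).round()} "
          f"-> brightness kept ({patch.mean():.0f} vs {out.mean():.0f})")
```

Unlike the fixed-shade version above, this overlay leaves the overall brightness of both the light and the dark patch essentially unchanged: the effect tints without whitening.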

Thus, social media filters can serve as instruments of beautification and personal expression, contributing to the affirmation of racial identities beyond whiteness. But they can also discriminate and reinforce racism. It is therefore necessary to bring diversity into both the construction of these technologies and the debate over their use.

Erly is part of the communication team at Coding Rights and is a Master’s student in the Graduate Program in Communication at Universidade Federal Fluminense, where she studies identity narratives of black women on Facebook. Her research is crossed by questions of gender, race and the body in contemporary life. She has a deep interest in issues of beauty, identities, representations, social media, algorithms, women and black feminism. She is also a member of the Center for Studies in Mass Communication and Consumption (NEMACS).