Google has revealed it has adopted “value-neutral” language in the interests of improving both selfies and mental health.
File this one under “unintended but all-too-predictable consequences of technology,” because Google’s doing this after discovering that so-called “beauty filters” offered in smartphone operating systems and apps are having negative effects.
“After conducting multiple studies across four countries and speaking with child and mental health experts from around the world, we learned that the potential for harm is real,” wrote a quartet of Google designers and strategists. “The studies showed that 80% of parents said that they’re worried about filters and two-thirds of teens have reported being bullied by peers based on how they look in their selfies.”
Google knows that selfies are big business: in a post announcing its new guidelines, the company stated: “More than 70 percent of photos taken on Androids use the front-facing camera” and revealed it has over 24 billion photos tagged as selfies in Google Photos. Quick reminder: there are eight billion humans and about three billion of them have an Android device.
Google’s response is a new set of guidelines so that apps don’t send “unintentional signals about personal worth or beauty norms.”
The Chocolate Factory’s first recommendation is making facial retouching filters opt-in, so they stay off by default.
Being careful with language is also recommended. “The language of face retouching implies improving or correcting a person’s physical appearance – which suggests that the way they actually look is bad,” the four Googlers write, calling out “enhancement,” “beautification,” and “touch-up” as words they’d rather not see in apps.
“’Beautification’ is a common name for face retouching features that unnecessarily adds a value judgement to a person’s edited image. The same can be said of terms like ‘slimming,’ which imply that one’s body needs improvement,” they write.
The authors also think developers need to think about icons. “It’s common to see sparkling design elements in face retouching, and while there’s nothing inherently wrong with enjoyable imagery, it can be harmful when connected to something as personal as one’s identity,” they suggest.
“People have the agency to get sparkly if they desire, but apps should take care when imposing it on a person.”
Google has walked the talk and made these changes in its new Pixel phones.
The move to “value-neutral” language continues a 2020 trend of tech companies taking more care about how they express themselves in public, typified by the response to Black Lives Matter protests that saw the likes of VMware, GitHub and Splunk decide to stop using terms like “master” and “slave”. Linux kernel developers adopted new guidelines for inclusive terminology and even NASA resolved to reconsider whether names like “Eskimo Nebula” are appropriate. ®