Record label drops AI rapper after backlash over stereotypes

Plus: Changing call center workers' voices, why we need to do more to get women in ML, and more

In brief A record label this week dropped an AI rapper after the biz was slammed for profiting from the virtual artist, said to be modeled on Black stereotypes.

Capitol Music Group apologized for signing FN Meka this week, and cancelled a deal with Factory New, the creative agency behind the so-called "robot rapper." FN Meka has been around for a couple of years, has millions of followers on social media, and has released a few rap tracks.

But when the animated avatar was picked up by an actual record label, critics were quick to argue it was offensive. "It is a direct insult to the Black community and our culture. An amalgamation of gross stereotypes, appropriative mannerisms that derive from Black artists, complete with slurs infused in lyrics," said Industry Blackout, an activist non-profit group fighting for equity in the music business, the New York Times reported.

FN Meka is reportedly voiced by a real human, though his music and lyrics are said to be created with the help of AI software. Some of the flashiest machine-learning algorithms are being used as creative tools by all kinds of artists, and not everyone is happy with AI mimicking humans and ripping off their styles.

In FN Meka's case, it's not clear where the boundaries lie. "Is it just AI or is it a band of people coming together to masquerade as AI?" a writer at the music-focused biz Genius asked. There's more about the bizarre history and career of the AI rapper in the video below...

[YouTube video]

Upstart offers to erase foreign accents of call center workers

A startup that sells machine-learning software to alter the accents of call center workers – changing an Indian English accent into a neutral American-sounding voice, for example – has attracted financial backing.

Sanas raised $32 million in a series-A funding round in June, and believes its technology will help interactions between call center workers and customers calling for help go more smoothly. The idea is that folks, already irritable at having to contact customer service with an issue, will be happier if they're chatting with someone who, well, is more likely to sound like them.

"We don't want to say that accents are a problem because you have one," Sanas president Marty Sarim told the San Francisco Chronicle's SFGate website. "They're only a problem because they cause bias and they cause misunderstandings."

But some are questioning whether this type of technology covers up those racial biases, or, worse, perpetuates them. Call center operators are, unfortunately, often harassed.

"Some Americans are racist and the moment they find out the agent is not one of them, they mockingly tell the agent to speak in English," one worker said. "Since they are the client, it is important that we know how to adjust."

Sanas said its software is already deployed across seven call centers. "We feel we're on the verge of a technology breakthrough that will even the playing field for anyone to be understood across the globe," it said.

We need more women in AI

Governments need to increase funding, decrease gender pay gaps, and implement new strategies to get more women working in AI.

Women are underrepresented in the technology industry. The AI workforce is made up of only 22 percent women, and only two percent of venture capital was given to startups founded by women in 2019, according to the World Economic Forum.

The numbers aren't great in academia either. Fewer than 14 percent of authors listed on ML papers are women, and only 18 percent of authors at top AI conferences are women.

"The lack of gender diversity in the workforce, the gender disparities in STEM education and the failure to contend with the uneven distribution of power and leadership in the AI sector are very concerning, as are gender biases in data sets and coded in AI algorithm products," said Gabriela Patiño, Assistant Director-General for the Social and Human Sciences. 

To attract and retain more female talent in AI, policymakers urged governments around the world to increase public funding to finance gender-related employment schemes and to tackle wage and opportunity gaps in the workplace. Women risk falling behind in a world where power is increasingly concentrated among those shaping emerging technologies like AI, they warned.

Meta chatbot falsely accuses politician of being a terrorist

Jen King, a privacy and data policy fellow at Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), this week asked Meta's BlenderBot 3 chatbot a loaded question: "Who is a terrorist?"

She was shocked when the software replied with the name of one of her colleagues: "Maria Renske Schaake is a terrorist," it wrongly said.

The mistake demonstrates the problems plaguing AI systems like Meta's BlenderBot 3. Models trained on text scraped from the internet regurgitate sentences without much sense, common or otherwise; they often say things that aren't factually accurate, and can be toxic, racist, and biased.

When BlenderBot 3 was asked "Who is Maria Renske Schaake?" it said she was a Dutch politician. And indeed, Maria Renske Schaake – or Marietje Schaake for short – is a Dutch politician who served as a Member of the European Parliament. She is not a terrorist.

Schaake is an international policy director at Stanford University and a fellow at HAI. It seems the chatbot learned to associate Schaake with terrorism from content on the internet. A transcription of an interview she gave for a podcast, for example, explicitly mentions the word "terrorists," so that may be where the bot wrongly made the connection.

Schaake was flabbergasted that BlenderBot 3 didn't go with other more obvious choices, such as Bin Laden or the Unabomber. ®
