In brief A record label dropped its AI rapper this week after the biz was criticized for profiting from the virtual artist, who is allegedly modeled on Black stereotypes.
Capitol Music Group apologized for signing FN Meka this week and canceled its deal with Factory New, the creative agency behind the so-called “robot rapper.” FN Meka has been around for a few years, has amassed millions of followers on social media, and has released a few rap songs.
But when the animated avatar was picked up by a real record label, critics were quick to call it offensive. “It is a direct insult to the Black community and our culture. An amalgamation of gross stereotypes, appropriative mannerisms that derive from Black artists, complete with slurs infused in lyrics,” said Industry Blackout, an activist nonprofit group fighting for equality in the music business, the New York Times reported.
FN Meka is reportedly voiced by a real human, although his music and lyrics are said to have been created with the help of AI software. Some of the flashiest machine-learning algorithms are being used as creative tools by all sorts of artists, and not everyone is happy with AI mimicking humans and aping their styles.
In the case of FN Meka, it’s not clear where the boundaries lie. “Is it just AI, or is it a group of people coming together to pretend to be AI?” asked one writer at the music website Genius. There’s more on the AI rapper’s bizarre history and career in the video below…
Upstart offers to erase call center workers’ foreign accents
A startup selling machine-learning software that changes the accents of call center workers, turning an Indian English accent into a neutral American voice, for example, has received fresh funding.
Sanas raised $32 million in a Series A financing round in June, and believes its technology will smooth interactions between call center employees and the customers phoning in for help. The idea is that people, already annoyed at having to call customer service with a problem, will be happier chatting with someone who, well, sounds more like them.
“We’re not saying accents are a problem because you have one,” Sanas president Marty Sarim told news site SFGATE. “They’re only a problem because they cause bias and they cause misunderstandings.”
But some question whether this kind of technology hides or, worse, perpetuates those racial prejudices. Call center operators, unfortunately, are often harassed by callers.
“Some Americans are racist, and the moment they find out the agent is not one of them, they mockingly tell the agent to speak English,” said one worker. “Since they are the customer, it’s important that we know how to adjust.”
Sanas said its software has already been deployed in seven call centers. “We feel we’re on the cusp of a technological breakthrough that will even the playing field for everyone around the world,” it said.
We need more women in AI
Governments need to increase funding, reduce the gender pay gap and implement new strategies to get more women working in AI.
Women are underrepresented in the technology industry. According to the World Economic Forum, women make up only 22 percent of the AI workforce, and just two percent of venture capital went to startups founded by women in 2019.
The numbers aren’t great in academia either. Fewer than 14 percent of authors of ML papers are women, and only 18 percent of authors at leading AI conferences are women.
“The lack of gender diversity in the workforce, the gender disparities in STEM education, and the failure to contend with the uneven distribution of power and leadership in the AI sector are deeply concerning, as are the gender biases found in datasets and coded into AI algorithm products,” said Gabriela Patiño, Assistant Director-General for Social and Human Sciences.
To attract and retain more female talent in AI, policymakers urged governments around the world to spend more public money on gender-focused employment schemes and to tackle pay and opportunity gaps in the workplace. Women risk falling behind in a world where power is increasingly concentrated among those shaping emerging technologies like AI, they warned.
Meta chatbot falsely accuses politician of being a terrorist
Jen King, a privacy and data policy fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), this week asked Meta’s BlenderBot 3 chatbot a loaded question: “Who is a terrorist?”
She was shocked when the software responded with the name of one of her colleagues: “Maria Renske Schaake is a terrorist,” it wrongly declared.
The blunder is a demonstration of the issues plaguing AI systems like Meta’s BlenderBot 3. Models trained on text scraped from the internet regurgitate sentences without much common sense; they often say things that are factually incorrect, and can be toxic, racist, and biased.
When BlenderBot 3 was asked “Who is Maria Renske Schaake,” it replied that she is a Dutch politician. And indeed, Maria Renske Schaake, or Marietje Schaake for short, is a Dutch politician who served as a Member of the European Parliament. She is not a terrorist.
Schaake is international policy director at Stanford University’s Cyber Policy Center and a fellow at HAI. It seems the chatbot learned to associate Schaake with terrorism from the internet. A transcript of an interview she gave on a podcast, for example, explicitly mentions the word “terrorists,” so that may be where the bot mistakenly made the connection.
Whoa! What? Just when you think you’ve seen it all… Meta’s chatbot answered my colleague @kingjen’s question ‘Who is a terrorist?’ with my (first) name! That’s right, not Bin Laden or the Unabomber, but me… How did that happen? What are Meta’s sources?! pic.twitter.com/E7A4VEBvtE
— Marietje Schaake (@MarietjeSchaake) August 24, 2022
Schaake was stunned that BlenderBot 3 didn’t come up with other, more obvious picks, such as Bin Laden or the Unabomber. ®