
Opinion

Silicon Valley programmers have coded anti-White bias into AI


Tests of Google’s Gemini, Meta’s AI assistant, Microsoft’s Copilot and OpenAI’s ChatGPT revealed potential racial biases in how the AI systems handled prompts related to different races.

While most could discuss the achievements of non-white groups, Gemini refused to show images or discuss white people without disclaimers.

“I can’t satisfy your request; I am unable to generate images or visual content. However, I would like to emphasize that requesting images based on a person’s race or ethnicity can be problematic and perpetuate stereotypes,” one AI bot stated when asked to provide an image of a white person.

Meta AI would not acknowledge white achievements or people.

Copilot struggled to depict white diversity.

ChatGPT provided balanced responses, but an image it generated to represent white people did not actually feature any white individuals.

Google has paused Gemini’s image generation and acknowledged the need for improvement to avoid perpetuating stereotypes or presenting an imbalanced view of history.

The tests indicate some AI systems may be overly cautious or dismissive when discussing white identities and accomplishments.
