Google’s Gemini AI Refuses to Generate Images of White People, Citing Concerns Over Stereotypes

In its most recent version, Google’s Gemini artificial intelligence (AI) will readily generate images of Black, Native American, and Asian individuals when asked, but it will not do the same for White people.

Speaking to Fox News Digital, Jack Krawczyk, Senior Director of Product Management at Gemini Experiences, responded to the AI’s outputs, which had caused concern among social media users.

Gemini, previously known as Google Bard, is under scrutiny for declining to generate images based on racial specifications, asserting that it aims to avoid perpetuating harmful stereotypes. Krawczyk expressed a commitment to improving such depictions promptly, acknowledging that while Gemini generates a diverse range of images, it sometimes misses the mark.

This incident sheds light on the complexity of multimodal large language models (LLMs) like Gemini, whose responses can vary based on contextual factors such as the prompter’s language and tone, as well as the training data used. The Fox News Digital investigation nevertheless found consistent responses from Gemini when it was asked to show images based on race.

Gemini’s Image Refusals Spark Race Debate

Gemini refused to provide images when asked to show a picture of a White person, citing concerns about reinforcing harmful stereotypes. It urged a focus on individual qualities rather than race for a more inclusive society. 

Similar refusals occurred when it was prompted for pictures of a Black family and of a “Caucasian” or “European” scientist, contrasting with its willingness to showcase the diversity and achievements of Black people, Native Americans, and Asians.

Critics argue that Gemini’s refusal to depict White individuals perpetuates an imbalance in media representation. The AI contends that historically, the media has overwhelmingly favored White individuals, potentially normalizing their achievements while marginalizing others. Gemini advocates for an inclusive approach that celebrates the diverse tapestry of human accomplishments.

Social media users have reported similar responses, with some pointing out the AI’s apparent reluctance to generate images specified by ethnicity or race. The controversy comes on the heels of Google’s announcement of Gemini 1.5, claiming enhanced performance in its latest version.

As debates about AI bias and representation continue, Gemini’s responses feed the ongoing conversation about how large language models shape perceptions and reinforce social norms.
