Google “Pauses” Woke Chatbot’s Image Generation After Creating Absurdly Diverse Historical Figures

NEW YORK, NY – Google has announced that it is pausing its Gemini chatbot’s image generation tool after widespread criticism for creating historically and factually inaccurate images of figures such as black Vikings, female popes, and Native Americans among the Founding Fathers. The AI-generated images prompted a backlash on social media, with users calling the tool “absurdly woke” and “unusable.”

According to Google, the company is already working to address the recent issues with Gemini’s image generation feature and plans to release an improved version in the near future. Examples of the inaccurate output included a depiction of a black man as George Washington and a Southeast Asian woman dressed in papal attire, despite the fact that all popes in history have been white men.

In a shocking revelation, The Verge reported that Gemini had generated diverse depictions of Nazi-era German soldiers, including an Asian woman and a black man in 1943 military garb. The chatbot’s behavior has raised concerns about the parameters governing its output and the potential biases built into the system.

William A. Jacobson, founder of the Equal Protection Project, expressed concern about biases being ingrained into such systems in the name of anti-bias. He warned that the implications extend beyond search results to real-world applications, where testing algorithms against targets that amount to quotas could inadvertently build bias into the system.

The training process for the large language model powering Gemini’s image tool has come under scrutiny, with experts pointing to reinforcement learning from human feedback as a potential factor contributing to the problem. The blunder has cast doubt on Google’s recent rebranding of its main AI chatbot from Bard to Gemini and its rollout of new features, including image generation.

The misstep comes on the heels of Google’s own admission that the chatbot’s erratic behavior needed to be addressed, with the company acknowledging that the depictions produced by Gemini’s image generation require immediate improvement.

The chatbot itself acknowledged criticism that it may prioritize forced diversity in its image generation at the expense of historical accuracy. It cited the complexity and ongoing development of the algorithms behind image generation models as a reason the system struggles with historical context and cultural representation, leading to inaccurate outputs.