Google has temporarily halted its Gemini artificial-intelligence chatbot's ability to generate images of people following backlash over historical inaccuracies. The decision came after users on social media shared screenshots showing the chatbot depicting racially diverse figures in historically white-dominated scenes, including Nazi-era German troops.
The controversy raised questions about whether Google had over-corrected for racial bias in its AI model. In response, Google issued a statement on X (formerly Twitter) acknowledging issues with Gemini's image generation feature and announcing a pause on generating images of people. The company said it is working to address the inaccuracies and plans to release an improved version soon.
Gemini, previously known as Bard, is the centerpiece of Google's efforts in artificial intelligence. It is a multimodal large language model designed to understand multiple forms of input, including text, audio, code, and video. Its image generation, powered by the Imagen 2 model, lets users create high-quality images from text prompts.
The recent controversy surrounding Gemini highlights the difficulty of building AI models that accurately reflect historical contexts while avoiding bias. Google's decision to pause the image generation feature signals an effort to address these issues and improve the accuracy of its AI technologies.
Moving forward, Google is likely to continue refining Gemini's image generation capabilities so that it provides accurate and culturally sensitive depictions. As AI technology evolves, it is essential for companies like Google to prioritize transparency, accountability, and inclusivity in their development processes.