Google has announced that it will temporarily stop its Gemini chatbot from generating images of people, after facing backlash for producing historically inaccurate and insensitive depictions.
Gemini’s image tool sparks controversy
Gemini is Google’s flagship suite of generative AI models, apps and services, launched earlier this month. It lets users create content such as text, audio, video and images using natural language prompts.
One of Gemini’s features is the ability to generate images of people based on user input. For example, a user can ask Gemini to “generate an image of a medieval knight” or “generate a glamour shot of a Brazilian couple”.
However, some users have discovered that Gemini’s image tool does not always produce accurate or appropriate results, especially when dealing with historical or cultural contexts. For instance, Gemini has generated images of:
- Nazis and America’s Founding Fathers as people of color
- Black Vikings and Native American popes
- Female NHL players and male ballerinas
These images have been widely shared and criticized on social media platforms such as X (formerly Twitter), with some users accusing Google of pushing a “woke” agenda or of being insensitive to historical realities and cultural identities.
Google responds to the criticism
Google has acknowledged the issues with Gemini’s image tool and said that it is working to improve the accuracy and diversity of its depictions. In a post on X, Google said:
We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.
Google also explained that Gemini’s image tool is designed to generate a wide range of people, reflecting the diversity of its global user base. However, it admitted that the tool is “missing the mark” in some cases and needs to better account for historical and cultural context.
Some experts have suggested that Gemini’s image tool may suffer from low-quality training data or a flawed algorithm, either of which could lead to inaccurate or biased outputs. Others have pointed to the broader ethical and social implications of using AI to create or alter images of people, especially without their consent or awareness.
Gemini’s image tool is not the first AI controversy for Google
This is not the first time that Google has faced controversy or criticism for its AI products or services. In the past, Google has been accused of:
- Misclassifying Black people as gorillas in its image recognition tool
- Spreading misinformation or propaganda through its Bard chatbot
- Violating privacy or security through its Duplex voice assistant
- Promoting harmful or offensive content through its YouTube recommendation system
Google has often responded to these issues by apologizing for, fixing or removing the problematic features, or by rebranding or relaunching its AI products. However, some critics argue that Google needs to do more to ensure the ethical, responsible and transparent use of AI, and to prevent or mitigate the potential harms that AI may pose to society.