Google suspends Gemini's generation of images of people after criticism over ethnic depictions

Google has decided to temporarily suspend the feature of its new artificial intelligence model, Gemini, that generates images of people, following controversy over its depiction of ethnically diverse German World War II soldiers and Vikings. The technology company announced that it will adjust the model in response to the criticism and that, while it does so, the generation of images of people will be paused.

Notably, this move comes after social media users shared examples of Gemini-generated images depicting historical figures, such as popes and the founding fathers of the United States, in a variety of ethnicities and genders.

Problems with accuracy and bias in the generation of images of people

Although Google did not mention specific images in its statement, public criticism focused on examples circulating on various platforms that highlighted problems of historical accuracy and ethnic bias. The model's tendency to insert ethnic and racial diversity into depictions where it did not belong raised concerns about Gemini's ability to portray historical figures accurately. One former Google employee even stated that it was “embarrassingly hard to get Google Gemini to acknowledge that white people exist.”

In this regard, Jack Krawczyk, senior director of Google's Gemini team, acknowledged the need for adjustments to the model's image generation feature. He stated that the team is working to improve Gemini's representations and noted that AI image generation should reflect the global diversity of its users. Krawczyk explained that Gemini's AI generally produces a wide range of people, but that in the specific case of historical representations, the model has not met the desired standards.

In an additional statement, Krawczyk reiterated Google's commitment to its artificial intelligence principles, stressing the importance of reflecting diversity in image generation tools. He also said the team will keep working to handle more nuanced historical contexts and to adapt to specific user requests.

Persistent challenges

Finally, it is worth remembering that coverage of bias in artificial intelligence has revealed numerous examples of negative impacts, especially on people of color. Last year, a Washington Post investigation pointed to bias and sexism in image generators, underscoring the need to address these issues in the development of advanced technologies.

Meanwhile, Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey acknowledged that mitigating bias in generative AI is a persistent challenge, but expressed optimism that results will keep improving over time as more effective approaches and solutions are implemented.
