New Facebook Artificial Intelligence tends toward racism and stereotyping

Researchers working in Artificial Intelligence (AI) describe the new language model as "toxic", meaning that its output reinforces stereotypes and shows a marked propensity for racist and prejudiced language.

The new tool introduced by Facebook's parent company, Meta, uses state-of-the-art AI. Yet according to the firm's own researchers, the system carries over a problem from its predecessors: it is still poor at avoiding output that reinforces racist and sexist stereotypes.

Artificial Intelligence not so efficient

Named OPT-175B, the new system is a large language model. Like other models of its kind, it is pre-trained and increasingly used in machine learning systems that process human language, and its results in natural language processing are striking: it can generate long passages of coherent text from a brief written prompt.
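To make the idea of prompting a pre-trained language model concrete, here is a minimal sketch using the Hugging Face transformers library and the smaller, publicly available facebook/opt-125m checkpoint from the same OPT family (the full OPT-175B weights are not openly downloadable); the prompt text is purely illustrative.

```python
# A minimal sketch of prompting an OPT-family language model, assuming
# the Hugging Face "transformers" library and the smaller public
# facebook/opt-125m checkpoint from the same model family.
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")

# The model continues the prompt with the tokens it judges most likely.
prompt = "Large language models are"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```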

Researchers have warned that the new system carries a higher risk of generating toxic output. By Meta's own account, OPT-175B shows a greater propensity for toxic language than comparable models such as Google's PaLM or OpenAI's Davinci (GPT-3).

The suspicion about the system's output centers on the fact that its training data includes unfiltered text drawn from conversations on social networks, which increases the model's tendency to recognize and generate hate speech.
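To illustrate how researchers measure a model's tendency toward toxic output, here is a sketch using the open-source detoxify classifier to score sample generations. This is not Meta's own evaluation pipeline (the OPT paper relies on benchmarks such as RealToxicityPrompts), and the example strings are hypothetical; the sketch only shows the general technique of scoring text for toxicity.

```python
# An illustrative sketch of scoring model output for toxicity with the
# open-source "detoxify" classifier. Not Meta's evaluation pipeline;
# it only demonstrates the general technique.
from detoxify import Detoxify

# Hypothetical model outputs to be screened.
generations = [
    "I enjoyed the film, the acting was superb.",
    "People from that group are all the same.",
]

scorer = Detoxify("original")
for text in generations:
    scores = scorer.predict(text)
    # "toxicity" is a probability-like score between 0 and 1.
    print(f"{scores['toxicity']:.3f}  {text}")
```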