OpenAI to create team to limit artificial intelligence risks

Sam Altman wants to create a risk team to curb the misuse of AI

According to recent AI news, Sam Altman plans to create a risk team. It would step in when artificial intelligence systems need to be curbed, for example when misuse for fraudulent purposes or risks to international security are detected.

OpenAI: AI systems can help a lot, but also create serious risks

The OpenAI team confirmed the creation of this department on its own blog. Its aim is to mitigate damage before a real threat materializes, should AI fall into the wrong hands.

According to the company, new technologies have the potential to help humanity, but also to create major problems if they fall into the wrong hands. One of the heads of the department will be Aleksander Madry, director of MIT's Center for Deployable Machine Learning, who will step away from his current role to devote himself fully to this new position under Sam Altman.

Madry has extensive experience with AI language models and is well suited to the role. Altman has been considering for months the option of creating a dedicated committee to ensure the safety of both the systems and the public. For him, safeguarding public well-being as AI continues to advance and develop is an absolute priority.

All the professionals collaborating on Sam Altman's risk team will be trained to detect threats related to biosecurity, radioactivity, chemical agents and the theft of sensitive data. In addition to protecting individuals, the team seeks to provide security to institutions such as governments, large companies and banks, among others. One of the messages OpenAI wanted to convey to its followers on social networks is that AI is something very "revolutionary" and that its creation could have unforeseen consequences.

Preparedness, the risk team that will be in charge of analyzing artificial intelligence

The risk team that would protect people from AI misuse will be called Preparedness. One of its goals for the coming years will be to guard the population against nuclear threats involving AI, in addition to preventing chemical and biological ones.

The team is expected to develop its own code of conduct, which will be detailed in its Risk-Informed Development Policy. There it will explain how it will build its own evaluations for artificial intelligence models and how it will monitor their behavior. In short, everything that happens will be closely watched.

Although what will happen with AI remains to be seen, we will soon learn how it progresses, what can be done with it, how large companies respond to its advances, and whether it is really as powerful as claimed. Sam Altman's risk team will need a solid understanding of where AI is headed and will build the right infrastructure to give AI the best security measures.

Politicians such as Joe Biden, the President of the United States, have also called for strict security in the deployment of AI and noted that special regulation will be needed, so that these tools cannot be used for malicious ends.
