Google has an expert group to prevent AI robots from harming the world


If you follow news about Google, this story will likely interest you: the company has created a robotics safety team to prevent AI-powered robots from carrying out any activity that could harm humanity.

Google has already had to deal with EU regulation

The European Union was the first to propose a law on artificial intelligence, aiming to regulate activities related to autonomous technology before any conflict arises that could endanger services to the public. In theory, this law also covers artificial intelligence systems such as ChatGPT and Google Bard.

Under this law, companies will have to disclose whether their content was created with the help of an AI, and detail whether the copyright (where applicable) of any original work used as inspiration was respected. Google, for its part, wanted to create its own “constitution” to govern all of its devices that work using AI.

How does Google’s security team work?

In its own constitution, Google set out guidelines focused on safety: the language model that drives its AI robots is instructed so that the robots avoid involving electrical appliances, dangerous objects, animals, or humans in their tasks.
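As a rough illustration of how constitution-style rules might screen a robot's tasks in software, here is a minimal sketch. The category names and the is_task_allowed function are hypothetical assumptions for illustration, not DeepMind's actual code.

```python
# Hypothetical sketch of constitution-style task screening.
# NOT DeepMind's real implementation; names and categories are assumed.

# Object categories the "constitution" forbids the robot from involving in tasks.
FORBIDDEN_CATEGORIES = {"electrical appliance", "dangerous object", "animal", "human"}

def is_task_allowed(detected_object_categories: list[str]) -> bool:
    """Reject any proposed task whose scene involves a forbidden category."""
    return not any(cat in FORBIDDEN_CATEGORIES for cat in detected_object_categories)

# Example: a task proposed by the planning model is screened before execution.
objects_in_scene = ["electrical appliance", "table"]
print(is_task_allowed(objects_in_scene))  # False -> the task is skipped
```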

DeepMind, the Google team behind this safety work, also programmed the robots to recognize when they are exceeding a normal force threshold in their actions and to stop if they go over the acceptable limit. In addition to these safety features, the engineers added a shutdown switch that operators can press in case of danger.
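The force-limit and shutdown-switch behavior could look something like the sketch below; the threshold value and the sensor and kill-switch functions are assumptions made for illustration, not DeepMind's real interface.

```python
# Hypothetical sketch of a force-limit check combined with an operator kill switch.
# The threshold and the stand-in functions are illustrative assumptions.

import random

FORCE_LIMIT_NEWTONS = 20.0  # assumed safety threshold, not a published value

def read_applied_force() -> float:
    """Stand-in for a real force/torque sensor reading."""
    return random.uniform(0.0, 30.0)

def emergency_stop_pressed() -> bool:
    """Stand-in for polling a physical operator shutdown switch."""
    return False

def may_continue_moving() -> bool:
    """Return True if the robot may keep moving, False to halt immediately."""
    if emergency_stop_pressed():
        return False  # the human operator overrides everything
    if read_applied_force() > FORCE_LIMIT_NEWTONS:
        return False  # stop if the applied force exceeds the safe limit
    return True

if not may_continue_moving():
    print("Robot halted: force limit exceeded or shutdown switch pressed.")
```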

Can intelligent robots pose a threat?

As with anything, emergencies can arise, and safety teams exist to stay alert and prevent incidents. In 2022, a robot went so far as to break a seven-year-old boy’s finger, and a young worker died after being struck by a robot at a Volkswagen plant.

After being struck in the chest, he was crushed against a hard metal plate. Another technician, Wanda Holbrook, was killed by a robot in her workspace when a malfunction caused the machine to grab her by the head and crush her on an assembly line.

There is still a long way to go before we know whether AI-controlled robots will benefit society or create problems like the ones mentioned above. However, thanks to safety groups such as Google’s, these systems will be better vetted and greater harms can be avoided.
