Stem Learning

The New Laws of Robotics

Way back in the early 1940s, the great sci-fi writer Isaac Asimov drew up a set of principles for the development of advanced robotic systems. Asimov realised that future AI devices, and their designers, might need a little help staying on the straight and narrow.

Asimov’s Laws

Asimov’s three laws, listed below, still carry influence in science and technology circles.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Almost 80 years later, legal academic and artificial intelligence expert Professor Frank Pasquale has proposed four additional principles.

According to Professor Pasquale, although Asimov’s ideas were well founded, they rested on technological assumptions that no longer hold, and on the notion that innovation is always for the good of humanity. Asimov’s laws concerned how robots should help people and not harm them, whereas Pasquale speaks about how to democratise the direction of technology itself.

New Laws of Robotics

There is an important distinction between Artificial Intelligence (AI) and Intelligence Augmentation (IA). The goal of AI is often to replace human beings, creating, for instance, a robot doctor or a robot teacher. The goal of intelligence augmentation, by contrast, is to help human beings. The mix of robotic and human interaction is too important to be determined by corporations alone. A vision of AI as replacement points towards a jobless future in many areas, whereas an emphasis on intelligence augmentation should increase both productivity and the value of labour.

New Law 1: AI Should Complement Professionals

Some areas of the economy, manufacturing in particular, will continue to see rapid automation. Jobs that involve judgement and deliberation over choices, however, should be kept for humans. In professional fields, people need to be able to explain options to their clients, rather than having some large tech firm assume what is best by automating the result.

New Law 2: Robotic Systems and AI Should Not Counterfeit Humanity

Devices should not be developed to mimic human emotions. The future of human–computer interaction will involve tough judgement calls about how seamless personal interactions with robots should become. There is also a need for more discipline in the language used around robotics.

New Law 3: Robotic Systems and AI Should Not Intensify Zero-Sum Arms Races

The uncurbed development of smart robotic weapons systems risks spiralling out of control. There is every reason to expect an arms race over the development and deployment of AI weaponry. Societies need to recognise that such a race is destructive and provides no real human services; it is, in effect, an investment in destruction. Hyper-competitiveness of this kind leads to technological dominance and monopolisation.

New Law 4: Robotic Systems and AI Must Always Indicate Identity

The need of the hour is greater transparency, to increase accountability and discourage inappropriate and illegal activity by both the owners and the developers of technology. Just as all cars carry licence plates, robots should have some form of licence registered against them. No robotic system should ever be made fully autonomous, as the risks are far too high. People can be punished, but how would a robot or an AI be punished? This question is central to the future of legal enforcement.

Professor Pasquale fully recognises that widespread application of his new laws would considerably slow the development of certain technologies, but argues that this would be for the public good. There have been many cases in which technological advances led to troubling and harmful consequences. It is time to get ahead of things.

There is a need to think deeply about directing innovation, because "innovation" itself cannot simply be a catchphrase used to fend off regulation.