The word “robot” marked its 100th anniversary in 2020. Since its first use in the play “R.U.R.” (Rossum’s Universal Robots) by the Czech writer Karel Čapek, the technology has ignited discussions around sensing technologies, control algorithms, social interactions, economic impacts, and safety implications. Today, artificial intelligence, automation, and robots are no longer things to be achieved in the distant future. They have already entered our daily lives and will become even more common soon. As advances in these technologies continue to surprise us, recent research has begun to explore the idea of developing robots that are better than humans. Tesla has also announced a robot project, Optimus, capable of mobility, cognition, and task performance, and able to work closely with humans.
These concepts rest on the psychological premise that other living things have thoughts and emotions which affect their behavior. In robotics, this would mean that humanoid robots, with the help of A.I. (Artificial Intelligence), could comprehend how humans, animals, and other machines “think” and make decisions through self-reflection and determination, and then use that information to make their own decisions. Essentially, machines would have to grasp and process the concept of “mind,” the influence of fluctuating emotions on decision making, and a litany of other psychological concepts in real time, creating a two-way relationship between people and artificial intelligence.
Figure: A robot and human working together at an assembly cell in a factory context
So a future can be envisioned in which humans and robots work together as a team. But when can we expect to see a humanoid robot walking down the street?
To answer this and other pressing questions about technology and the future, we briefly interviewed Ali Ahmad Malik, an industrial scientist specializing in human-robot teams in factories, who works as an Expert for Robotics & Automation at Siemens Renewable Energy in Denmark.
The search for a “universal algorithm for learning and acting in any environment” (Russell and Norvig 27) isn’t new, but time hasn’t eased the difficulty of creating a machine with a full set of cognitive abilities. Artificial general intelligence (AGI) has long been the muse of dystopian science fiction, in which super-intelligent robots overrun humanity, but experts agree it’s not something we need to worry about anytime soon.
Ali Ahmad answered that we cannot make an accurate time prediction, but it will take quite some years (not less than a decade or two) before these advanced interaction approaches become an off-the-shelf solution for fluid interaction between humans and robots. Current practical means of controlling or interacting with robots include smartphones, smartwatches, gesture tracking, and voice commands. However, to make these machines part of everyday life, they will need to be intelligent enough to learn from their environment and respond to every changing situation in real time. For such fluid interaction, the robots need to:
▪ understand what their human colleague is thinking,
▪ anticipate the colleague’s actions, and
▪ adjust their own actions accordingly.
These capabilities require that robots have even more intelligent sensing and cognitive abilities.
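The three capabilities above amount to a sense–anticipate–adjust loop in the robot’s control logic. As a purely illustrative sketch (the task sequence, function names, and support rules here are hypothetical simplifications, not from the interview or any product):

```python
# Minimal illustrative sense-anticipate-adjust loop for a collaborative robot.
# All task names and rules below are invented for illustration only.

def predict_next_action(observed_actions):
    """Anticipate the human colleague's next step from a fixed, toy task sequence."""
    task_sequence = ["pick_part", "place_part", "fasten_screw", "inspect"]
    if not observed_actions:
        return task_sequence[0]
    last = observed_actions[-1]
    if last in task_sequence:
        idx = task_sequence.index(last)
        return task_sequence[(idx + 1) % len(task_sequence)]
    return "wait"  # unrecognized action: do not guess

def choose_robot_action(predicted_human_action):
    """Adjust the robot's own action to complement the predicted human step."""
    support = {
        "pick_part": "present_tray",   # bring parts within reach
        "place_part": "hold_fixture",  # stabilize the workpiece
        "fasten_screw": "feed_screw",  # supply the next screw
        "inspect": "retract_arm",      # clear the line of sight
    }
    return support.get(predicted_human_action, "idle")

# One pass of the loop: sense -> anticipate -> adjust.
observed = ["pick_part", "place_part"]
prediction = predict_next_action(observed)
action = choose_robot_action(prediction)
```

A real system would, of course, replace the hard-coded sequence with learned models fed by the sensing technologies discussed above; the point here is only the structure of the loop.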
Figure: A robot offering cola to a colleague during a lunch break.
Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay on what happens as machines grow smarter: “Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”
Either way, it indicates that in the near future humans will be surrounded by algorithms, intelligent systems, and robots. So, will humanity end up in a technological prison? We asked Ali Ahmad.
He opined, “‘Prison’ sounds like a negative word; I am rather very positive about this anticipated future. Humans today are also in a technology ‘prison,’ because our lives are surrounded by a web of technologies that we can’t stay away from. Everything can have adverse effects, and so can these technologies, but largely they are making our lives simpler, better, and easier.”
Interestingly, he further added that in the future the human race will live a better life, with fewer diseases and higher standards of living, while the shift toward a technological prison will not be a one-step progression but a continuum of gradual evolution. In that case, thanks to human flexibility and adaptability, the change will not disturb humans; rather, they will keep absorbing it bit by bit.
Already today, studies describe how social media has changed the way humans meet their future partners. So, can we expect humans to choose a robot as a life partner?
Giving a positive indication, Ali explained that future humans will live with AI that will affect not only their jobs and skill requirements but also their social lives. As of today, we can see how social media has changed the way humans meet their future partners. But if the future partner is an intelligent machine instead of a human, that could disturb the biological cycle and human behavior, so it must be carefully planned for.
Figure: Robots manufacturing robots at a factory in Odense, Denmark.
As the image above shows, robots can take part in building other robots, which suggests they could multiply easily and, in theory, assemble a vast army against humanity. To address this concern, we asked Ali whether he agrees that robots and AI are more dangerous than nuclear weapons, and whether a self-taught robot could manipulate data and start a nuclear war.
Recalling the history of robotics, Ali told us that Isaac Asimov coined the Three Laws of Robotics some seventy-five years ago, and they still make sense. The first law states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But now that we are talking about robots learning from humans, they will certainly be able to learn actions from humans and replicate them. As the algorithms of artificial intelligence grow more sophisticated, we can anticipate a future in which androids also start learning humans’ bad habits and actions. Slowly they will match humans and then surpass them. In that case, they could start deceiving their masters, the humans, and that could be a catastrophe. A fictional depiction of this scenario appeared in a short film about a future factory, where robots take control and convert an automobile manufacturing facility into one that manufactures killer robots.
BigDog is a robot project by Boston Dynamics. The robot was intended for use in warfare, but its high noise level proved a serious drawback. In the future, however, a large army of such military robots could be developed, able to carry weapons and shoot targets with high precision. The positive aspect is that these machines would feel no desire for revenge and could therefore be better soldiers; but who will control countries’ build-up of these devices, and will they be counted as active-duty soldiers? These are open questions, and the answers depend on the safety, sensing, and interaction technologies developed in the next few years for collaborating with robots.
Several approaches have been identified to make human-robot collaboration safe for the humans involved. The ISO/TS 15066 technical specification forms the basis of safety requirements when implementing robots in close proximity to humans. However, safety standards need to be continuously updated as the technologies advance. It is also worth noting that there is not yet a standard for the ethical and safe use of A.I. in factories.
It is expected that in the future, robots will become social entities. So, do we need laws for the safe collaborations of humans and robots?
Ali responded that there is an ongoing discussion on making human-robot collaboration safe, but it is limited to protecting humans from robots. What about protecting robots from humans? Even the classic laws of robotics address only the first case. If robots become social entities, there must also be laws governing the rights, safety, and social well-being of intelligent robots.
One approach to mitigating the possible challenges of the future is to keep humans an integral part of the automation loop. Instead of building fully automated systems, systems should be designed around humans and robots collaborating, with tasks distributed according to the best skills of each. In such a system, designed at the intersection of human skills and robotic capabilities, humans can make the critical decisions and contribute empathy and emotion (uniquely human characteristics) as needed.
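Distributing tasks according to the best skills of each agent can be pictured as a simple assignment rule. The sketch below is hypothetical: the task names and skill scores are invented, and a real allocation would weigh safety, ergonomics, and cost, not a single score.

```python
# Toy skill-based task allocation between a human and a robot.
# Tasks and scores are invented for illustration only.

def allocate(tasks, human_skills, robot_skills):
    """Assign each task to whichever agent has the higher skill score.

    Ties go to the human, keeping a person in the loop by default."""
    assignment = {}
    for task in tasks:
        h = human_skills.get(task, 0.0)
        r = robot_skills.get(task, 0.0)
        assignment[task] = "human" if h >= r else "robot"
    return assignment

human_skills = {"quality_judgment": 0.9, "heavy_lifting": 0.2, "precise_drilling": 0.4}
robot_skills = {"quality_judgment": 0.3, "heavy_lifting": 0.95, "precise_drilling": 0.9}
tasks = ["quality_judgment", "heavy_lifting", "precise_drilling"]

plan = allocate(tasks, human_skills, robot_skills)
```

Defaulting ties to the human reflects the design principle of the paragraph above: critical or ambiguous decisions stay with the person.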