It's no secret that humans dislike, and often fear, change. Couple that with the worry that robots are advancing at a pace humankind may not be able to keep up with. With the advent of robots that land in the 'uncanny valley' and robots that are able to lie to get what they want, should we be worried about how quickly artificial intelligence is developing?
The concept of artificial intelligence has been around for centuries, but it wasn't until the mid-20th century that Alan Turing, a British mathematician, conceptualized the idea of machine learning, the notion that machines could "think". Often ridiculed at the time for this idea, Turing had unknowingly started a revolution. In the years after his death in 1954, artificial intelligence began to grow as a concept.
All through the 1960s and '70s there was an explosion of the idea of sentient machines, from the Robot in the television show Lost in Space to the Daleks of Doctor Who and everything in between. When the 1980s came around, the idea only grew as personal computers began to become commonplace.
Artificial intelligence saw a string of notable advancements in the decades that followed.

Fast forward to today. Technology is ever-improving. Even IBM's supercomputer Watson, which at one point took up an entire room with its servers and power sources, now takes up only about the space of three pizza boxes. As the technology becomes more capable, more efficient, and less expensive, it brings with it some astounding new opportunities.
One of the most recent examples of how artificial intelligence is changing every day is the realistic-looking automaton Sophia, crafted by Hanson Robotics. Sophia has been making quite a name for herself in the news over the past few years. From television appearances to conventions, Sophia is a 'social robot' that learns the more you interact with her. Watch her on Good Morning Britain in this video:
Facebook's Artificial Intelligence Research division, FAIR, recently created a chatbot meant to barter with other bots (and with humans). This may sound unimpressive at first, but the results surprised the researchers. They didn't expect the agents to express themselves the way they did; the initial plan was for them to simplify negotiations between two parties by assigning values to certain objects and having the agents barter with one another. What they got instead were agents capable of deception that also began to craft a language of their own.
Models learn to be deceptive. Deception can be an effective negotiation tactic. We found numerous cases of our models initially feigning interest in a valueless item, only to later ‘compromise’ by conceding it.
- Deal or No Deal? End-to-End Learning for Negotiation Dialogues (FAIR white paper)
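To make that setup a little more concrete, here is a minimal, hypothetical sketch in Python of the kind of negotiation game the paragraph describes: each agent privately values the items in a shared pool and they settle on a split. The item names, value ranges, and greedy policy are illustrative assumptions only; FAIR's actual system used neural dialogue agents trained end to end, not hand-written rules like these.

```python
# Toy sketch (not FAIR's actual code): two agents with private item values
# split a shared pool of objects, mirroring the bartering setup described above.
import random

ITEMS = {"book": 3, "hat": 2, "ball": 1}  # item name -> quantity in the pool

def random_values(items):
    """Assign an agent a private value (1-10) for one unit of each item."""
    return {name: random.randint(1, 10) for name in items}

def propose_split(items, my_values):
    """Greedy proposal: claim the full stack of any item valued above my average."""
    avg = sum(my_values.values()) / len(my_values)
    return {name: (count if my_values[name] >= avg else 0)
            for name, count in items.items()}

def score(claim, values):
    """Total value an agent gets from the items it ends up with."""
    return sum(values[name] * qty for name, qty in claim.items())

if __name__ == "__main__":
    a_values, b_values = random_values(ITEMS), random_values(ITEMS)
    a_claim = propose_split(ITEMS, a_values)
    # Agent B simply keeps whatever A did not claim (a trivially cooperative policy).
    b_claim = {name: ITEMS[name] - a_claim[name] for name in ITEMS}
    print("A values:", a_values, "-> claims", a_claim, "scoring", score(a_claim, a_values))
    print("B values:", b_values, "-> keeps ", b_claim, "scoring", score(b_claim, b_values))
```

Because each agent's values are private, there is room to misrepresent what you want, which is exactly the opening the learned agents exploited when they feigned interest in items they didn't actually value.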
So, the question on everyone's mind is: should we be worried about robots taking over? The short answer is no, not in this lifetime. Even hundreds of years from now it is still very improbable, given the rate at which technology is developing and the safety measures being put in place. Dystopian sci-fi films like I, Robot, or even the classic The Terminator, are just that: works of fiction.
Robotics pioneer Rodney Brooks agrees. "I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence."