3 Ways to Teach Robots Human Skills

Surprising as it may sound, the question of robots' moral standards has been with us for centuries, present well before the first functioning robot prototype was constructed. If you have seen the old Czech movie The Emperor's Baker and the Baker's Emperor (or its American version, The Emperor and the Golem), or if you know the story of the Golem, you have already encountered the first attempts to make a robot morally competent. The clay robot was activated by inserting a magical stone (the shem) into its forehead. From today's perspective, more interesting than the Golem's clay is the fact that the robot back then acted as good or bad by the standards of the man who activated it. Unfortunately, the story does not tell us how this was achieved. It is all the more puzzling because the Golem (and other early robots) had no senses with which to observe its surroundings or adapt to its environment.

Fast forward: humankind has moved from legends to reality. We can build (much more elaborate) robots, and we can even equip them with all the senses a regular human possesses (sight, touch, hearing, smell and taste). However, on the question of how to make a robot good or evil by design, we are not much further along than in the Golem's times. So what methods have we humans developed to teach a machine what is morally acceptable and what is not?

Some readers might be disappointed to find out that in programming robots we try to rely on the learning methods we use to teach ourselves. The supporters of these approaches reason that if we want to create robots compatible with humankind, we should give them an education comparable to the one we receive as humans. Some disillusionment comes from opponents, who remind us that we hope for robots to be better and fairer than the bar we, humans, set for them; and let's be sincere, our education system still produces an abundance of cheating, violence and intolerance. Striving for higher moral standards in robots thus does not seem an unreasonable request in that context. Be it reality or just our dream, these are the three ways in which we now try to educate machines:

I. Mistakes as a path to success

Central Europeans have the right to be proud that one of the ways humans have picked to teach robots started in Slovakia. Marek Rosa, the Slovak founder of the gaming start-up Keen Software House, decided to devote his full focus to artificial intelligence in a company named GoodAI. Marek's approach uses a mentor to build the robot's thinking habits. Quite literally, the robot's own mistakes help it get better through loops of tasks served to it by the mentor. The robot tries all possible solutions to a problem, and the paths leading to the desired result are stored in its permanent memory as useful concepts. The mentor's role is to keep feeding the robot ever more complicated tasks as it masters the simpler ones. When the robot faces conditions matching an already solved task from the past, it acts to deliver the desired result.

This approach probably follows the pattern of human learning most closely: first we learn to recognize digits, then to subtract and multiply, and only later to solve sets of equations or to describe a curve in space-time. The name GoodAI was chosen to indicate that artificial intelligence trained this way will not be thrown off balance by new circumstances. The robot simply selects the solution that minimizes the damage, and if it faces the new phenomenon more often, it improves its method over time towards the ideal resolution of the problem.
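For the technically curious, here is a minimal sketch of what such a mentor loop might look like in Python. GoodAI's real system is of course far richer; the Task class, the random trial-and-error search and the dictionary standing in for "permanent memory" below are all illustrative assumptions, not Marek Rosa's actual code:

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    conditions: str      # what the robot observes
    goal: str            # the solution the mentor accepts as correct
    difficulty: int

def learn_by_mistakes(tasks, actions, max_trials=1000):
    """Serve tasks from simplest to hardest; store solutions that worked."""
    memory = {}                                     # conditions -> solution
    for task in sorted(tasks, key=lambda t: t.difficulty):
        if task.conditions in memory:               # conditions seen before?
            continue                                # act from memory, no trials
        for _ in range(max_trials):                 # trial-and-error loop
            solution = random.choice(actions)       # try a possible solution
            if solution == task.goal:               # the mentor confirms success
                memory[task.conditions] = solution  # store the useful concept
                break
    return memory

# The mentor's curriculum: a simple task first, a harder one after it.
tasks = [Task("door closed", "open door", difficulty=1),
         Task("door closed, hands full", "put box down, open door", difficulty=2)]
actions = ["push wall", "open door", "put box down, open door"]
print(learn_by_mistakes(tasks, actions))
```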

II. When sugar succumbs to the whip

A quite different approach has been chosen by Ron Arkin, an American professor of robotics at Georgia Tech. Drawing on the experience he gained programming robots for the American military, in the classic sugar-or-whip choice he leaves the caloric option aside. His approach builds on simulating emotions in robots. And what emotions they are. Arkin lets the robot decide, and after the decision he simulates joy or shame in the robot's system (by assigning black or red points to its solution). So if the robot hits a barrier and tries to smash through it by force, the teacher stimulates a feeling of shame in the robot's memory over the damage caused to the wall. The next time the robot comes up against a wall, it refrains from the solutions it felt ashamed of in the past and prefers the solutions it was praised for. This approach is essential because robots quickly learn to avoid unacceptable mistakes. In real life, such robots will be less blunt in a "surprising, never experienced" scenario than the ones trained by the first method.
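Again, a minimal sketch may help. Here the "points" become positive and negative scores attached to each situation-action pair; the class name, the scoring scheme and the scalar values are illustrative assumptions, not Arkin's actual ethical architecture:

```python
from collections import defaultdict

class EmotionalLearner:
    """Prefers actions it was praised for; avoids ones it was shamed for."""
    def __init__(self, actions):
        self.actions = actions
        self.feeling = defaultdict(float)     # (situation, action) -> score

    def decide(self, situation):
        # Pick the action with the best accumulated "emotional" score.
        return max(self.actions, key=lambda a: self.feeling[(situation, a)])

    def praise(self, situation, action, points=1.0):    # simulated joy
        self.feeling[(situation, action)] += points

    def shame(self, situation, action, points=1.0):     # simulated shame
        self.feeling[(situation, action)] -= points

# The robot once tried to smash through a wall; the teacher shamed it.
robot = EmotionalLearner(["smash through", "go around"])
robot.shame("wall ahead", "smash through", points=5.0)
print(robot.decide("wall ahead"))   # -> "go around"
```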

III. Read your robot a fairy tale before bedtime

The third approach relies on the way we humans develop moral standards in early childhood: fairy tales and stories. Mark Riedl, also from Georgia Tech, agrees with the GoodAI approach, but he reasons that we do not have enough time to teach a robot every tiny bit of intelligence through a plethora of trials and failures.

Therefore, Riedl suggests that the robot "read" a great number of stories and analyze human thinking and acting into cause-and-effect pairs. If the robot identifies a recurring pattern while reading the stories, it stores it in memory and tries to validate or disprove this rule in the next stories it reads. Already from the legendary movie Short Circuit ("Number Five is alive", see video) we know that robots can read enormously fast; hence this way of learning can progress much faster than the other methods, which depend on human feedback. In this way the robot can infer from innocent stories that when humans walk into a restaurant, they sit down and wait for the waitress to take their food orders. Do you find this trivial? Well, then consider that a robot would be perplexed as to why hungry humans do not storm into the restaurant kitchen and cook something for themselves, as they do at home. The advantage of the "fairy tale" approach is that it can train even complex behaviors that would be very complicated to break down into the try-and-fail attempts used by Marek Rosa.
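A minimal sketch of this story-mining idea, with heavy simplifications: stories become ordered lists of events, and a "rule" is any consecutive pair of events that repeats across stories. Riedl's actual work (the Quixote system) builds full plot graphs and reward signals, so everything below is an illustrative assumption:

```python
from collections import Counter
from itertools import pairwise   # Python 3.10+

def mine_rules(stories, min_support=2):
    """Count consecutive event pairs; keep those repeating across stories."""
    pair_counts = Counter()
    for events in stories:
        pair_counts.update(pairwise(events))
    return {pair for pair, n in pair_counts.items() if n >= min_support}

stories = [
    ["enter restaurant", "sit down", "wait for waitress", "order food"],
    ["enter restaurant", "sit down", "wait for waitress", "order drinks"],
    ["enter kitchen at home", "cook food", "eat"],
]

rules = mine_rules(stories)
# "enter restaurant -> sit down" repeats, so the robot learns that entering
# a restaurant is followed by sitting down, not by cooking in the kitchen.
print(("enter restaurant", "sit down") in rules)   # -> True
```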

Together or against each other?

So, what do all three methods have in common? The moral training of robots cannot rely on preprogrammed routines. Even if we took the effort to rewrite all our moral standards into chains of "If X happens, then do Y", a robot educated by them would still be paralyzed when new circumstances arise. Robots trained this way would also grow rigid in their standards over time, fully unfit for the changing dynamics of human norms. Let's not forget that not so long ago women did not have the right to vote, and it was an owner's legal right to beat his slave in the street. Proper training must allow the robot to learn new societal norms as it encounters them, the same way we pick up new customs upon arriving in a foreign culture: at the beginning we are a bit cautious and reserved, but after a few days we slowly learn not to behave like an elephant in a china shop.

A robot's training also has several advantages over human education. Firstly, once one robot learns all the needed skills, all its subsequent copies get them right away, from the moment of construction. What is more, state authorities can demand that all human-facing robots in a given country share common moral standards and be obliged to stick to them. Something that would so often be needed in our human lives as well. But that is a different fairy tale to read …


Published on 21 January 2018.