A little over a year ago, researchers at Bielefeld University demonstrated that software they had developed gave a walking robot a simple form of consciousness, allowing it to overcome obstacles while traversing a course.
Over the last few months, they’ve built upon this achievement, developing a software architecture based on artificial neural networks that will allow the bot to see itself as others see it; that is, programming that makes the robot self-aware.
But first, some background: As you can see in the picture above, the robot, affectionately referred to as “Hector” by the researchers, is modelled after a stick insect. In 2014, the group demonstrated that the bot was able to use its specially written software to recognize obstacles and adjust its walking path while getting from Point A to Point B.
Since the program has proven effective thus far, Hector’s software will now be tested in a computer simulation to see whether it can be expanded upon.
“What works in the computer simulation must then, in a second phase, be transferred over to the robot and tested on it,” explains Dr. Holk Cruse, professor at the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University. He and his colleague Dr. Malte Schilling will investigate to what extent various higher-level mental states can be built into the robot.
Specifically, they are looking for “emergent” abilities: capabilities that were not explicitly programmed in, but that arise on their own from the interaction of the existing software.
For the last few months, Hector has been a reactive robot; that is, it reacts to stimuli in its surroundings. By way of the software program “Walknet”, the bot walks in an insect-like manner. The addition of another program, called “Navinet”, gives the bot the ability to make the decisions necessary to find the best path to a distant target.
A third program the duo wants to try out is called “reaCog”. It is activated in instances where neither of the other programs can solve the problem in front of Hector. This new software allows the robot to simulate “imagined behavior” that could be used to figure out how best to solve the problem: it looks for new solutions and evaluates whether the suggested action makes sense.
The ability to perform imagined actions is a main characteristic of simple consciousness.
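To make the division of labor concrete, here is a minimal Python sketch of the three-layer control flow described above. The function names, the sensor fields, and the toy body model are illustrative stand-ins invented for this example; they do not reproduce the actual Walknet, Navinet, or reaCog code.

# Hypothetical sketch of the layered control flow described above.
# Function names and sensor fields are assumptions for illustration;
# they do not reproduce the real Walknet/Navinet/reaCog software.

def reactive_walk(sensors):
    """Stand-in for Walknet: insect-like stepping and local obstacle handling."""
    if not sensors["blocked"]:
        return "step-forward"
    return None  # a situation the reactive layer cannot handle

def navigate(sensors, goal):
    """Stand-in for Navinet: pick the next move toward a distant target."""
    if sensors["path_known"]:
        return f"head-toward-{goal}"
    return None

def imagine_and_evaluate(sensors, body_model):
    """Stand-in for reaCog: internally simulate candidate actions, keep one that makes sense."""
    for action in ("lift-leg-higher", "step-over", "back-up-and-turn"):
        predicted_outcome = body_model(action, sensors)   # "imagined behavior"
        if predicted_outcome == "ok":                     # evaluation step
            return action
    return None

def toy_body_model(action, sensors):
    # Placeholder internal simulation: assume stepping over clears the obstacle.
    return "ok" if action == "step-over" else "collision"

def control_step(sensors, goal):
    # Try the layers in order; the imagination stage only runs when the
    # reactive and navigation layers both fail, as the article describes.
    return (reactive_walk(sensors)
            or navigate(sensors, goal)
            or imagine_and_evaluate(sensors, toy_body_model)
            or "halt")

print(control_step({"blocked": True, "path_known": False}, "target"))  # -> step-over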
What’s interesting to note is that Hector was specially designed so that its system could eventually adopt a number of higher-level mental states. “Intentions, for instance, can be found in the system,” explains Schilling. It is these “inner mental states” that make goal-directed behavior possible.
The researchers have also identified how properties of emotions may show up in the bot’s system. “Emotions can be read from behavior. For example, a person who is happy takes more risks and makes decisions faster than someone who is anxious,” says Cruse. This type of behavior could be implemented in the control model reaCog. “Depending on its inner mental state, the system may adopt quick, but risky solutions, and at other times, it may take its time to search for a safer solution.”
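As a rough illustration of how such a mood-dependent trade-off between speed and risk could be expressed, consider the following Python sketch. The mood scale, the thresholds, and the candidate actions are invented for the example and are not taken from the CITEC control model.

# Illustrative sketch (not the CITEC code): an internal "mood" value shifts
# how long the controller searches and how much risk it is willing to accept.

def choose_action(candidates, mood):
    """candidates: list of (action, estimated_reward, estimated_risk) tuples.
    mood: 0.0 = anxious ... 1.0 = happy (assumed scale)."""
    risk_tolerance = 0.2 + 0.6 * mood               # happier -> accepts riskier options
    options_examined = 2 + int(6 * (1 - mood))      # anxious -> takes time, searches longer

    best = None
    for action, reward, risk in candidates[:options_examined]:
        if risk > risk_tolerance:
            continue                                 # too risky for the current mood
        if best is None or reward > best[1]:
            best = (action, reward)
    return best[0] if best else "search-for-safer-option"

options = [("jump-gap", 0.9, 0.7), ("long-detour", 0.4, 0.1), ("probe-slowly", 0.5, 0.3)]
print(choose_action(options, mood=0.9))   # "happy": quick, riskier choice -> jump-gap
print(choose_action(options, mood=0.1))   # "anxious": slower, safer choice -> long-detour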
Cruse and Schilling are relying on psychological and neurobiological definitions to examine which forms of consciousness are present in Hector. “A human possesses reflexive consciousness when he not only can perceive what he experiences, but also has the ability to experience that he is experiencing something,” explains Cruse. “Reflexive consciousness thus exists if a human or a technical system can see itself ‘from outside of itself,’ so to speak.”
In the duo’s latest published research, Cruse and Schilling write that they’ve identified a way in which reflexive consciousness could emerge.
“With the new software, Hector could observe its inner mental state – to a certain extent, its moods – and direct its actions using this information,” says Schilling. “What makes this unique, however, is that with our software expansion, the basic faculties are prepared so that Hector may also be able to assess the mental state of others. It may be able to sense other people's intentions or expectations and act accordingly.”
Dr. Cruse explains further: “The robot may then be able to ‘think’: What does this subject expect from me? And then it can orient its actions accordingly.”
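One way to picture this, purely as a guess at the pattern rather than the published software, is a controller that applies the same simple state model in two directions: reading its own internal state, and attributing a state to another agent from that agent's observed behavior. In the Python sketch below, the state names, the observation format, and the inference rule are all assumptions.

# Purely illustrative: one simple state model used in two directions,
# (1) reading the robot's own internal state and (2) attributing a state
# to another agent from observed behavior. Not the actual CITEC software.

def behavior_given_state(state):
    """Shared model: what behavior a given internal state tends to produce (assumed mapping)."""
    return {"confident": "moves-fast", "hesitant": "moves-slowly"}[state]

def observe_own_state(internal):
    """Self-observation: report the robot's own current state."""
    return "confident" if internal["energy"] > 0.5 and not internal["stuck"] else "hesitant"

def infer_others_state(observed_behavior):
    """Apply the shared model in reverse to guess another agent's state."""
    for state in ("confident", "hesitant"):
        if behavior_given_state(state) == observed_behavior:
            return state
    return "unknown"

own = observe_own_state({"energy": 0.8, "stuck": False})
other = infer_others_state("moves-slowly")
print(own, other)   # confident hesitant
# The robot could then adapt its actions, e.g. slow down to match a hesitant partner.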