
AI develops its own secret language, including the ability to create words that define concepts like ‘where’

OpenAI set up a virtual environment for AI agents to interact with each other and then set them tasks that could only be accomplished by developing an ability to communicate transactional instructions



By Brian Santo, contributing writer

First opposable thumbs, then “Jeopardy!,” and now this. The number of distinctions that Homo sapiens can boast about has been whittled yet again — we’re no longer unique in having developed sophisticated language now that artificial intelligence (AI) researchers have spurred software agents to create their own language, one that includes words standing for abstractions.

OpenAI created a virtual environment in which AI agents could interact and then assigned them tasks that could be accomplished only by developing an ability to communicate transactional instructions; for example, inducing another agent to go to a specific landmark within the environment. The agents responded as hoped, creating words used to relay instructions to achieve the goals that the researchers set for them.

But that’s not all. The agents also behaved unexpectedly. For example, they spontaneously generated complex language capabilities, including the ability to create words that define concepts, such as where virtual landmarks were located in relation to each other within their virtual environment (e.g., “top-most” or “left-most”).

These agents may also be unique in another way: they could be the first AIs to exhibit a category of thinking referred to as embodied cognition.

Explaining the distinctions between cognition theories risks disappearing down a rabbit hole riddled with warrens of neurobiology and metaphysics, but suffice it to say that most theories consider language to be a construct for conveying thought — thought is the root progenitor of language. Embodied cognition goes another step. It posits that thought must first derive from experience, and experience, by definition, must be based on sensory input. Sensation is, therefore, a critical predicate for thought and the real root of language. Sensation assumes two things: a defined entity to perform the sensing and an environment to be sensed.

These are precisely the conditions that OpenAI created in its experiment. Its agents were given virtual identities (a red dot, a blue dot), thus becoming defined entities. They were placed in a virtual environment (a plane) that included landmarks (a green circle, a blue square) — entities that could be identified or “sensed.”

The agents were given different properties. Sometimes they were able to move, and sometimes they were immobile but could “look” to detect and identify a landmark in their lines of “sight.” The agents could pass along information, usually when they touched.
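To make the setup concrete, here is a minimal sketch of how such a world might be represented in code. OpenAI’s actual implementation is not shown here; the class names, fields, and methods below are hypothetical stand-ins for the agents and landmarks the experiment describes.

```python
# Minimal sketch of the kind of world described above. All names, fields,
# and methods here are hypothetical stand-ins, not OpenAI's actual code.
from dataclasses import dataclass, field

@dataclass
class Landmark:
    name: str                 # e.g., "green_circle" or "blue_square"
    position: tuple           # (x, y) coordinates on the plane

@dataclass
class Agent:
    name: str                 # e.g., "red_dot" or "blue_dot"
    position: tuple
    can_move: bool = True     # some agents were immobile
    can_see: bool = True      # "blind" agents had to rely on others
    inbox: list = field(default_factory=list)  # symbols heard from others

    def look(self, landmarks):
        """Return the landmarks this agent can 'sense', if sighted."""
        return list(landmarks) if self.can_see else []

    def tell(self, other, symbol):
        """Pass a discrete symbol to another agent."""
        other.inbox.append(symbol)
```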

The researchers set up all experiments as cooperative endeavors and used positive reinforcement techniques to encourage the development of communications capabilities and promote new communications tactics.
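The exact reward that the researchers used isn’t reproduced here; as a rough, hypothetical stand-in, a cooperative reward might look something like the following, in which every agent earns the same score and that score improves only as all agents reach their assigned goals.

```python
import math

def shared_reward(agents, goals):
    """Hypothetical cooperative reward, building on the Agent/Landmark
    sketch above: every agent receives the same score, which rises only
    as ALL agents approach their assigned goal landmarks. With a shared
    score, telling a teammate where its goal is becomes worthwhile."""
    total = 0.0
    for agent, goal in zip(agents, goals):
        dx = agent.position[0] - goal.position[0]
        dy = agent.position[1] - goal.position[1]
        total -= math.hypot(dx, dy)  # closer to goal => less negative
    return total                     # one number shared by every agent
```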

Some agents that were sighted but lacked mobility figured out how to “point” other agents to a landmark goal. When the researchers added “blind” agents, other agents developed the capability to “push” the blind agent to its goal.


AI agent signaling location of goal to a separate agent. Image Credit: OpenAI.

All the while, the agents continued to devise new language capabilities to accomplish the goals that the OpenAI researchers set for them.

“Our approach yields agents that invent a (simple!) language which is grounded and compositional,” the researchers wrote in a blog post. The authors did not use the phrase “embodied cognition,” but “grounded” and “compositional” are terms commonly associated with the theory.

The blog explained the terms this way: “Grounded means that words in a language are tied to something directly experienced by a speaker in their environment; for example, a speaker forming an association between the word ‘tree’ and images or experiences of trees. Compositional means that speakers can assemble multiple words into a sentence to represent a specific idea, such as getting another agent to go to a specific location.”
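Translated into a toy example (using invented, human-readable tokens in place of the symbols the agents actually learned), grounding and compositionality might be caricatured like this:

```python
# Purely illustrative: the experiment's agents invented their own abstract
# symbols; these human-readable tokens are stand-ins for them.
VOCAB = {"GOTO", "RED_DOT", "BLUE_DOT", "GREEN_CIRCLE", "BLUE_SQUARE"}

def utter(action, listener, target):
    """Compose a 'sentence' from single grounded words, e.g. an
    instruction for another agent to travel to a specific landmark."""
    sentence = (action, listener, target)
    assert all(word in VOCAB for word in sentence), "ungrounded word"
    return sentence

# Grounded words, composed: tell the blue dot to go to the green circle.
message = utter("GOTO", "BLUE_DOT", "GREEN_CIRCLE")
```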

The researchers said that they intend to continue experimenting, hoping to encourage AIs to “create an expressive language that contains concepts beyond the basic verbs and nouns that evolved here.” They also expect to train AIs to be able to translate whatever language they develop into English.

OpenAI is a non-profit funded by Microsoft and Amazon Web Services. Tesla Motors CEO Elon Musk and PayPal co-founder Peter Thiel have both made personal investments in OpenAI.

The whole point of developing agents in the first place is to let them autonomously perform transactions on behalf of people. The new research reopens questions about what would happen if some of them were to spontaneously start communicating. Would humans notice? Would humans be able to detect the language, let alone decipher it? And would agents spontaneously develop something that looks like intent? If it’s not technically intent, would it even matter? A rarely emphasized aspect of the Turing test is that intention (or will, if you will) is irrelevant; only results matter. If enough humans think that an AI is human, then it might as well be.

We don’t want to bring up Skynet every single time an AI does something interesting, but maybe AIs could meet us halfway and stop doing things that remind us of Skynet. That Musk and Thiel are involved isn’t entirely comforting. Musk is in a rush to get vehicles with AI-based autonomous navigation on the road, whereas other automakers seem to be proceeding toward the same goal with a great deal more caution. Thiel, meanwhile, is a co-founder of the secretive big data services company Palantir Technologies.
