HRED Developing Robot Intelligence to Respond to Voice Commands

By Caroline Rees / 14 Jan 2013


Unmanned systems have begun to have a significant impact on warfare. Unmanned drones provide sustained surveillance and swift, precise attacks on high-value targets, while small robots are being used for counter-IED missions. These systems are generally remotely piloted, reliant upon near-continuous control by a human operator, and vulnerable to breakdowns in communication links.

The future for unmanned systems lies in the development of highly capable systems that have a set of intelligence-based capabilities sufficient to enable the teaming of autonomous systems with Soldiers. To act as teammates, robotic systems will need to reason about their missions, move through the world in a tactically correct way, observe salient events in the world around them, communicate efficiently with Soldiers and other autonomous systems and effectively perform a variety of mission tasks.

Researchers from the U.S. Army Research Laboratory (ARL) Human Research and Engineering Directorate (HRED) are developing robot intelligence that will enable robots to successfully navigate in their environment when given a voice command by a human.

The Symbolic and Sub-Symbolic Robotics Intelligence Control System (SS-RICS), which was developed by HRED in cooperation with Towson State University in 2004, combines symbolic and sub-symbolic representations of knowledge into a unified control structure. The system is a goal-oriented production system, based loosely on two cognitive architectures: the Adaptive Character of Thought-Rational (ACT-R) and Soar, a cognitive architecture from the University of Michigan.

The goal is to develop a system capable of performing a wide variety of autonomous behaviors under a variety of battlefield conditions.

“We have found that in order to simulate complex cognition on a robot, many aspects of cognition (long term memory and perception) needed to be in place before any generalized intelligent behavior can be produced,” said Troy Kelley, cognitive robotics team leader, HRED. “In working with ACT-R, we found that it was a good instantiation of working memory, but that we needed to add other aspects of cognition including long term memory and perception to have a complete cognitive system.”

Cognition arises from a collection of different algorithms, each with different functionality, which together produce the integrated process of cognition. This is also known as a functionalist representation. HRED is developing SS-RICS as a modular system, a collection of algorithms in which each group has different responsibilities for the functioning of the overall system. The important component is the interaction, or interplay, among these different algorithms, which leads to an integrated cognitive system.
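The functionalist idea above can be illustrated with a minimal sketch: independent modules that read from and write to a shared state, with behavior emerging from their interplay rather than from any single module. The module names, state fields, and the `"doorway"` rule below are all hypothetical, not part of SS-RICS itself.

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    percepts: list = field(default_factory=list)
    memory: dict = field(default_factory=dict)
    goal: str = "idle"

class Module:
    def step(self, state: SharedState) -> None:
        raise NotImplementedError

class Perception(Module):
    def __init__(self, sensor_feed):
        self.sensor_feed = sensor_feed
    def step(self, state):
        # Push the next raw sensor reading into the shared state.
        state.percepts.append(next(self.sensor_feed))

class Memory(Module):
    def step(self, state):
        # Record how often each percept has been seen, then clear the buffer.
        for p in state.percepts:
            state.memory[p] = state.memory.get(p, 0) + 1
        state.percepts.clear()

class GoalManager(Module):
    def step(self, state):
        # A production-style rule: react to a salient remembered percept.
        if state.memory.get("doorway", 0) > 0:
            state.goal = "approach_doorway"

def run(modules, state, cycles):
    # Each cognitive cycle gives every module one turn on the shared state.
    for _ in range(cycles):
        for m in modules:
            m.step(state)
    return state

state = run(
    [Perception(iter(["wall", "doorway", "wall"])), Memory(), GoalManager()],
    SharedState(),
    cycles=3,
)
print(state.goal)  # "approach_doorway": no single module decided this alone
```

The point of the sketch is the shared-state coupling: each module is simple, and the goal-directed behavior only appears once perception, memory, and goal management interact.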

“We are not necessarily attempting to produce a neurological representation of the individual components of the brain (thalamus, amygdala),” said Kelley. “The basic idea is that we are trying to use psychological theory to augment robotics development, especially in areas of learning and memory.”

One example is getting a robot to learn what a hallway or a door is. The robots are exposed to a variety of different hallways and doors, and specific features are extracted to form a general rule for recognizing each, though the rule must remain flexible.

“For example, in a foreign country you may see blankets or gates used as a door,” said Kelley. “A human is born with a lot of low level stuff that a robot would have to be programmed for – it’s tough to get a robot to think like a person.”

The three functional components that HRED is developing to program the robots are memory, language and perception (such as color recognition). HRED has been concentrating on implementations of human memory as a way of reducing the computational load faced by autonomous systems.

For example, it is understood from psychology experiments that humans load elements from long term memory into working memory when they are given a problem solving task. Once long term memories are accessed, humans are then able to concentrate on a specific task. This separation of long term memory from working memory increases computational efficiency, because only the knowledge related to a specific task is searched during problem solving.

This implementation can be replicated on an autonomous system to help reduce the computational load. Other human memory implementations for autonomous systems would include memory decay (forgetting unimportant information) and associative learning (things that happen together get remembered together).
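The three memory principles described above can be sketched together: task-relevant items are loaded from a large long-term store into a small working-memory set, activations decay over time (forgetting), and items that occur together become associated. This is a hypothetical illustration of the psychological principles, not the SS-RICS implementation; the class, parameter names, and decay constant are all assumptions.

```python
import math

class LongTermMemory:
    def __init__(self, decay_rate=0.1):
        self.decay_rate = decay_rate
        self.activation = {}    # item -> base activation level
        self.associations = {}  # (item_a, item_b) -> link strength

    def encode(self, items):
        # Strengthen each item, and link co-occurring items together
        # (associative learning: seen together, remembered together).
        for item in items:
            self.activation[item] = self.activation.get(item, 0.0) + 1.0
        for a in items:
            for b in items:
                if a != b:
                    key = (a, b)
                    self.associations[key] = self.associations.get(key, 0.0) + 1.0

    def decayed(self, item, elapsed):
        # Exponential decay: unrehearsed items fade from memory.
        return self.activation.get(item, 0.0) * math.exp(-self.decay_rate * elapsed)

    def retrieve(self, cue, elapsed, threshold=0.1):
        # Load only cue-associated items whose decayed activation clears
        # the threshold: the small working-memory set that problem
        # solving actually has to search.
        return {
            b for (a, b), strength in self.associations.items()
            if a == cue and self.decayed(b, elapsed) > threshold
        }

ltm = LongTermMemory()
ltm.encode(["doorway", "hinge", "handle"])   # one experience
ltm.encode(["hallway", "corridor"])          # an unrelated experience
working_memory = ltm.retrieve("doorway", elapsed=2.0)
print(working_memory)  # only door-related items; hallway knowledge is never searched
```

The computational saving is in `retrieve`: problem solving operates on the returned set, whose size depends on the cue and on decay, not on the size of the whole long-term store.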

Kelley and his team have traveled numerous times to Fort Indiantown Gap in Grantville, Pa., in support of the Robotics Collaborative Technology Alliance (RCTA). At the MOUT site, which is used to train Soldiers in conducting military operations on urban terrain, Kelley has worked to improve indoor navigation for autonomous systems.

“Typical indoor environments do not have reliable access to GPS information and autonomous systems cannot use this information for navigation,” said Kelley.

Kelley has worked to take a more human-based approach by using landmark-based navigation.

“Humans use landmarks and dead reckoning to navigate in unfamiliar indoor environments,” said Kelley. “We are working to develop an autonomous implementation of human-based landmark navigation to help robots get around indoors.”
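The combination of dead reckoning and landmarks that Kelley describes can be sketched as follows: the robot integrates odometry, which drifts over time, and snaps its position estimate back whenever it recognizes a landmark whose location is stored on its map. Everything here, the landmark map, class, and measurements, is an illustrative assumption, not ARL's implementation.

```python
import math

# Assumed map of known landmark positions (x, y) in meters.
landmark_map = {"doorway_A": (5.0, 0.0)}

class DeadReckoner:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0

    def move(self, distance, turn=0.0):
        # Integrate odometry; real wheel encoders would add drift here.
        self.heading += turn
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def observe_landmark(self, name, range_, bearing):
        # Correct the drifted estimate using a recognized landmark:
        # robot position = landmark position minus the measured offset.
        lx, ly = landmark_map[name]
        angle = self.heading + bearing
        self.x = lx - range_ * math.cos(angle)
        self.y = ly - range_ * math.sin(angle)

nav = DeadReckoner()
nav.move(4.7)                                # odometry claims 4.7 m traveled...
nav.observe_landmark("doorway_A", 1.0, 0.0)  # ...but the doorway is seen 1 m ahead
print(round(nav.x, 2))  # 4.0: the landmark fix overrides the drifted odometry
```

Between landmark sightings the robot trusts dead reckoning; each sighting resets the accumulated error, which is why the approach works indoors where GPS is unavailable.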

Throughout the world, robots are being developed and used for many different purposes, such as in the private sector for health care, manufacturing and work in dangerous or remote areas where humans are at risk. Additional work is being done in law enforcement for bomb disposal, hostage situations and autonomous surveillance of high-value areas.

“I know the Japanese have been attempting to develop robots to be as realistic as possible. However, I have seen research that shows that people sometimes find it difficult to interact with a robot that looks human-like because they find it uncomfortable or ‘creepy.’ Instead people would rather interact with a robot that looks like a robot – they find this more comforting for some reason,” said Kelley. “This could be an issue going forward if robots are expected to interact with non-combatants at check points or in hostage situations,” he added.

“In many ways, what Troy and his team are working on is a much more difficult and needed area for the military than what the private sector is working on,” said Dr. Pamela A. Savage-Knepshield, chief of the Human Factors Integration Division within HRED. “What Troy is doing will make it easier for our Soldiers to communicate and partner with robots to accomplish dangerous missions.”

Source: U.S. Army Research Laboratory
