U.S. Army researchers have developed natural language understanding technology designed to improve how soldiers and robots communicate and perform tasks in tactical environments.
The research was presented at the 14th International Conference on Computational Semantics (IWCS 2021) where it received the Outstanding Paper Award. The paper represents a body of research spanning over a decade.
According to Army researcher Dr. Claire Bonial of the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory (ARL), “the research sets out to develop a Natural Language Understanding (NLU) pipeline for robots that can be easily ported over to any computational system or agent and incrementally tames the variation that we see in natural language.”
This means that regardless of how a soldier chooses to express themselves to the robot, the underlying intent of that language is understood and can be acted on, given both the current conversational and environmental or situational context.
To do this, the NLU pipeline first automatically parses the input language into Abstract Meaning Representation (AMR), which captures the basic meaning of the content of the language. It then converts and augments the AMR into Dialogue-AMR, which captures additional elements of meaning needed for two-way human-robot dialogue, such as what the person is trying to do with the utterance in the conversational context: for example, give a command, ask a question, or state a fact about the environment.
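To illustrate this two-stage design, the sketch below shows how such a pipeline might be structured in Python. The function names, graph format, and speech-act heuristic are hypothetical stand-ins assumed for the example, not ARL's actual implementation; a real system would use a trained AMR parser and conversational context rather than the toy logic shown here.

```python
# Hypothetical sketch of a two-stage NLU pipeline: text -> AMR -> Dialogue-AMR.
# All names and structures are illustrative assumptions, not ARL's system.
from dataclasses import dataclass

@dataclass
class AMRGraph:
    """Minimal stand-in for an AMR graph in PENMAN-style notation."""
    penman: str

@dataclass
class DialogueAMR:
    """AMR augmented with the speech act performed by the utterance."""
    speech_act: str          # e.g. "command", "question", "assertion"
    content: AMRGraph

def parse_to_amr(utterance: str) -> AMRGraph:
    # A real pipeline would invoke a trained AMR parser here; a fixed
    # graph is returned purely for demonstration.
    return AMRGraph("(m / move-01 :ARG0 (r / robot) :ARG2 (d / door))")

def classify_speech_act(utterance: str) -> str:
    # Toy heuristic; the actual system draws on conversational context.
    return "question" if utterance.rstrip().endswith("?") else "command"

def to_dialogue_amr(utterance: str) -> DialogueAMR:
    """Convert an utterance to Dialogue-AMR: parse, then augment."""
    amr = parse_to_amr(utterance)
    return DialogueAMR(speech_act=classify_speech_act(utterance), content=amr)

print(to_dialogue_amr("Move to the door."))
```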
“This is unique in comparison to other NLU components within dialogue systems, many of which forego any kind of semantic representation in favor of a deep-learning approach,” Bonial said.
The research also includes work to represent language expressed through semi-idiomatic constructions and, most recently, to extend AMR so that it better captures two-way dialogue, specifically the task-oriented, situated dialogue between people and robots represented in Dialogue-AMR.
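To make the distinction concrete, the schematic below contrasts a plain AMR graph with a Dialogue-AMR-style wrapper for the same command. The specific frame and role labels are approximations for illustration only; the published Dialogue-AMR annotation scheme defines its own inventory of speech-act frames.

```python
# Illustrative PENMAN-style graphs (schematic; labels are approximations).
# Plain AMR captures only the propositional content of "Move to the door."
amr = """
(m / move-01
   :ARG0 (r / robot)
   :ARG2 (d / door))
"""
# Dialogue-AMR wraps that content graph with a speech-act frame recording
# what the speaker is doing with the utterance (here: issuing a command).
dialogue_amr = """
(c / command-SA
   :ARG0 (s / soldier)
   :ARG2 (r / robot)
   :ARG1 (m / move-01
            :ARG0 r
            :ARG2 (d / door)))
"""
```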
Discussing the team’s next steps, which involve joint work with Dr. Thomas Howard of the University of Rochester to connect the output semantic representation with a system that grounds pieces of the representation to both entities in the environment and the executable behaviors of the robot, Bonial said:
“We are optimistic that the deeper semantic representation will provide the structure needed for superior grounding of the language in both the conversational and physical environment, such that robots can communicate and act more as teammates to soldiers, as opposed to tools.”
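A grounding stage of the kind described might look like the following sketch, which resolves concepts from the semantic graph against entities perceived in the environment and dispatches to executable robot behaviors. Every name here (the world model, the behavior library, the helper function) is a hypothetical assumption for illustration; the joint work with the University of Rochester uses its own grounding models.

```python
# Hypothetical grounding step: map Dialogue-AMR concepts to perceived
# entities and executable robot behaviors. Illustrative only.
from typing import Callable, Dict

# Assumed world model: concept labels -> object IDs from perception.
perceived_entities = {"door": "obj_17", "robot": "self"}

# Assumed behavior library: AMR predicate -> callable robot skill.
def move_to(target_id: str) -> None:
    print(f"executing: navigate to {target_id}")

behaviors: Dict[str, Callable[[str], None]] = {"move-01": move_to}

def ground_and_execute(predicate: str, target_concept: str) -> None:
    """Resolve graph pieces against the environment, then act."""
    target = perceived_entities.get(target_concept)
    skill = behaviors.get(predicate)
    if target is None or skill is None:
        # Failed grounding would trigger a clarification turn in dialogue.
        print("clarification needed: could not ground the request")
        return
    skill(target)

ground_and_execute("move-01", "door")   # -> executing: navigate to obj_17
```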
Find AI & Deep Learning technologies for Drones & Unmanned Systems >>