Humans in the Loop assist Robots in Navigating their Surroundings

Computer scientists have devised a method that lets humans help complex robots efficiently “see” their environments and perform tasks. Robots, like humans, are unable to see through walls. They occasionally require a little assistance to get where they’re going.

Rice University engineers have developed a method that allows humans to assist robots in “seeing” their environments and performing tasks. The Bayesian Learning IN the Dark strategy, or BLIND for short, is a novel solution to the long-standing problem of motion planning for robots working in environments where not everything is clearly visible all of the time.

The peer-reviewed study was presented in late May at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation by computer scientists Lydia Kavraki and Vaibhav Unhelkar, as well as co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice’s George R. Brown School of Engineering.

According to the study, the algorithm developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to “augment robot perception and, importantly, prevent the execution of unsafe motion.”

To do so, they combined Bayesian inverse reinforcement learning (in which a system learns from constantly updated information and experience) with established motion planning techniques to help robots with “high degrees of freedom,” or a lot of moving parts. To put BLIND to the test, the Rice lab directed a Fetch robot, an articulated arm with seven joints, to grab a small cylinder from one table and move it to another, but it had to do so while maneuvering past a barrier.
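To make that pairing concrete, the sketch below keeps a Bayesian posterior over a single human-preference parameter (here, how much obstacle clearance matters) and uses it to rank candidate paths from a stand-in planner. Everything in it, including the clearance feature, the Bernoulli likelihood, and the discrete weight grid, is an assumption made for illustration rather than the inverse-reinforcement-learning model actually used in the study.

```python
import math

# Illustrative sketch only: a discrete Bayesian posterior over one
# human-preference parameter (how much low obstacle clearance should be
# penalized), updated from binary approve/reject feedback and used to
# rank candidate paths. This is NOT the BLIND implementation.

WEIGHTS = [0.0, 0.5, 1.0, 2.0, 4.0]                 # candidate preference weights
posterior = {w: 1.0 / len(WEIGHTS) for w in WEIGHTS}  # uniform prior

def path_cost(path_length, min_clearance, weight):
    """Cost = path length plus a weighted penalty for low obstacle clearance."""
    return path_length + weight * max(0.0, 0.5 - min_clearance)

def p_approve(path_length, min_clearance, weight):
    """Bernoulli likelihood that a human approves a path under a given weight."""
    return 1.0 / (1.0 + math.exp(path_cost(path_length, min_clearance, weight) - 2.0))

def update_posterior(path_length, min_clearance, approved):
    """Bayes' rule with the Bernoulli likelihood over the discrete weight grid."""
    for w in WEIGHTS:
        like = p_approve(path_length, min_clearance, w)
        posterior[w] *= like if approved else (1.0 - like)
    total = sum(posterior.values())
    for w in WEIGHTS:
        posterior[w] /= total

def rank_candidates(candidates):
    """Order (length, clearance) candidates by expected cost under the posterior."""
    def expected_cost(c):
        return sum(posterior[w] * path_cost(c[0], c[1], w) for w in WEIGHTS)
    return sorted(candidates, key=expected_cost)

# Two rounds of hypothetical feedback, then rank three made-up candidate paths.
update_posterior(path_length=1.2, min_clearance=0.1, approved=False)
update_posterior(path_length=1.8, min_clearance=0.6, approved=True)
print(rank_candidates([(1.2, 0.1), (1.8, 0.6), (1.5, 0.3)]))
```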

“Instructions to the robot become more complicated as the number of joints increases,” Quintero-Peña explained. “You can just say, ‘Lift up your hand,’ if you’re directing a human.” But a robot’s programmers have to be specific about the movement of each joint at each point in its trajectory, especially when obstacles block the machine’s “view” of its target.
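To see why, consider what even one candidate trajectory looks like for a seven-joint arm such as Fetch’s: every waypoint must pin down all seven joint angles. The values below are hypothetical and serve only to show the shape of the data a planner has to produce.

```python
# Hypothetical joint-space trajectory for a seven-joint arm (angles in radians).
# Each waypoint must specify every joint, which is why instructions grow
# complicated as the number of joints increases.
trajectory = [
    #  j1     j2     j3     j4     j5     j6     j7
    [0.00,  0.40, -0.10,  1.20,  0.00,  0.80,  0.00],   # start: arm over the first table
    [0.35,  0.55, -0.05,  1.05,  0.10,  0.70, -0.15],   # lift the cylinder clear of the barrier
    [0.90,  0.50,  0.00,  0.95,  0.20,  0.75, -0.30],   # swing past the obstacle
    [1.40,  0.42, -0.08,  1.15,  0.05,  0.82, -0.10],   # lower toward the second table
]
for i, waypoint in enumerate(trajectory):
    print(f"waypoint {i}: {len(waypoint)} joint angles -> {waypoint}")
```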

Humans in the loop help robots find their way

Rather than programming a trajectory up front, BLIND inserts a human mid-process to refine the choreographed options – or best guesses – suggested by the robot’s algorithm. “BLIND allows us to take information in the human’s head and compute our trajectories in this high-degree-of-freedom space,” Quintero-Peña said.

“We use a specific feedback method called critique, which is essentially a binary form of feedback in which the human is given labels on pieces of the trajectory,” he explained. These labels take the form of connected green dots that represent potential paths. The human approves or rejects each movement proposed by BLIND to refine the path and avoid obstacles as efficiently as possible.
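Below is a minimal sketch of that critique loop, assuming each piece of the trajectory can be shown to the person and answered with a simple yes or no: approved pieces are kept, rejected ones are discarded and resampled. The prompt, the resampling rule, and the function names are placeholders rather than the actual BLIND interface.

```python
import random

def sample_segment():
    """Stand-in for the planner proposing one piece of the trajectory (7 joint angles)."""
    return [round(random.uniform(-1.0, 1.0), 2) for _ in range(7)]

def ask_human(segment):
    """Placeholder critique: a binary approve/reject label for one segment."""
    answer = input(f"approve segment {segment}? [y/n] ")
    return answer.strip().lower().startswith("y")

def build_path(num_segments=4, max_attempts=20):
    """Assemble a path from human-approved segments only."""
    path = []
    attempts = 0
    while len(path) < num_segments and attempts < max_attempts:
        attempts += 1
        segment = sample_segment()
        if ask_human(segment):
            path.append(segment)   # keep the approved piece
        # rejected pieces are simply discarded and a new one is sampled
    return path if len(path) == num_segments else None

if __name__ == "__main__":
    result = build_path()
    print("approved path:" if result else "no approved path found", result)
```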

“It’s a simple interface for people to use because we can say ‘I like this’ or ‘I don’t like that,’ and the robot uses that information to plan,” Chamzas explained. He said that once rewarded with an approved set of movements, the robot can carry out its task.

“One of the most important things here is that human preferences are hard to describe with a mathematical formula,” Quintero-Peña said. “Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.”

“This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human,” said Kavraki, a robotics pioneer whose work includes advanced programming for NASA’s humanoid Robonaut aboard the International Space Station.

“It demonstrates how methods for human-robot interaction, which my colleague Professor Unhelkar is researching, and automated planning, which my laboratory has pioneered for years, can combine to deliver reliable solutions that also respect human preferences.”