A New Program Brings Us One Step Closer to Self-Driving Robots

Over the last decade, we’ve witnessed incredible progress in robotics, with robots that can walk, talk, and gesture like humans, perform jobs ranging from moving heavy machinery to gently manipulating microscopic objects, and keep their balance on two or four legs across difficult and treacherous terrain.

As remarkable as the most recent robots are, their achievements are mostly the consequence of task-specific programming or remote human direction.

Researchers at ETH Zurich have created a program that enables robots to perform tasks without relying on “prerecorded expert demonstrations” or “densely engineered rewards,” as the researchers put it.

They instead devised a method for the robot to “rapidly discover a feasible and near-optimal multi-modal sequence that solves the task.” In other words, they created a framework in which robots can achieve goals with minimal assistance from humans.
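
To make those terms concrete, here is a minimal sketch of the difference between a densely engineered reward and a sparse objective, assuming a door-opening task; the function names, variables, and weights are invented for illustration and are not from the paper:

```python
# Illustrative only: the task, variables, and weights below are invented to
# show what "densely engineered reward" vs. "sparse objective" means; they
# are not taken from the ETH Zurich paper.

def dense_reward(door_angle: float, hand_to_handle_dist: float,
                 posture_error: float) -> float:
    # Dense reward: hand-tuned shaping terms that steer every aspect of the
    # behavior, requiring substantial engineering effort for each new task.
    return 2.0 * door_angle - 1.0 * hand_to_handle_dist - 0.5 * posture_error

def sparse_objective(door_angle: float, goal_angle: float = 1.57) -> bool:
    # Sparse objective: only the goal condition is specified; the planner
    # itself must discover the motions and contacts that achieve it.
    return door_angle >= goal_angle
```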

The findings were published in the August 16 issue of Science Robotics. The paper, “Versatile multi-contact planning and control for legged loco-manipulation,” was written by Jean-Pierre Sleiman, Farbod Farshidian, and Marco Hutter of ETH Zurich’s Robotic Systems Lab.

“Given high-level descriptions of the robot and object, along with a task specification encoded through a sparse objective,” Sleiman explained, “our planner holistically discovers how the robot should move, what forces it should exert, what limbs it should use, as well as when and where it should establish or break contact with the object.”
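
A rough sketch of what such a planner’s interface could look like follows; every name here (ContactPhase, plan_loco_manipulation, and so on) is hypothetical, meant only to illustrate the inputs and outputs Sleiman describes, not the actual API:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ContactPhase:
    limb: str                              # which limb is used, e.g. "left_front_foot"
    make_time: float                       # when contact with the object is established
    break_time: float                      # when that contact is broken
    location: Tuple[float, float, float]   # where on the object the contact occurs

@dataclass
class Plan:
    base_motion: List[Tuple[float, ...]]   # how the robot's body should move
    contact_schedule: List[ContactPhase]   # when and where limbs make or break contact
    contact_forces: List[Tuple[float, float, float]]  # forces each contact should exert

def plan_loco_manipulation(robot_description: dict,
                           object_description: dict,
                           sparse_objective) -> Plan:
    """Jointly search over discrete contact decisions and continuous motion.

    The inputs mirror the quote: high-level models of the robot and object
    plus a sparse goal (e.g. "door angle reaches 90 degrees"), with no
    demonstrations or dense rewards. The output bundles the motion, forces,
    limb choices, and contact timings the planner discovers.
    """
    raise NotImplementedError("interface sketch only")
```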

In the team’s demonstrations, ANYbotics’ quadrupedal ANYmal masters opening a dishwasher door, and neatly opens a weighted door, holding it open with one leg while moving through.

“The framework is easily adaptable to various types of mobile manipulators,” Sleiman stated.

Robotic development has made considerable gains in recent years. Atlas, unveiled in 2013 by Boston Dynamics, a pioneer in the field of robotics, could keep its balance in hazardous environments thanks to its stereo vision and fine motor skills. It was later developed further to enter and exit vehicles, open doors, and operate power tools. Agility Robotics’ Cassie demonstrated superior walking and sprinting ability in 2016.

The lifelike Sophia, which effortlessly emulated human motions and behavior, was deployed in 2017 to assist the elderly in nursing homes and play with children. In 2019, OpenAI exhibited Dactyl’s highly advanced tactile manipulation: after training that its developers estimated was equivalent to roughly 13,000 years of human practice, the single robotic hand could deftly manipulate a Rubik’s Cube, solving the 3D combination puzzle that has baffled millions since its debut in 1974.

Recently, Boston Dynamics’ four-legged Spot, which can walk three miles, climb hills, clear obstacles, and perform specific duties, has made headlines. And Ameca, regarded as one of the most lifelike robots, if not the most, engages in seamless conversation and produces remarkably humanlike facial expressions and hand movements.

ETH Zurich has taken a significant step toward the next generation of robot development, one that builds on the remarkable achievements of its predecessors while eliminating, or at least greatly reducing, the need for humans to control robots behind the scenes.