Technology

In Just One Hour, a Robot Dog Learns to Walk

During its first attempts at walking, a four-legged robot stumbles around like a newborn animal. But while a foal or a giraffe needs far longer, the robot masters walking in just one hour. A computer program that simulates the animal’s spinal cord guides the learning and optimizes the robot’s movement within this short time. The artificial neural network is not optimally tuned at the start, but it quickly adjusts itself.

To escape predators, a newborn giraffe or foal must learn to walk on its legs as quickly as possible. Muscle-coordination networks in the spinal cord are present in all animals at birth. Learning the precise coordination of leg muscles and tendons, however, takes time. At first, baby animals rely heavily on hard-wired spinal cord reflexes.

Though more basic, these motor-control reflexes help the animal avoid falling and injuring itself during its first attempts at walking. After that, more advanced and precise muscle control must be practiced until the nervous system is well adapted to the young animal’s leg muscles and tendons. The uncontrolled stumbling stops, and the young animal can keep up with the adults.

Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart conducted a study to find out how animals learn to walk and learn from stumbling. They built a four-legged, dog-sized robot to help them work out the details.

“As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes,” says Felix Ruppert, a former doctoral student in the Dynamic Locomotion research group at MPI-IS. “If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.”

Learning algorithm optimizes virtual spinal cord

Ruppert’s robot makes good use of its complex leg mechanics after learning to walk in just one hour. A Bayesian optimization algorithm guides the learning: the measured foot sensor information is matched against target data from the modeled virtual spinal cord, which runs as a program in the robot’s computer. The robot learns to walk by continuously comparing measured and expected sensor data, running reflex loops, and adapting its motor control patterns.
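
The paragraph above describes a sense-compare-adapt loop. The minimal Python sketch below illustrates that loop on a toy problem: the robot, its foot sensors, and the cost are stand-ins, and a simple keep-if-better search replaces the Bayesian optimizer (a sketch of that step appears further below). None of this is the authors’ code; all names and numbers are illustrative.

```python
# Toy sketch of the sense-compare-adapt loop: run a trial, measure how much the
# robot "stumbles" (mismatch between expected and measured foot contacts), and
# keep any parameter change that reduces that mismatch. All values are made up.
import random

def simulated_trial(gait_frequency_hz):
    """Stand-in for one walking trial: the stumble score is lowest near a
    (made-up) well-suited gait frequency of 2.0 Hz."""
    return abs(gait_frequency_hz - 2.0) * 10 + random.random()

best_frequency, best_score = 1.0, float("inf")
for trial in range(30):
    candidate = best_frequency + random.uniform(-0.3, 0.3)  # propose new gait parameter
    score = simulated_trial(candidate)                       # expected vs. measured mismatch
    if score < best_score:                                   # keep parameters that stumble less
        best_frequency, best_score = candidate, score
print(f"learned frequency ~ {best_frequency:.2f} Hz, stumble score {best_score:.2f}")
```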

The learning algorithm adapts the control parameters of a Central Pattern Generator (CPG). In humans and animals, these central pattern generators are networks of neurons in the spinal cord that produce periodic muscle contractions without input from the brain. Central pattern generator networks help generate rhythmic tasks such as walking, blinking, and digestion. Reflexes, in turn, are involuntary motor-control actions triggered by hard-coded neural pathways that connect sensors in the leg with the spinal cord.
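
As a rough illustration of what such a rhythm generator computes, the sketch below models a CPG as four coupled phase oscillators, one per leg, that settle into a trot-like rhythm with no sensory input at all. The coupling scheme, gait offsets, and constants are illustrative assumptions, not Morti’s actual parameters.

```python
# A minimal central pattern generator sketch: four coupled phase oscillators
# (one per leg) produce rhythmic outputs without any sensory input.
import numpy as np

def cpg_step(phases, dt=0.01, frequency_hz=2.0, coupling=1.0):
    """Advance the oscillator phases by one time step (Kuramoto-style coupling)."""
    # Desired phase offsets for a trot: diagonal legs move together (LF, RF, LH, RH).
    target_offsets = np.array([0.0, np.pi, np.pi, 0.0])
    d_phase = 2 * np.pi * frequency_hz * np.ones(4)
    for i in range(4):
        for j in range(4):
            d_phase[i] += coupling * np.sin(
                phases[j] - phases[i] - (target_offsets[j] - target_offsets[i])
            )
    return phases + d_phase * dt

phases = np.random.uniform(0, 2 * np.pi, 4)   # start unsynchronized
for _ in range(2000):                          # the oscillators settle into the gait
    phases = cpg_step(phases)
leg_commands = np.sin(phases)                  # rhythmic drive sent to each leg
print(np.round(leg_commands, 2))
```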

As long as the young animal walks over a perfectly flat surface, CPGs can be sufficient to control the movement signals from the spinal cord. A small bump on the ground, however, changes the walk. Reflexes kick in and adjust the movement patterns to keep the animal from falling. These momentary changes in the movement signals are reversible, or ‘elastic’, and the movement patterns return to their original configuration after the disturbance.
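
To make the ‘elastic’ idea concrete, here is a hypothetical reflex implemented as a decaying correction added on top of the rhythmic CPG output: the leg command is perturbed right after a trip and then returns to the unchanged underlying pattern. The gain and decay time are invented for illustration.

```python
# An illustrative "elastic" reflex: an unexpected trip adds a corrective offset
# to the leg command, and the offset decays back to zero, leaving the underlying
# CPG pattern unchanged. Constants are made up for illustration.
import math

def reflex_offset(time_since_trip_s, gain=0.3, decay_s=0.2):
    """Transient correction that fades away after a disturbance (reversible)."""
    return gain * math.exp(-time_since_trip_s / decay_s)

def leg_command(t, trip_time_s=None, frequency_hz=2.0):
    nominal = math.sin(2 * math.pi * frequency_hz * t)   # rhythmic CPG output
    if trip_time_s is not None and t >= trip_time_s:
        nominal += reflex_offset(t - trip_time_s)        # momentary reflex correction
    return nominal

# Shortly after the trip the command is perturbed; later it matches the CPG pattern again.
for t in (0.50, 0.55, 1.50):
    print(t, round(leg_command(t, trip_time_s=0.5), 3))
```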

But if the animal keeps stumbling over many movement cycles despite active reflexes, then the movement patterns must be relearned and made ‘plastic’, i.e., irreversible. In the newborn animal, the CPGs are initially not yet well adjusted, and the animal stumbles around on both even and uneven terrain. But it rapidly learns how its CPGs and reflexes control leg muscles and tendons.

Robot dog learns to walk in one hour

The same holds true for the Labrador-sized robot dog named “Morti.” What is more, the robot optimizes its movement patterns faster than an animal does, in about one hour. Morti’s CPG is simulated on a small, lightweight computer that controls the motion of the robot’s legs. This virtual spinal cord sits on the quadruped robot’s back, where the head would be. During the hour it takes the robot to walk smoothly, sensor data from the robot’s feet are continuously compared with the touch-down predicted by the robot’s CPG.
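
One plausible way to quantify that comparison is sketched below: the CPG phase predicts when each foot should be on the ground, the foot sensor reports when it actually is, and the fraction of time steps where the two disagree serves as a stumbling score. The duty factor and the sample data are assumptions for illustration, not values from the study.

```python
# Sketch of the predicted-vs-measured foot-contact comparison described above.
import numpy as np

def predicted_contact(phases, duty_factor=0.6):
    """A foot is expected on the ground during the first part of its CPG cycle."""
    return (np.mod(phases, 2 * np.pi) / (2 * np.pi)) < duty_factor

def stumble_score(predicted, measured):
    """Fraction of samples where expected and measured foot contact disagree."""
    predicted = np.asarray(predicted, dtype=bool)
    measured = np.asarray(measured, dtype=bool)
    return float(np.mean(predicted != measured))

# Toy example: one leg over ten control steps, with one unexpected early lift-off.
phases = np.linspace(0, 2 * np.pi, 10, endpoint=False)
expected = predicted_contact(phases)
measured = expected.copy()
measured[3] = ~measured[3]
print(stumble_score(expected, measured))   # -> 0.1
```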

If the robot trips, the learning algorithm adjusts how far the legs swing back and forth, how fast the legs swing, and how long a leg stays on the ground. The adjusted motion also affects how well the robot can exploit its compliant leg mechanics. During the learning process, the CPG sends adapted motor signals to the robot, causing it to stumble less and walk more efficiently. In this framework, the virtual spinal cord has no explicit knowledge of the robot’s leg design, motors, or springs. It lacks a robot ‘model’ because it knows nothing about the machine’s physics.
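
The article names Bayesian optimization but not a particular library or parameter ranges, so the following is only a hedged sketch: it uses scikit-optimize’s gp_minimize to tune the three quantities mentioned above (swing amplitude, swing speed, and ground-contact fraction) against a toy cost that stands in for a real walking trial on the robot.

```python
# Hedged sketch of the parameter adaptation: Bayesian optimization over three
# gait parameters. The search ranges and cost function are illustrative only.
from skopt import gp_minimize
from skopt.space import Real

def walking_cost(params):
    """Toy stand-in for one trial: 'stumbling' is lowest near a made-up optimum."""
    swing_amplitude, swing_frequency_hz, ground_fraction = params
    return ((swing_amplitude - 0.15) ** 2
            + (swing_frequency_hz - 2.0) ** 2
            + (ground_fraction - 0.6) ** 2)

search_space = [
    Real(0.05, 0.30, name="swing_amplitude_m"),
    Real(1.0, 4.0, name="swing_frequency_hz"),
    Real(0.4, 0.8, name="ground_fraction"),
]

result = gp_minimize(walking_cost, search_space, n_calls=25, random_state=0)
print("best parameters:", [round(v, 3) for v in result.x])
print("best cost:", round(result.fun, 4))
```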

“Our robot is practically ‘born’ knowing nothing about its leg anatomy or how they work,” Ruppert explains. “The CPG resembles a built-in automatic walking intelligence that nature provides and that we have transferred to the robot. The computer produces signals that control the legs’ motors, and the robot initially walks and stumbles.

“Data flows back from the sensors to the virtual spinal cord, where sensor and CPG data are compared. If the sensor data does not match the expected data, the learning algorithm changes the walking behavior until the robot walks well and without stumbling. Changing the CPG output while keeping reflexes active and monitoring the robot stumbling is a core part of the learning process.”

Energy-efficient robot dog control

Morti’s computer consumes only five watts of power while walking. Industrial quadruped robots from well-known manufacturers, which have learned to run with the help of complex controllers, consume far more: their controllers are programmed with a model of the robot, including its exact mass and body geometry, and they typically draw several tens to several hundred watts of power. Both types of robot run dynamically and efficiently, but the computational energy consumption of the Stuttgart model is significantly lower. The research also provides important insights into animal anatomy.