
MIT, Stanford Researchers Develop Method for Training Robotic Controllers with Less Data

A paper published by researchers from MIT and Stanford highlights a new method for developing robotic controllers based on data from dynamic system models. The approach doesn’t rely on massive datasets and could lead to more capable autonomous vehicles and drones in the future.

Training autonomous vehicles and drones to operate on a closed course with known variables is simple. Teaching them to operate safely in the real world is far more complex. Just look at companies like Tesla that, despite massive investments and years of promises, still haven't cracked the code on fully autonomous cars.

Engineers from MIT and Stanford University are hoping to change this narrative with a new approach to developing controllers for robotic vehicles. The novel strategy outpaces existing benchmark methods and succeeds even when trained on a small amount of data. Though more investigation is needed, the researchers hope the approach could help autonomous vehicles adapt more quickly to dynamic environments and shorten the training process.

More Efficiency, Better Results

For a drone (or any robot) to follow a certain path, it must be equipped with a controller—the computer logic that tells it how to steer and compensate for outside factors like wind or uphill terrain. Developing these instructions is notoriously difficult. Even when researchers know how to create an exact model of the system in which the robot will operate, nailing down the details of a precise controller is challenging.
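To make the idea concrete, here is a minimal, illustrative sketch in Python (not the researchers' method): a simple PID feedback loop holds a one-dimensional "drone" at a target position while an unseen constant wind pushes it off course. All gains and physical constants are assumptions chosen for the example.

```python
# Minimal illustrative sketch of a feedback controller (not the paper's method).
# A PID loop holds a 1-D "drone" at a target position while an unmodeled,
# constant wind force pushes it away. Gains and constants are assumed values.
mass, dt, wind = 1.0, 0.01, 2.0          # assumed physical constants
kp, kd, ki = 8.0, 4.0, 2.0               # assumed controller gains

pos, vel, integral, target = 0.0, 0.0, 0.0, 1.0

for _ in range(3000):                    # simulate 30 seconds
    error = target - pos
    integral += error * dt
    u = kp * error - kd * vel + ki * integral   # corrective force from the controller
    accel = (u + wind) / mass            # the wind is never measured directly
    vel += accel * dt
    pos += vel * dt

print(f"final position: {pos:.3f} (target {target:.1f})")
```

Even this toy loop shows the core job of a controller: it only sees the vehicle's state, yet its corrective commands end up canceling out a disturbance it never observes.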

Hand-calculated controllers for simple systems rely on models that describe how forces relate to motion. For a robot, such a model might capture the relationship between applied force, acceleration, and velocity. For vehicles operating in unpredictable conditions, however, such as a drone in a windy sky, modeling by hand is nearly impossible. So researchers use machine learning to fit a dynamics model from measurements taken over time.
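As a rough illustration of what that fitting step can look like (a simplified sketch, not the team's model), the snippet below records trajectories of a point mass with an unknown drag force and fits a one-step linear dynamics model by least squares. The system constants and the linear model form are assumptions made for the example.

```python
# Illustrative sketch: learn a one-step dynamics model from recorded measurements
# instead of deriving it by hand. The "true" system is hidden from the learner.
import numpy as np

rng = np.random.default_rng(0)
mass, drag, dt = 1.0, 0.6, 0.05          # assumed "true" constants, unknown to the model

def true_step(pos, vel, u):
    accel = (u - drag * vel) / mass
    return pos + vel * dt, vel + accel * dt

# Collect measurements over time under random control inputs.
X, Y = [], []
pos = vel = 0.0
for _ in range(200):
    u = rng.uniform(-1.0, 1.0)
    nxt = true_step(pos, vel, u)
    X.append([pos, vel, u])
    Y.append(nxt)
    pos, vel = nxt

# Least-squares fit of the one-step dynamics model: [pos, vel, u] -> [pos', vel'].
A, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

print("learned dynamics matrix (maps [pos, vel, u] to the next state):")
print(A.T)
```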

Unfortunately, this approach to modeling a system often doesn't take control into account. As a result, researchers must develop a controller separately using the collected data, which is both costly and time-consuming. The MIT and Stanford team instead developed a method of extracting a controller directly from the machine-learning-generated dynamics model, cutting out the additional step.
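One way to picture "extracting a controller directly from the model" (a generic sketch under simplifying assumptions, not the authors' algorithm) is the textbook case of a learned linear model: once matrices A and B have been identified from data, a feedback gain falls straight out of them via a standard LQR design. The matrices and cost weights below are made up for illustration.

```python
# Generic sketch (not the authors' algorithm): derive a feedback controller
# directly from a learned linear dynamics model x_{t+1} ≈ A x_t + B u_t.
import numpy as np
from scipy.linalg import solve_discrete_are

# Suppose these were identified from trajectory data (e.g., by least squares).
A = np.array([[1.0, 0.05],
              [0.0, 0.97]])     # learned state transition: [position, velocity]
B = np.array([[0.0],
              [0.05]])          # learned effect of the control input

# Quadratic cost weights chosen by the designer (assumed values).
Q = np.diag([10.0, 1.0])        # penalize position and velocity error
R = np.array([[0.1]])           # penalize control effort

# Solve the discrete-time Riccati equation; the feedback law u = -K x comes
# straight from the learned model, with no separate controller-fitting step.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("feedback gain K:", K)

# Closed-loop check: the learned model driven from an initial error toward zero.
x = np.array([[1.0], [0.0]])
for _ in range(100):
    x = (A - B @ K) @ x
print("state after 100 steps:", x.ravel())
```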

The result? A far more precise controller than those generated by other methods, including one built using the exact dynamics of the test system. Richards, a Stanford graduate student and lead author of the accompanying paper, says, “By making simpler assumptions, we got something that actually worked better than other complicated baseline approaches.”

Notably, the team’s approach is also highly data efficient. In other words, it functions well even when given a limited amount of data—as few as 100 data points. Other methods performed significantly worse when restricted by the same small dataset.  

Real World Applications

The researchers note that the efficiency of their technique could make it possible to deploy a drone or robot faster in situations where conditions change rapidly. Search and rescue missions, zero-gravity space endeavors, and, yes, autonomous driving all come to mind.  

Navid Azizan, assistant professor at MIT’s Department of Mechanical Engineering and member of the Laboratory for Information and Decision Systems, said, “By jointly learning the system’s dynamics and these unique control-oriented structures from data, we’re able to naturally create controllers that function much more effectively in the real world.”  

As artificial intelligence (AI) becomes further embedded in society, finding new ways to process data with useful results will be paramount. Not all algorithms are equal. Advances like this one highlight the importance of continuing to innovate on existing techniques.

Moving forward, the researchers hope to develop more interpretable models. Those generated by the current technique make it difficult to extract specific information about the underlying dynamical system, and the team hopes to make uncovering those details easier. Richards notes this could lead to even better-performing controllers in the future.

It also highlights another important consideration of AI for the coming years. While an algorithm can quickly analyze massive datasets and come to a conclusion, humans still need to know where the answers come from. Even if the conclusion is correct, knowing how our AI systems work is a crucial component of continuing to advance the field.

Sourceability Team
The Sourceability Team is a group of writers, engineers, and industry experts with decades of experience within the electronic component industry from design to distribution.