MIT Develops Autonomous Drone Obstacle Detection System

By Mike Ball / 13 Nov 2015

MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has announced that it has developed an obstacle-detection system that allows a drone to autonomously fly through a tree-filled field at speeds of up to 30 miles per hour.

“Everyone is building drones these days, but nobody knows how to get them to stop running into things,” says CSAIL PhD student Andrew Barry, who developed the system as part of his thesis with MIT professor Russ Tedrake. “Sensors like lidar are too heavy to put on small aircraft, and creating maps of the environment in advance isn’t practical. If we want drones that can fly quickly and navigate in the real world, we need better, faster algorithms.”

Running 20 times faster than existing software, Barry’s stereo-vision algorithm allows the drone to detect objects and build a full map of its surroundings in real time. Operating at 120 frames per second, the software – which is open-source and available online – extracts depth information in just 8.3 milliseconds per frame.

The drone, which weighs just over a pound and has a 34-inch wingspan, was made from off-the-shelf components costing about $1,700, including a camera on each wing and two processors similar to those found in modern cellphones.

Traditional algorithms for this problem use the images captured by each camera and search through the depth field at multiple distances – 1 meter, 2 meters, 3 meters, and so on – to determine whether an object is in the drone’s path.

Such approaches, however, are computationally intensive, meaning that the drone cannot fly any faster than 5 or 6 miles per hour without specialized processing hardware.
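To make that cost concrete, here is a rough Python/NumPy sketch of a generic multi-distance block-matching search (the calibration numbers, function names, and synthetic data are illustrative assumptions, not taken from CSAIL’s code). Every candidate depth requires another full pass over the image, so the per-frame cost grows linearly with the number of distances checked:

```python
import numpy as np

def disparity_for_depth(z_m, focal_px=300.0, baseline_m=0.86):
    """Stereo disparity (pixels) for a point z_m meters away,
    from d = f * B / z. Focal length and baseline here are
    illustrative values, not the drone's actual calibration."""
    return focal_px * baseline_m / z_m

def full_depth_sweep(left, right, depths_m):
    """Classic multi-distance stereo search: test every candidate
    depth at every pixel. Each extra depth costs a full image pass,
    which is what makes this approach slow without special hardware."""
    h, w = left.shape
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    depth_map = np.zeros((h, w), dtype=np.float32)
    for z in depths_m:
        d = int(round(disparity_for_depth(z)))
        # Shift the right image by this depth's disparity and compare;
        # a low photometric difference suggests an object near depth z.
        shifted = np.roll(right, d, axis=1)
        cost = np.abs(left.astype(np.float32) - shifted.astype(np.float32))
        closer = cost < best_cost
        best_cost[closer] = cost[closer]
        depth_map[closer] = z
    return depth_map

# Synthetic rectified pair: every feature displaced by the disparity
# of a 10-meter-deep scene, so the sweep should report ~10 m everywhere.
left = np.random.randint(0, 255, (240, 376), dtype=np.uint8)
right = np.roll(left, -26, axis=1)
depth_map = full_depth_sweep(left, right, depths_m=range(1, 11))
```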

Barry’s realization was that, at the speeds his drone travels, the world simply does not change much between frames. Because of that, he could get away with computing only a small subset of measurements – specifically, depth at a single distance, 10 meters ahead.

“You don’t have to know about anything that’s closer or further than that,” Barry says. “As you fly, you push that 10-meter horizon forward, and, as long as your first 10 meters are clear, you can build a full map of the world around you.”
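In code, the single-depth idea might look like the minimal sketch below, using the same illustrative calibration as the sweep above (the function name, numbers, and matching threshold are assumptions, not CSAIL’s released code). Only the one disparity corresponding to 10 meters is ever tested, so each frame costs a single comparison pass no matter how cluttered the scene:

```python
import numpy as np

def single_depth_check(left, right, z_m=10.0, focal_px=300.0,
                       baseline_m=0.86, threshold=20.0):
    """Test only the single disparity corresponding to z_m (10 m):
    one comparison pass per frame instead of one per candidate depth.
    Calibration values and the match threshold are illustrative."""
    d = int(round(focal_px * baseline_m / z_m))  # d = f * B / z
    shifted = np.roll(right, d, axis=1)
    cost = np.abs(left.astype(np.float32) - shifted.astype(np.float32))
    # Pixels that match at exactly this disparity lie roughly 10 m ahead.
    # (A real implementation must also reject low-texture regions, which
    # would otherwise "match" at any disparity.)
    return cost < threshold  # boolean obstacle mask at the 10 m horizon
```

Each frame, the mask flags whatever is just crossing the 10-meter horizon; anything nearer was already detected, and recorded, on an earlier frame.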

While such a method might seem limiting, the software can quickly recover the missing depth information by integrating the drone’s odometry with previously computed distances. Barry says that he hopes to further improve the algorithms so that they can work at more than one depth, and in environments as dense as a thick forest.
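The article doesn’t spell out how that integration works; the sketch below is just one way to picture it, with the pose format, function names, and map representation all invented for illustration. Points detected as they crossed the 10-meter horizon are stored in a world-frame map, and odometry keeps them registered to the drone as it flies past them – which is also where the drift Barry describes next creeps in:

```python
import numpy as np

def update_local_map(world_points, new_hits_cam, cam_pose):
    """Fold this frame's 10 m detections into a persistent map.
    world_points: (N, 3) previously seen obstacles, world frame.
    new_hits_cam: (M, 3) points just detected ~10 m ahead, camera frame.
    cam_pose: (R, t) camera rotation and position in the world frame,
        supplied by the drone's odometry (illustrative format).
    Once a point is in the map, no further stereo matching is needed
    to know it is there; odometry alone keeps it registered."""
    R, t = cam_pose
    new_world = new_hits_cam @ R.T + t   # camera frame -> world frame
    return np.vstack([world_points, new_world])

def points_near_drone(world_points, cam_pose, radius_m=10.0):
    """Query remembered obstacles inside the 10 m horizon. Any error
    in the odometry pose shifts these points; that error is the
    'drift' described below."""
    R, t = cam_pose
    rel = (world_points - t) @ R         # world frame -> camera frame
    return rel[np.linalg.norm(rel, axis=1) < radius_m]

# Example: drone at the origin, one fresh detection ~9.8 m ahead.
R, t = np.eye(3), np.zeros(3)
world = update_local_map(np.empty((0, 3)), np.array([[0.5, 0.0, 9.8]]), (R, t))
nearby = points_near_drone(world, (R, t))  # still inside the horizon
```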

“Our current approach results in occasional incorrect estimates known as ‘drift,’” he says. “As hardware advances allow for more complex computation, we will be able to search at multiple depths and therefore check and correct our estimates. This lets us make our algorithms more aggressive, even in environments with larger numbers of obstacles.”

 

Source: MIT CSAIL
