The popularity of Unmanned Aerial Vehicles or UAVs has exploded in just a few years. That’s the result of smaller, cheaper computers that allow these vehicles to fly unaided, better radio communication systems and more efficient, lighter motors for longer flight times.
As a result, UAVs are extraordinarily capable. The flying machines available in any toyshop for a few hundred dollars would have been the envy of any UAV research team just ten years ago.
But there are still limits to what these machines can do and one of them is tracking objects on the ground. Send up one of these cheap UAVs to circle your house or to follow a car and it’ll be hopelessly lost in seconds.
That’s because object recognition tends to be a computationally intensive task, and there are obvious power and weight limits for small flyers.
The standard way to solve this problem is to broadcast the images back to the ground where they can be crunched relatively easily and then sent back. But this obviously doesn’t work when communications systems are disrupted.
So today Ashraf Qadir and pals at the University of North Dakota in Grand Forks reveal a solution. With Department of Defense funding, these guys have built their own image processing machine, which is small and light enough to be carried by a small UAV. They say their device is capable of tracking objects such as cars and houses in real time without the need for number crunching on the ground.
The way these guys have solved this problem is to simplify it and then solve the simplified puzzle. They point out that from a plane, objects on the ground such as cars and houses do not generally change shape.
However, they do change their orientation and position relative to the camera. So their object-tracking program essentially solves just these two problems. First, it uses the motion of the object in previous frames to predict where it is going to be in the next frame. That’s fairly straightforward.
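The article doesn’t spell out the predictor, but a minimal sketch of this first step, assuming a simple constant-velocity model over the two most recent frames (an illustrative assumption, not necessarily what Qadir and co use), might look like this:

```python
def predict_next(p_prev, p_curr):
    """Constant-velocity prediction: assume the target keeps moving with
    the same displacement it showed between the last two frames.

    p_prev, p_curr -- (x, y) pixel positions in the two most recent frames.
    Returns the predicted (x, y) position in the next frame.
    """
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    return (p_curr[0] + dx, p_curr[1] + dy)

# A car seen at (120, 80) then at (125, 78) is predicted at (130, 76).
print(predict_next((120, 80), (125, 78)))
```

The predicted position gives the tracker a small search window in the next frame, so the matching step only has to examine a patch of the image rather than the whole frame.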
Second, it uses a remarkably simple process to follow the object as it rotates. When the onboard computer first finds its target, it uses a simple image processing program to create a set of images in which the object is rotated by 10 degrees. That produces a library of 36 pictures showing the object in every orientation.
So the process of following the target is simply a question of matching it to one of those images. Qadir and co have developed a simple protocol to optimise this process.
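The paper’s exact matching protocol isn’t given here, but the library-building idea can be sketched as follows, assuming nearest-neighbour rotation and a sum-of-squared-differences match score (both illustrative choices, not necessarily the authors’ method):

```python
import numpy as np

def rotate_nn(img, degrees):
    """Rotate a 2-D image about its centre using nearest-neighbour sampling."""
    theta = np.deg2rad(degrees)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find the source pixel.
    sx = np.cos(theta) * (xs - cx) + np.sin(theta) * (ys - cy) + cx
    sy = -np.sin(theta) * (xs - cx) + np.cos(theta) * (ys - cy) + cy
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    return img[sy, sx]

def build_library(template, step=10):
    """One rotated copy of the target template per `step` degrees: 36 images."""
    return [rotate_nn(template, angle) for angle in range(0, 360, step)]

def best_match(patch, library):
    """Index of the library image with the lowest sum of squared differences."""
    scores = [np.sum((patch.astype(float) - t.astype(float)) ** 2)
              for t in library]
    return int(np.argmin(scores))

template = np.arange(441, dtype=float).reshape(21, 21)  # stand-in target patch
library = build_library(template)
print(len(library), best_match(template, library))
```

With a 10-degree step the orientation problem reduces to 36 cheap comparisons per frame, which is what makes the approach feasible on a small onboard computer.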
And that’s it. They’ve tested the approach both in the lab and in the air using a Linux computer on a single printed circuit board plus a small camera and gimballing system. All this is carried on board the university’s customised UAV called Super Hauler, with a wingspan of 350 centimetres and a payload capability of 11 kilograms.
These guys say the system worked well in tests. The UAV has an air-to-ground video link which allows an operator to select a target such as a car, building, or in these tests, the group’s control tent. The onboard computer then locks onto the target, generates the image library and begins tracking.
From an altitude of 200 metres or so, Qadir and co say the system works well at frame rates over 25 frames per second, which is essentially real time.
Of course, the system has some limitations. Following a single vehicle is obviously much easier than selecting and following one of many in traffic, for example. Similarly, station keeping over a single tent in a field is relatively straightforward compared to the same problem in suburbia, where all the houses look the same.
But one step at a time, as they say. These are problems for the future.
These guys have a proof-of-principle device that could easily be deployed cheaply and more widely. The Super Hauler isn’t quite in the ‘toy’ department yet but it isn’t hard to imagine how a version of this kind of software and hardware could be deployed in cheap UAVs elsewhere in the near future.
Source: Technology Review