
Tilak.io & GPS-Denied Navigation

Tilak.io outlines how their state-of-the-art, graph-based visual SLAM can replace standard navigation in GPS-denied environments, aiding drone industries from agriculture to defense.

Feature Article by Tilak.io

Tilak.io understands that unmanned aerial vehicles are changing how entire industries operate, but that not all of these vehicles can function in areas where GPS navigation is unavailable.

In their latest release, the company discusses their visual SLAM solution, which uses an algorithm to combine visual data from an onboard camera with inertial measurements. Find out more below.


Drones are revolutionizing industries, from agriculture and defense to aerial mapping. However, these systems face significant challenges in GPS-denied environments, such as dense forests, urban canyons, or indoor facilities, where traditional localization methods fall short.

Our project addresses this challenge by developing cutting-edge technology that allows drones to operate in GPS-denied environments. Using advanced vision-based feature extraction techniques, this system offers an alternative that is not only stable but also highly reliable compared to standard GPS-based navigation.

But why choose a visual-based approach, and how can it possibly replace GPS for localization?

The answer lies in simplicity! 

In fact, visual sensors have gained widespread adoption in robotics for their ease of integration, affordability, and compact size. These aspects make them a perfect fit for small platforms such as UAVs. This system takes advantage of an onboard camera capable of capturing stereo grayscale, RGB and depth images, supplying essential data for navigation.
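As a rough illustration of how depth information can be derived from such a stereo grayscale pair, the sketch below uses OpenCV's semi-global block matching. The image files, focal length, and baseline are placeholder assumptions for illustration and are not part of Tilak.io's actual pipeline.

```python
# Minimal sketch: deriving a depth map from a stereo grayscale pair with OpenCV.
# File names, focal length, and baseline are placeholder assumptions.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # left camera frame (placeholder path)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # right camera frame (placeholder path)

# Semi-global block matching: a common choice for dense stereo disparity.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point values

# Pinhole stereo model: depth = focal_length_px * baseline_m / disparity
focal_px = 700.0   # focal length in pixels (assumed)
baseline_m = 0.06  # stereo baseline in metres (assumed)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```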

Among the most effective methods in the current state of the art, graph-based visual SLAM (VSLAM) was selected as the core building block of our solution. This approach is highly regarded for its ability to handle complex, long-term navigation with high accuracy and reliability.

Essentially, the algorithm combines visual data from a camera with raw inertial measurement unit (IMU) data to estimate the 3D pose of a drone using visual-inertial odometry (VIO). As the drone explores its environment, it builds a map in real time while continuously refining its position using graph optimization.

A key feature of VSLAM is loop closure, which detects when the drone revisits a previously mapped area and corrects small accumulated errors. This capability is particularly beneficial for 3D scanning, infrastructure inspection, and advanced obstacle avoidance.
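To make the graph-optimization and loop-closure idea concrete, here is a toy, hand-rolled sketch (not Tilak.io's actual VSLAM back end): a small 2D pose graph with odometry edges and a single loop-closure edge, refined with SciPy's non-linear least squares. All poses and measurements are made-up values for illustration.

```python
# Toy illustration of graph optimization with a loop-closure constraint.
# Not the actual VSLAM back end; just the core idea in miniature.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(pi, pj):
    """Pose of j expressed in the frame of i, for 2D poses (x, y, yaw)."""
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]
    c, s = np.cos(pi[2]), np.sin(pi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pj[2] - pi[2])])

# Edges: (i, j, measured relative pose). Four odometry steps around a square,
# plus a loop-closure edge saying pose 4 should coincide with pose 0 again.
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 4, np.array([1.0, 0.0, np.pi / 2])),
    (0, 4, np.array([0.0, 0.0, 0.0])),  # loop closure: back at the start
]

# Initial guess with accumulated drift (each step slightly too long and over-rotated).
poses = np.zeros((5, 3))
for k in range(1, 5):
    th = poses[k - 1, 2]
    poses[k, 0] = poses[k - 1, 0] + 1.1 * np.cos(th)
    poses[k, 1] = poses[k - 1, 1] + 1.1 * np.sin(th)
    poses[k, 2] = wrap(th + np.pi / 2 + 0.05)

def residuals(x):
    p = x.reshape(-1, 3)
    res = [p[0]]  # prior: anchor the first pose at the origin
    for i, j, meas in edges:
        diff = relative_pose(p[i], p[j]) - meas
        diff[2] = wrap(diff[2])
        res.append(diff)
    return np.concatenate(res)

sol = least_squares(residuals, poses.ravel())
print(sol.x.reshape(-1, 3))  # the loop-closure edge pulls the drifted trajectory back into shape
```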

Turning Theory into Action: Simulation Testing

To validate the system, we conducted extensive simulations in a controlled environment. A detailed, textured indoor maze was designed, in which a modeled UAV navigated autonomously.

The simulated UAV, stripped of GPS capabilities, relied solely on an IMU and an onboard camera for localization. These data streams were processed by our modified PX4 autopilot, which used them as the drone's primary localization source, fully substituting GPS signals.
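One common way to hand an externally computed pose to a PX4 autopilot is MAVLink's VISION_POSITION_ESTIMATE message; the sketch below uses pymavlink for this, with a placeholder serial link and dummy pose values. The article does not detail Tilak.io's exact integration path, so treat this as an assumed, generic example rather than their implementation.

```python
# Hedged sketch: feeding an externally computed pose (e.g. from VSLAM/VIO) to a PX4
# flight controller as a MAVLink VISION_POSITION_ESTIMATE message via pymavlink.
# Connection string and pose values are placeholders.
import time
from pymavlink import mavutil

# Connect to the autopilot, e.g. over a serial companion-computer link.
master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=921600)
master.wait_heartbeat()

def send_vision_pose(x, y, z, roll, pitch, yaw):
    """Publish one pose sample in the local NED frame (metres, radians)."""
    usec = int(time.time() * 1e6)  # timestamp in microseconds
    master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)

# Example: stream a dummy pose at ~30 Hz for 10 seconds; in practice the values
# would come from the visual-inertial odometry pipeline.
for _ in range(300):
    send_vision_pose(1.0, 0.5, -1.2, 0.0, 0.0, 0.1)
    time.sleep(1 / 30)
```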

The UAV successfully navigated a series of predefined indoor paths during multiple flight missions, yielding results that surpassed our expectations. The accompanying video demonstrates the drone’s autonomous navigation through the maze while generating detailed 2D and 3D maps of the environment, accurately reflecting textures.

To assess the performance of VSLAM, we used Tiplot, our main open-source flight data visualization tool, to compare the estimated trajectory (shown in blue) against the reference trajectory (shown in red), alongside a rich set of debugging information. The near-perfect alignment between the two paths highlights the robustness of the system's localization, even in challenging scenarios with multiple turns and tight corridors.
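For readers without Tiplot at hand, a similar estimated-versus-reference comparison can be sketched with matplotlib. The CSV file names and their two-column x/y layout below are assumptions for illustration only; the real flight logs would be loaded through Tiplot itself.

```python
# Minimal sketch of an estimated-vs-reference trajectory comparison with matplotlib.
# CSV paths and column layout are assumptions, not the actual log format.
import numpy as np
import matplotlib.pyplot as plt

est = np.loadtxt("estimated_xy.csv", delimiter=",")  # N x 2: x, y of the VIO estimate
ref = np.loadtxt("reference_xy.csv", delimiter=",")  # N x 2: x, y of the reference path

plt.plot(ref[:, 0], ref[:, 1], "r-", label="reference trajectory")
plt.plot(est[:, 0], est[:, 1], "b--", label="estimated trajectory")
plt.axis("equal")
plt.xlabel("x [m]")
plt.ylabel("y [m]")
plt.legend()
plt.title("Estimated vs reference trajectory")
plt.show()
```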

Summary of data log flight analysis using Tiplot

Robust Performance Across Test Scenarios

Promising results must be backed up by robust data, so a series of tests was conducted to gather odometry data for performance evaluation across multiple test scenarios.

The approach supports both stereo grayscale and RGB-D images, providing multiple evaluation formats and a richer dataset. A custom Python-based evaluation tool was developed to compare the estimated trajectories against a known ground truth, allowing for concrete performance insights.
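The evaluation tool itself is not published in this article, but the kind of metric it reports can be sketched as follows: a minimal absolute trajectory error (ATE) computation that rigidly aligns the estimated positions to the ground truth and reports the RMSE. The synthetic data and the assumption of time-synchronized N×3 position arrays are illustrative only.

```python
# Minimal ATE-style evaluation sketch: rigid (Umeyama/Procrustes, no scale) alignment
# of the estimate onto the ground truth, then the position RMSE. Illustrative only;
# not Tilak.io's actual evaluation tool.
import numpy as np

def align_rigid(est, gt):
    """Best-fit rotation R and translation t mapping est onto gt (both N x 3)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return R, t

def absolute_trajectory_error(est, gt):
    """RMSE of position error after rigid alignment, for time-synchronized N x 3 arrays."""
    R, t = align_rigid(est, gt)
    aligned = est @ R.T + t
    err = np.linalg.norm(aligned - gt, axis=1)
    return np.sqrt(np.mean(err ** 2)), err

# Usage with synthetic data: a slightly rotated, shifted, and noisy copy of the ground truth.
gt = np.cumsum(np.random.randn(500, 3) * 0.05, axis=0)
rot = np.array([[0.999, -0.045, 0.0], [0.045, 0.999, 0.0], [0.0, 0.0, 1.0]])
est = gt @ rot.T + np.array([0.3, -0.2, 0.1]) + np.random.randn(500, 3) * 0.01
rmse, per_sample = absolute_trajectory_error(est, gt)
print(f"ATE RMSE: {rmse:.3f} m")
```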

The performance metrics across a 300-meter trajectory showed minimal differences between the RGB-D and stereo approaches, with both maintaining impressive accuracy. The RGB-D setup delivered an average absolute trajectory error (ATE) of just 0.20% for position and 2.22% for orientation. The stereo setup slightly outperformed RGB-D in position accuracy with an error of just 0.19%, and its orientation error was also lower at 1.29%.

A notable aspect of the stereo setup is the progressive reduction in ATE after each loop. This improvement is driven by loop closure detection, which corrects and realigns the drone's estimated trajectory. Both approaches achieved centimeter-level position error, showcasing high-precision performance.

Additionally, the low root mean square error (RMSE) and small standard deviations indicate strong consistency and reliability across different runs. This robustness makes these systems ideal for challenging GPS-denied environments where accurate navigation is critical.

Bringing It to Life: Prototype Implementation

While the simulation results were promising, the real challenge lies in testing the system in real-world conditions. To demonstrate its practical potential, we developed a physical prototype using a 1/10-scale RC car as the foundation for an unmanned ground vehicle (UGV).

This prototype highlights both the simplicity of the hardware setup, transforming a standard RC car into an autonomous vehicle, and the flexibility and modularity of the software architecture, making it easily scalable for both ground and aerial platforms.

Key components of the prototype include a cost-effective stereo camera with an integrated IMU, a PX4 flight controller, power modules, and an embedded companion computer, all mounted on a custom 3D-printed platform.

With all components integrated, the prototype successfully completed its mission by autonomously following predefined waypoints, relying solely on the IMU-camera system for navigation. Importantly, the mission was conducted entirely without GPS modules, further validating the reliability of this minimal configuration in indoor environments.

Additionally, the onboard system generated a detailed 3D map of the surroundings, demonstrating its potential for 3D scanning applications in real-world scenarios.

Looking Ahead: What’s Next?

This project represents a significant step forward in GPS-denied navigation. By integrating advanced visual-based algorithms with cost-effective, minimalistic hardware, the resulting solution offers unmatched performance in handling complex environments. Whether it’s for 3D scanning, inspection, or exploration, our system opens up new possibilities for a wide range of applications.

This represents only the beginning. We’re excited to collaborate with future partners to customize our solution for unique use cases and further push the boundaries of what’s possible in GPS-denied drone navigation.

Visit the Tilak.io website for more information.

To learn more, contact Tilak.io.

Posted by Abi Wylie