Vision-Enhanced RTK-GNSS Positioning System for Autonomous Vehicles

The Vision-RTK 2 combines high-precision RTK-GNSS with deeply fused inertial and visual sensors. By Mike Ball / 18 May 2022

Fixposition, a leading developer of autonomous guidance sensors with high-precision positioning, has partnered with Unmanned Systems Technology (“UST”) to demonstrate their expertise in this field. The ‘Gold’ profile highlights how the company’s technology, which combines high-precision RTK-GNSS with deeply fused inertial and visual sensors, can be used to provide precise global positioning for autonomous vehicles anytime and anywhere.

The Vision-RTK 2 is a lightweight, compact off-the-shelf system that can be easily integrated into a wide range of autonomous vehicles and platforms.

Featuring industry-standard connectors, it provides plug-and-play autonomy for logistics, landscaping, urban delivery, lawn mowers and more, allowing you to simplify development and reduce time-to-market. The solution is available in a weatherproof enclosure or as an OEM board, and includes an intuitive web-based interface for setup and monitoring, with a dashboard that provides visualization of your data.

The Vision-RTK 2 system feeds all available sensor data into Fixposition’s deep sensor fusion engine, combining the best of GNSS and relative positioning to overcome weaknesses in individual sensors and remove the time-dependent drift characteristics present in IMU-based solutions. The result is robust and precise positioning even in GNSS-degraded or denied areas.
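As a rough illustration of why blending absolute and relative measurements removes drift, the sketch below uses a hypothetical complementary filter (not Fixposition's actual fusion engine): a dead-reckoned position accumulates error over time, while intermittent absolute fixes are drift-free, so pulling the estimate toward each fix keeps the error bounded.

```python
# Hypothetical complementary-fusion sketch, not Fixposition's engine:
# dead reckoning (IMU-style) drifts, absolute fixes (GNSS-style) do not.

def fuse(dead_reckoned, absolute_fix, gain=0.5):
    """Pull the drifting estimate toward the absolute fix.

    gain: 0 = trust dead reckoning only, 1 = snap to the fix.
    """
    return dead_reckoned + gain * (absolute_fix - dead_reckoned)

# Simulate a vehicle moving 1 m per step; dead reckoning drifts +0.05 m/step.
truth = 0.0
est = 0.0
for t in range(100):
    truth += 1.0
    est += 1.0 + 0.05          # integrated motion plus constant drift
    if t % 10 == 0:            # absolute fix available every 10th step
        est = fuse(est, truth)

drift_unbounded = 100 * 0.05   # 5 m error if we never corrected
fused_error = abs(est - truth) # stays well under a meter here
```

The gain and fix rate are arbitrary toy values; the point is only that the fused error stays bounded while uncorrected dead reckoning grows without limit.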

Two dual-band receivers use satellite signals from all four major GNSS constellations (GPS, GLONASS, BeiDou, and Galileo) to determine the sensor’s absolute position and orientation. RTK technology is used to correct errors and achieve centimeter-level positioning accuracy. NTRIP is used to deliver the correction data to the sensor; this data can be obtained from publicly available Virtual Reference Station (VRS) networks or from a local physical base station.
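To show what the NTRIP side of this looks like, the sketch below builds a standard NTRIP v1 request for a caster mountpoint. The mountpoint name and credentials are placeholders for whatever VRS network or base station you use; after sending this request over a TCP socket, the caster replies with an "ICY 200 OK" line and then streams RTCM correction messages.

```python
import base64

def build_ntrip_request(mountpoint, username, password):
    """Build an NTRIP v1 request for a caster mountpoint.

    Mountpoint and credentials are placeholders; a real client sends
    this over TCP and then forwards the RTCM stream to the receiver.
    """
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        "User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    )

request = build_ntrip_request("VRS_MOUNT", "user", "secret")
```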

Camera images are used to extract significant points (visual features) that are tracked across multiple frames. Subsequent observations of these features allow the system to compute how the camera moved between the captured images.
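A toy example of the idea, reduced to pure 2D translation (not the actual visual-odometry pipeline): when the camera translates, static scene features shift in the image by the opposite amount, so averaging the tracked feature displacements between two frames recovers the camera's in-plane motion.

```python
# Toy illustration: estimate camera translation from tracked features,
# assuming pure 2D translation and a static scene.

def estimate_translation(features_prev, features_curr):
    """Mean feature displacement, negated, gives the camera motion."""
    n = len(features_prev)
    dx = sum(c[0] - p[0] for p, c in zip(features_prev, features_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(features_prev, features_curr)) / n
    return (-dx, -dy)   # camera moves opposite to the feature flow

prev = [(10.0, 20.0), (40.0, 25.0), (70.0, 60.0)]
# Camera moved +2 px right and +1 px up, so features shift by (-2, -1):
curr = [(x - 2.0, y - 1.0) for x, y in prev]
motion = estimate_translation(prev, curr)   # -> (2.0, 1.0)
```

A real pipeline would detect and match features robustly, handle rotation and scale, and reject outliers, but the core inference step is the same: feature motion across frames constrains camera motion.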

Vision-RTK 2 is ideal for a wide range of robotics applications, including delivery, precision agriculture, and landscaping. Fixposition can work with you to determine the specifics of your hardware and software platforms and the unique requirements of your application, fine-tune your design to optimize performance, and continue to support you during the production phase.

To find out more about Fixposition and their precise positioning solutions for autonomous vehicles, please visit their profile page:

Posted by Mike Ball. Mike Ball is our resident technical editor here at Unmanned Systems Technology. Combining his passion for teaching, advanced engineering and all things unmanned, Mike keeps a watchful eye over everything related to the unmanned technical sector. With over 10 years’ experience in the unmanned field and a degree in engineering, Mike has been heading up our technical team for the last 8 years.