Research Topic
A relative positioning system for mobile robot collaboration based on multi-sensor fusion.
Topic points:
- How to achieve mutual recognition between robots and acquire their attitude information
- How to optimize pose-estimation errors in challenging environments
- How to complete collaborative exploration of an unknown environment through a relative positioning system
Introduction
Rapid advancements in single-robot technology, such as UAVs and UGVs, have generated demand for multi-robot systems that can work together in unknown environments. Relative pose estimation is a crucial component of such systems, as it enables robots to collaborate more efficiently. Current SLAM solutions require extensive computational resources and depend on environmental features [1], [2]. Additionally, artificial-tag methods are effective only at short range [3]. Cao [4] proposed combining VIO with UWB ranging to correct coordinate-system errors, yet this method is contingent on the initial environment. A new approach that combines a relative pose estimation system with an independent navigation system is therefore needed to achieve autonomous navigation.
Aims and Objectives
This research aims to develop a method that enables mobile robots to collaborate by exchanging information about their relative positions and orientations. The method should satisfy accuracy requirements in challenging conditions such as dim lighting, long distances, and large angles, keeping the control error within 0.16 m. The system is based on multi-sensor fusion of UWB and IMU modules, which provide pose and distance data. I plan to achieve this fusion with single-frame sensor data, allowing relative pose estimation while reducing sensor noise. The results can be applied to various cooperative tasks, such as landing on mobile car platforms, mapping unknown areas with multiple aircraft, sharing maps among multiple mobile robots, and coordinating aircraft with car formations to explore routes.
This research introduces a hybrid approach that combines a relative pose estimation system with an autonomous navigation system, reducing each robot's computational load by sharing the grid map constructed on a single computing platform. The method uses relative poses to distribute information efficiently, thus expediting the overall navigation process.
Research Methodology
My research focuses on relative pose estimation in multi-robot systems and proposes a novel relative pose estimation system comprising both hardware and algorithms. The hardware includes an IR fisheye camera, IR LEDs, an IMU, and a UWB module. The algorithm uses an error-state Kalman filter (ESKF) and single-frame sensor fusion to resist sensor noise. When three or more robots are present, a pose-graph optimization algorithm using single-frame data is further introduced to increase accuracy.
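As a simplified illustration of single-frame refinement with three or more robots, the sketch below performs gradient descent on pairwise UWB range residuals for three robots in 2-D. The range values, initial guess, and step size are all hypothetical, and a full pose-graph implementation would also include bearing and orientation residuals.

```python
import numpy as np

def range_error_grad(xy, ranges):
    """Gradient of the summed squared range residuals w.r.t. positions."""
    grad = np.zeros_like(xy)
    for (i, j), d_meas in ranges.items():
        diff = xy[i] - xy[j]
        dist = np.linalg.norm(diff)
        coeff = 2.0 * (dist - d_meas) / dist
        grad[i] += coeff * diff
        grad[j] -= coeff * diff
    return grad

def refine_positions(xy, ranges, lr=0.05, steps=500):
    """Simple gradient descent on the pairwise range residuals."""
    xy = xy.copy()
    for _ in range(steps):
        xy -= lr * range_error_grad(xy, ranges)
    return xy

# Hypothetical single-frame measurements: a 3-4-5 triangle of robots.
ranges = {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0}
guess = np.array([[0.0, 0.0], [3.1, 0.1], [-0.1, 3.9]])  # noisy layout
xy = refine_positions(guess, ranges)
```

After refinement, the inter-robot distances agree with the measured ranges; in practice a dedicated solver (e.g., Gauss-Newton) would replace plain gradient descent.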
I propose a system that allows multiple mobile robots to recognize each other's relative attitude and position in application scenarios. This system could be combined with autonomous navigation schemes such as EGO-Planner, using the relative pose information to share the map built on one computing platform with the other mobile robots. This would enable map-information sharing, helping multiple mobile robots achieve global path planning [5].
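To illustrate how relative pose information lets one robot reuse another's map, the following sketch transforms an obstacle point from robot B's map frame into robot A's frame using a 2-D (SE(2)) relative pose. The coordinates and frame names are hypothetical examples, not values from the proposed system.

```python
import numpy as np

def relative_pose_matrix(x, y, yaw):
    """Homogeneous SE(2) transform from robot B's frame to robot A's frame,
    given B's relative pose (x, y, yaw) expressed in A's frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

def transform_points(T_ab, points_b):
    """Map 2-D points from frame B into frame A."""
    pts = np.hstack([points_b, np.ones((len(points_b), 1))])
    return (T_ab @ pts.T).T[:, :2]

# Example: an obstacle at (1, 0) in robot B's map, with B located at
# (2, 1) and rotated 90 degrees in robot A's frame.
T_ab = relative_pose_matrix(2.0, 1.0, np.pi / 2)
obstacle_a = transform_points(T_ab, np.array([[1.0, 0.0]]))  # -> (2, 2)
```

Applying the same transform to every occupied cell of B's grid map would let robot A fold B's observations into its own planning frame.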
Theoretical Basis
Robot ID extraction requires several steps. First, the image is binarized with a threshold, and circle detection is performed to acquire the pixel coordinates of the detected spots' centers. Then, these spots are matched against previous ones according to a distance constraint, and their duty rates are calculated. Finally, IDs are determined by comparing the duty rates with the entries of an ID library. Additionally, the pixel coordinates of the spots' centers serve as a directional measurement for pose estimation.
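The duty-rate matching step might be sketched as follows; the frame history, ID library values, and tolerance here are hypothetical placeholders, not the actual system parameters.

```python
def duty_rate(on_off_history):
    """Fraction of recent frames in which the LED spot was detected 'on'."""
    return sum(on_off_history) / len(on_off_history)

def match_id(rate, id_library, tolerance=0.05):
    """Return the ID whose nominal duty rate is closest to the measured
    rate, or None if no entry lies within the tolerance."""
    best_id, best_err = None, tolerance
    for robot_id, nominal in id_library.items():
        err = abs(rate - nominal)
        if err <= best_err:
            best_id, best_err = robot_id, err
    return best_id

library = {1: 0.25, 2: 0.50, 3: 0.75}   # hypothetical ID -> duty rate
history = [1, 0, 1, 1, 0, 1, 1, 1]      # spot detected in 6 of 8 frames
robot = match_id(duty_rate(history), library)  # 6/8 = 0.75 -> ID 3
```

The tolerance guards against mis-association when image noise causes a spot to be missed in some frames.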
The ESKF algorithm helps provide UAVs with precise real-time positioning by eliminating errors. By dividing the system state into a nominal state and an error state, the ESKF can efficiently reduce the noise of an IMU's three-axis acceleration and angular-velocity measurements. In the prediction phase, the algorithm integrates the nominal state and propagates a Gaussian estimate of the error state. During the measurement update, it corrects the error state with observations and injects the estimated error into the nominal state. This ultimately yields a posterior Gaussian estimate of the error state and a more accurate positioning system.
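The predict/inject cycle can be illustrated with a deliberately simplified one-dimensional ESKF, where the nominal state is integrated from accelerometer readings and a position measurement (e.g., from UWB ranging) corrects the error state. All noise parameters are illustrative, and a real implementation would track full 3-D attitude and biases.

```python
import numpy as np

class Eskf1D:
    """Toy 1-D error-state Kalman filter: nominal [position, velocity]."""

    def __init__(self, accel_noise=0.1, meas_noise=0.05):
        self.nominal = np.zeros(2)           # [position, velocity]
        self.P = np.eye(2) * 0.01            # error-state covariance
        self.Q = np.eye(2) * accel_noise**2  # process noise
        self.R = meas_noise**2               # measurement noise

    def predict(self, accel, dt):
        # Integrate the nominal state with the raw IMU acceleration.
        p, v = self.nominal
        self.nominal = np.array([p + v * dt + 0.5 * accel * dt**2,
                                 v + accel * dt])
        # Propagate the error-state covariance (error mean stays zero).
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update(self, pos_meas):
        # Observe position; estimate the error state and inject it.
        H = np.array([[1.0, 0.0]])
        y = pos_meas - self.nominal[0]           # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T / S                     # Kalman gain, shape (2, 1)
        delta = (K * y).ravel()                  # estimated error state
        self.nominal += delta                    # injection into nominal
        self.P = (np.eye(2) - K @ H) @ self.P    # error-state reset
```

Keeping the error state small and near-linear is what lets the ESKF handle the strongly nonlinear attitude kinematics of the full 3-D problem.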
EGO-Planner is an ESDF-free, gradient-based local planner for quadrotors. It is lightweight because it avoids ESDF construction entirely, significantly reducing computation time compared with state-of-the-art methods [6]: the total planning time is only around 1 ms, since no ESDF needs to be computed.
Research projects
I will divide the research project into three stages. In the first stage, I need to complete the hardware design and the theoretical verification of the relative positioning module's algorithm. I will then verify the pose information exchanged between two handheld devices in an indoor environment; once the error falls within the required range, I will move on to the next stage.
In the second stage, I plan to move the tests outdoors, mount the test equipment on a UAV and a UGV, and use the position and attitude information to help them complete a specified action: the UAV landing autonomously on the UGV. This stage must account for the effects of sunlight and wind disturbance in the outdoor environment.
In the third stage, I will increase the number of experimental platforms and use swarm mobile-robot collaboration as the test scenario. Considering that more mobile robots will raise the communication and computing demands on the equipment, I plan to use four UGVs and one UAV. The experimental scene is a rectangular map with obstacles, in which the UAV explores and builds a dynamic local map. To support this task, the four UGVs use their onboard equipment to compute the relative coordinate transforms between themselves and the UAV. Through the relative positioning devices, the UGVs can locate themselves on the map, identify environmental information, and ultimately reach their destinations via optimal paths.
I plan to validate the proposed method in three separate stages of experiments: theoretical verification, two-robot testing, and multi-robot testing. Together, these experiments will demonstrate the method's feasibility in different complex environments, including indoor, outdoor, and fast-moving scenarios.
Significance
Robots can benefit from this research in several ways. For example, it could enable aircraft to land on mobile car platforms, support exploration of unknown environments by multiple aircraft formations, facilitate map sharing among multiple mobile robots, and allow aircraft to collaborate with car formations to explore routes. The research improves how well robots cooperate and perceive their tasks, making it possible for multiple robots to work together in future complex situations, and it supports tasks such as formation control, exploration, environment perception, navigation, and path planning. Finally, applying relative position information between devices to groups of cooperating mobile robots can help them complete tasks in future GPS-denied environments.
References
[1] Y. Huang, T. Shan, F. Chen and B. Englot, "DiSCo-SLAM: Distributed Scan Context-Enabled Multi-Robot LiDAR SLAM With Two-Stage Global-Local Graph Optimization," in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1150-1157, April 2022, doi: 10.1109/LRA.2021.3138156.
[2] P. -Y. Lajoie, B. Ramtoula, Y. Chang, L. Carlone and G. Beltrame, "DOOR-SLAM: Distributed, Online, and Outlier Resilient SLAM for Robotic Teams," in IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1656-1663, April 2020, doi: 10.1109/LRA.2020.2967681.
[3] J. Chen, Y. Gao and S. Li, "Real-time Apriltag Inertial Fusion Localization for Large Indoor Navigation," 2020 Chinese Automation Congress (CAC), Shanghai, China, 2020, pp. 6912-6916, doi: 10.1109/CAC51589.2020.9326501.
[4] Y. Cao and G. Beltrame, "VIR-SLAM: Visual, Inertial, and Ranging SLAM for Single and Multi-Robot Systems," Autonomous Robots, vol. 45, no. 6, pp. 905–917, 2021.
[5] Y. Gao, Y. Wang, X. Zhong, T. Yang, M. Wang, Z. Xu, Y. Wang, C. Xu, and F. Gao, "Meeting-merging-mission: A multi-robot coordinate framework for large-scale communication-limited exploration," 2021. [Online]. Available: https://arxiv.org/abs/2109.07764
[6] X. Zhou, Z. Wang, H. Ye, C. Xu, and F. Gao, "EGO-Planner: An ESDF-Free Gradient-Based Local Planner for Quadrotors," in IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 478-485, April 2021, doi: 10.1109/LRA.2020.3047728.