Reinforcement Learning Allows Underwater Robots To Locate And Track Objects
Editor’s note: When we send robotic submersibles and rovers to other worlds to search for life in remote and hostile environments, we’re going to need our droids to be as smart and self-reliant as possible. This team is doing just that here on Earth right now as they study our planet’s oceanic depths.
______________
Underwater robotics is emerging as a key tool for improving our knowledge of the oceans in the face of the many difficulties of exploring them, with vehicles capable of descending to depths of up to 4,000 meters. The in-situ data these vehicles provide also complement other sources, such as satellite observations. This technology makes it possible to study small-scale phenomena, such as CO2 capture by marine organisms, which helps regulate the climate.
Specifically, this new work shows that reinforcement learning, widely used in control and robotics as well as in natural language processing tools such as ChatGPT, allows underwater robots to learn which actions to perform at any given moment to achieve a specific goal. The learned action policies match, and in certain circumstances outperform, traditional methods based on analytical development.
An agent was trained in a virtual environment that reproduces real conditions, such as ocean currents and noise in the distance measurements (A). During training, multiple parallel scenarios were used to speed up the process (B), and different actor-critic algorithms were studied (C). Finally, the learned policy was transferred to the real vehicle as a path-planning method within its guidance system (D). — Science Robotics
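The actor-critic training described in the caption can be illustrated with a toy example. The sketch below is a minimal advantage actor-critic on a hypothetical 1-D pursuit task, not the H-LSTM-SAC architecture used in the study: the actor (policy) and critic (value function) are plain linear models, and every task parameter here is an assumption made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS = 21           # discretised agent-target separation in [-10, 10]
ACTIONS = (-1, 0, 1)  # move left, stay, move right

def features(sep):
    """One-hot encoding of the (clipped, rounded) separation."""
    idx = int(np.clip(np.rint(sep), -10, 10)) + 10
    phi = np.zeros(N_BINS)
    phi[idx] = 1.0
    return phi

W = np.zeros((len(ACTIONS), N_BINS))  # actor: softmax policy logits
w = np.zeros(N_BINS)                  # critic: linear state value
alpha_pi, alpha_v, gamma = 0.1, 0.2, 0.95

def policy(phi):
    logits = W @ phi
    p = np.exp(logits - logits.max())
    return p / p.sum()

for episode in range(300):
    agent, target = 0.0, rng.uniform(-8, 8)
    for t in range(50):
        phi = features(target - agent)
        p = policy(phi)
        a = rng.choice(len(ACTIONS), p=p)       # sample action from actor
        agent += ACTIONS[a]
        target += rng.choice([-1, 0, 1])        # target drifts randomly
        reward = -abs(target - agent)           # closer is better
        phi2 = features(target - agent)
        td = reward + gamma * (w @ phi2) - (w @ phi)  # TD error ~ advantage
        w += alpha_v * td * phi                 # critic update
        grad = -p[:, None] * phi[None, :]       # d log pi(a) / d W
        grad[a] += phi
        W += alpha_pi * td * grad               # actor update
```

The critic's TD error serves as the advantage signal that scales the actor's policy-gradient step, which is the core idea shared by the actor-critic family studied in the paper.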
“This type of learning allows us to train a neural network to optimize a specific task, which would be very difficult to achieve otherwise. For example, we have been able to demonstrate that it is possible to optimize the trajectory of a vehicle to locate and track objects moving underwater”, explains Ivan Masmitjà, the lead author of the study, who has worked at both ICM-CSIC and MBARI.
This “will allow us to deepen the study of ecological phenomena such as migration, or the small- and large-scale movements of a multitude of marine species, using autonomous robots. In addition, these advances will make it possible to monitor other oceanographic instruments in real time through a network of robots, where some can be on the surface monitoring and transmitting by satellite the actions performed by other robotic platforms on the seabed”, points out the ICM-CSIC researcher Joan Navarro, who also participated in the study.
Sparus II AUV — IQUA
To carry out this work, researchers used range-only acoustic techniques, which estimate the position of an object from distance measurements taken at different points. As a result, the accuracy of the localization depends strongly on where those acoustic range measurements are taken. This is where artificial intelligence, and specifically reinforcement learning, becomes important: it identifies the best measurement points and, therefore, the optimal trajectory for the robot to follow.
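To sketch why measurement geometry matters in range-only localization, the hypothetical example below fits a 2-D position to distance measurements using a Gauss-Newton solver (a textbook technique, not the study's own estimators). With well-spread measurement points the fit recovers the target; with collinear points, ranges cannot distinguish the target from its mirror image across the line, so a poor starting guess converges to the wrong solution.

```python
import numpy as np

def locate(points, ranges, x0=(0.0, 0.0), iters=20):
    """Gauss-Newton fit of a 2-D position to range measurements."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - points                    # (n, 2) offsets to each point
        dist = np.linalg.norm(diff, axis=1)  # predicted ranges
        r = dist - ranges                    # residuals
        J = diff / dist[:, None]             # Jacobian of ranges w.r.t. x
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x += step
    return x

target = np.array([3.0, 4.0])

# Good geometry: measurement points surround the target.
good = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
est = locate(good, np.linalg.norm(good - target, axis=1), x0=(1.0, 1.0))

# Bad geometry: collinear points cannot distinguish (3, 4) from its
# mirror image (3, -4); starting below the line converges to the mirror.
line = np.array([[0, 0], [5, 0], [10, 0]], float)
mirror = locate(line, np.linalg.norm(line - target, axis=1), x0=(1.0, -1.0))
```

This geometric sensitivity is exactly what the learned policy exploits: choosing where the next range measurement is taken determines how well the target can be localized.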
Several real-world tests were conducted in Monterey Bay. These tests were carried out with a Wave Glider that used the H-LSTM-SAC algorithm to track an LRAUV (A) between 5 and 10 km off the coast of California (B). Two missions are represented in (C) and (D), where the blue points are the Wave Glider trajectory, the gray cross is the real LRAUV position, and the red and green points are the LRAUV positions estimated using the PF and DAT, respectively. — Science Robotics
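The PF baseline named in the caption is, presumably, a particle filter. A minimal bootstrap particle filter for range-only tracking can be sketched as follows; the motion model, noise levels, and circular survey path are all assumptions chosen for illustration, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000                             # number of particles
sigma_move, sigma_range = 0.3, 0.5   # assumed process / measurement noise

target = np.array([5.0, 5.0])                  # true (hidden) target
particles = rng.uniform(-10, 10, size=(N, 2))  # uniform prior over positions

estimates = []
for step in range(30):
    observer = 8 * np.array([np.cos(step / 3), np.sin(step / 3)])  # survey path
    target = target + rng.normal(0, sigma_move, 2)                 # true drift

    particles += rng.normal(0, sigma_move, particles.shape)        # predict
    z = np.linalg.norm(target - observer) + rng.normal(0, sigma_range)
    d = np.linalg.norm(particles - observer, axis=1)
    wgt = np.exp(-0.5 * ((d - z) / sigma_range) ** 2) + 1e-300     # likelihood
    wgt /= wgt.sum()

    idx = rng.choice(N, size=N, p=wgt)                             # resample
    particles = particles[idx]
    estimates.append(particles.mean(axis=0))    # posterior mean as estimate

err = np.linalg.norm(estimates[-1] - target)
```

Each noisy range measurement reweights the cloud of position hypotheses, and resampling concentrates the particles where the measurements agree; moving the observer around the target is what collapses the range ambiguity over time.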
Neural networks were trained, in part, using the computer cluster at the Barcelona Supercomputing Center (BSC-CNS), home to the most powerful supercomputer in Spain and one of the most powerful in Europe. “This made it possible to tune the parameters of the different algorithms much faster than with conventional computers”, indicates Prof. Mario Martin, from the Computer Science Department of the UPC and an author of the study.
Once trained, the algorithms were tested on different autonomous vehicles, including the AUV Sparus II developed by VICOROB, in a series of experimental missions conducted in the port of Sant Feliu de Guíxols, in the Baix Empordà, and in Monterey Bay (California), in collaboration with the principal investigator of the Bioinspiration Lab at MBARI, Kakani Katija.
“Our simulation environment incorporates the control architecture of real vehicles, which allowed us to implement the algorithms efficiently before going to sea”, explains Narcís Palomeras, from the UdG.
For future research, the team will study the possibility of applying the same algorithms to more complicated missions, such as using multiple vehicles to locate objects, detect fronts and thermoclines, or track algae upwellings cooperatively through multi-platform reinforcement learning techniques.
This research was made possible by the prestigious European Marie Curie Individual Fellowship awarded to the researcher Ivan Masmitjà in 2020 and by the BITER project, funded by the Spanish Ministry of Science and Innovation and currently underway.
Dynamic robotic tracking of underwater targets using reinforcement learning, Science Robotics (open access)