
AI in Space for Scientific Missions: Strategies for Minimizing Neural-Network Model Upload

By Keith Cowing
Status Report
astro-ph.IM
June 22, 2024
One of the eyes of a smart Martian DROID named Perseverance — NASA

Artificial Intelligence (AI) has the potential to revolutionize space exploration by delegating several spacecraft decisions to an onboard AI instead of relying on ground control and predefined procedures.

It is likely that there will be an AI/ML Processing Unit onboard the spacecraft running an inference engine. The neural network will have pre-installed parameters that can be updated onboard by uploading, via telecommand, parameters obtained from training on the ground. However, satellite uplinks have limited bandwidth and transmissions can be costly.

Furthermore, a mission operating with a suboptimal neural network will miss out on valuable scientific data. Smaller networks can thereby decrease the uplink cost while increasing the value of the scientific data that is downlinked. In this work, we evaluate and discuss the use of reduced-precision and bare-minimum neural networks to reduce the upload time.

As an example of an AI use case, we focus on NASA’s Magnetospheric MultiScale (MMS) mission. We show how an AI onboard could be used in the Earth’s magnetosphere to classify data to selectively downlink higher-value data, or to recognize a region-of-interest to trigger a burst mode, collecting data at a high rate. Using a simple filtering scheme and algorithm, we show how the start and end of a region-of-interest can be detected in a stream of classifications.
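The paper does not spell out the filtering scheme in the abstract, but the idea of finding the start and end of a region-of-interest in a noisy stream of classifications can be sketched with a sliding-window majority filter. The window size and threshold below are illustrative assumptions, not the authors' actual parameters:

```python
from collections import deque

def roi_events(classifications, window=5, threshold=0.6):
    """Detect start/end of a region-of-interest (ROI) in a stream of
    binary classifications (1 = inside ROI, 0 = outside).

    A sliding-window majority vote suppresses isolated misclassifications;
    an event is emitted whenever the filtered state flips.
    """
    recent = deque(maxlen=window)
    inside = False
    events = []  # list of (sample index, "start" or "end")
    for i, label in enumerate(classifications):
        recent.append(label)
        frac = sum(recent) / len(recent)
        if not inside and frac >= threshold:
            inside = True
            events.append((i, "start"))
        elif inside and frac <= 1 - threshold:
            inside = False
            events.append((i, "end"))
    return events
```

On a stream with a few spurious labels, e.g. `[0,0,1,0,1,1,1,1,0,1,0,0,0,0]`, the filter ignores the isolated `1`s and `0`s and reports a single start/end pair, which is what would be used to trigger and release a burst mode onboard.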

To provide the classifications, we use an established Convolutional Neural Network (CNN) trained to an accuracy above 94%. We also show how the network can be reduced to a single linear layer and trained to the same accuracy as the established CNN, thereby reducing the overall size of the model by up to 98.9%.
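A single linear layer is tiny compared with a CNN: mapping N input features to C classes needs only N·C weights plus C biases, which is the source of the large size reduction. A minimal, framework-free sketch of inference for such a layer (the dimensions and values are illustrative, not the paper's trained model):

```python
import math

def linear_layer(x, weights, bias):
    """Single fully connected layer: y = W x + b.
    weights: one row per output class; bias: one value per class.
    Parameter count is len(x) * len(bias) + len(bias)."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def softmax(logits):
    """Turn logits into class probabilities (numerically stable form)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The onboard classifier would then just be `softmax(linear_layer(x, W, b))` per sample, which is cheap enough to run on a modest AI/ML processing unit and cheap to update, since only W and b need to be uplinked.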

We further show how each network can be reduced by up to 75% of its original size by using lower-precision formats to represent the network parameters, with a change in accuracy of less than 0.6 percentage points.
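A 75% reduction corresponds to storing each parameter in 8 bits instead of the usual 32-bit float. The abstract does not name the exact format, so the sketch below assumes simple symmetric int8 quantization, which is one common way to achieve that ratio:

```python
def quantize_int8(params):
    """Symmetric linear quantization of float parameters to int8.
    Returns (scale, packed bytes). One byte per parameter instead of
    four cuts the uplink volume by 75%."""
    scale = max(abs(p) for p in params) / 127.0 or 1.0
    packed = bytes(round(p / scale) & 0xFF for p in params)
    return scale, packed

def dequantize_int8(scale, packed):
    """Recover approximate float values from the int8 codes
    (interpreting each byte as two's-complement)."""
    return [scale * (b - 256 if b > 127 else b) for b in packed]
```

The round trip introduces a small error (bounded by half the scale step per parameter), which is consistent with the kind of sub-percentage-point accuracy change reported above, though the actual figure depends on the network and format used.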

Jonah Ekelund, Ricardo Vinuesa, Yuri Khotyaintsev, Pierre Henri, Gian Luca Delzanno, Stefano Markidis

Subjects: Artificial Intelligence (cs.AI); Instrumentation and Methods for Astrophysics (astro-ph.IM)
Cite as: arXiv:2406.14297 [cs.AI] (or arXiv:2406.14297v1 [cs.AI] for this version)
https://doi.org/10.48550/arXiv.2406.14297
Submission history
From: Jonah Ekelund
[v1] Thu, 20 Jun 2024 13:24:52 UTC (830 KB)
https://arxiv.org/abs/2406.14297
