Mapping “Brain Coral” Regions On Mars Using Deep Learning

By Keith Cowing
Status Report
astro-ph.EP
November 22, 2023
An overview of the inputs for each step in our processing pipeline. a) A region of HiRISE image ESP_018707_2205 shown at the native resolution (0.3 m/px), with a blue box highlighting the window size for our classifier network. b) A 256 × 256 pixel window is used as input for our spatial classifier algorithm; however, the window is at 1/16 the original resolution (1.2 m/px). A grid of 8×8 squares shows how that image gets tiled and preprocessed using parts of a JPEG encoder, which involves the discrete cosine transform (d). c) A single 8×8 tile flattened into a 1-D array, which is used for the DCT (d). e) A block of Fourier coefficients is rearranged into a data cube and used as input for our Fourier classifier. Reducing the image size, and ultimately the channel size after the first convolutional layer, significantly shortens the network’s processing time compared to the spatial image input. — astro-ph.EP
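The tiling and transform steps in the caption can be sketched in a few lines: split a window into 8×8 blocks, apply a 2-D DCT to each block (the same transform a JPEG encoder uses), and stack the 64 coefficients per block as channels of a data cube. This is a minimal illustration using SciPy, not the authors' implementation; the function name `dct_datacube` is our own.

```python
import numpy as np
from scipy.fft import dctn

def dct_datacube(window):
    """Tile a 2-D image window into 8x8 blocks, DCT each block, and
    stack the 64 coefficients of each block as channels (a data cube)."""
    h, w = window.shape
    assert h % 8 == 0 and w % 8 == 0
    # (h, w) -> (h//8, 8, w//8, 8) -> (h//8, w//8, 8, 8)
    tiles = window.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    # orthonormal 2-D DCT applied independently to every 8x8 tile
    coeffs = dctn(tiles, axes=(-2, -1), norm="ortho")
    # flatten each 8x8 coefficient block into 64 channels
    return coeffs.reshape(h // 8, w // 8, 64)

window = np.random.rand(256, 256)   # one classifier window
cube = dct_datacube(window)
print(cube.shape)                   # (32, 32, 64)
```

A 256×256 window thus becomes a 32×32×64 cube, so a network consuming it sees a much smaller spatial extent with the high-frequency content pushed into channels.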

One of the main objectives of the Mars Exploration Program is to search for evidence of past or current life on the planet. To achieve this, Mars exploration has been focusing on regions that may have liquid or frozen water.

A set of critical areas may have seen cycles of ice thawing in the relatively recent past in response to periodic changes in the obliquity of Mars. In this work, we use convolutional neural networks to detect surface regions containing “Brain Coral” terrain, a landform on Mars whose similarity in morphology and scale to sorted stone circles on Earth suggests that it may have formed as a consequence of freeze/thaw cycles.

We use large images (~100-1000 megapixels) from the Mars Reconnaissance Orbiter to search for these landforms at resolutions close to a few tens of centimeters per pixel (~25–50 cm). Over 52,000 images (~28 TB) were searched (~5% of the Martian surface), and we found detections in over 200 images. To expedite the processing, we use a classifier network (prior to segmentation) in the Fourier domain that takes advantage of JPEG compression by leveraging blocks of coefficients from a discrete cosine transform in lieu of decoding the entire image at the full spatial resolution.
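The classify-then-segment gating described above can be sketched as follows. This is a hypothetical outline, assuming stand-in `classify` and `segment` callables rather than the authors' networks: the cheap Fourier-domain classifier screens every window, and the expensive full-resolution segmentation runs only where the classifier score clears a threshold.

```python
# Minimal sketch of hybrid classify-then-segment gating.
# classify() and segment() are hypothetical stand-ins, not the paper's code.
def process_image(windows, classify, segment, threshold=0.5):
    """Screen each window with a cheap classifier; run the expensive
    segmentation only on windows whose score clears the threshold."""
    masks = {}
    for idx, window in enumerate(windows):
        score = classify(window)          # fast Fourier-domain screen
        if score >= threshold:
            masks[idx] = segment(window)  # slow full-resolution pass
    return masks

# Dummy stand-ins: scores are just the window values themselves.
result = process_image([0.2, 0.9],
                       classify=lambda w: w,
                       segment=lambda w: "mask")
print(result)  # {1: 'mask'}
```

Because most windows contain no Brain Coral, the segmentation network runs on only a small fraction of the data, which is where the reported time savings come from.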

The hybrid pipeline approach maintains ~93% accuracy while cutting ~95% off the total processing time compared to running the segmentation network at full resolution on every image. The timely processing of big data sets helps inform mission operations and geologic surveys: prioritizing candidate landing sites, avoiding hazardous areas, or mapping the spatial extent of certain terrain. The segmentation masks and source code are available on GitHub for the community to explore and build upon.

Kyle A. Pearson, Eldar Noe, Daniel Zhao, Alphan Altinok, Alex Morgan

Comments: Submitted for publication, seeking comments from the community. Code available: this https URL
Subjects: Earth and Planetary Astrophysics (astro-ph.EP); Instrumentation and Methods for Astrophysics (astro-ph.IM); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
Cite as: arXiv:2311.12292 [astro-ph.EP] (or arXiv:2311.12292v1 [astro-ph.EP] for this version)
Submission history
From: Kyle Pearson
[v1] Tue, 21 Nov 2023 02:24:52 UTC (13,166 KB)
https://arxiv.org/abs/2311.12292
Astrobiology

Explorers Club Fellow, ex-NASA Space Station Payload manager/space biologist, Away Teams, Journalist, Lapsed climber, Synaesthete, Na’Vi-Jedi-Freman-Buddhist-mix, ASL, Devon Island and Everest Base Camp veteran, (he/him) 🖖🏻