Mantis Shrimp: Exploring Photometric Band Utilization in Computer Vision Networks for Photometric Redshift Estimation


Detailed Bibliography
Published in: The Astrophysical Journal, Volume 991, Issue 2, pp. 124-149
Main authors: Engel, Andrew W.; Byler, Nell; Tsou, Adam; Narayan, Gautham; Bonilla, Emmanuel; Smith, Ian
Format: Journal Article
Language: English
Publication details: Philadelphia: The American Astronomical Society / IOP Publishing, October 1, 2025
ISSN: 0004-637X, 1538-4357
Description
Summary: We present Mantis Shrimp, a multisurvey deep learning model for photometric redshift estimation that fuses ultraviolet (Galaxy Evolution Explorer), optical (PanSTARRS), and infrared (UnWISE) imagery. Machine learning is now an established approach for photometric redshift estimation, with generally acknowledged higher performance than template-based methods in areas with a high density of spectroscopically identified galaxies. Multiple works have shown that image-based convolutional neural networks can outperform tabular color/magnitude models. In comparison to tabular models, image models have additional design complexities: it is largely unknown how to fuse inputs from different instruments that have different resolutions or noise properties. The Mantis Shrimp model estimates the conditional density of redshift using cutout images. The density estimates are well calibrated, and the point estimates perform well across the distribution of available spectroscopically confirmed galaxies, with a bias of 1e-2, a scatter (NMAD) of 2.44e-2, and a catastrophic outlier rate (η, |Δz| > 0.15) of 4.51%. We find that early-fusion approaches (e.g., resampling and stacking images from different instruments) match the performance of late-fusion approaches (e.g., concatenating latent-space representations), so the design choice is ultimately left to the user. Finally, we study how the model learns to use information across bands, finding evidence that it successfully incorporates information from all surveys. The applicability of our model to the analysis of large populations of galaxies is limited by the speed and ease of downloading and preparing cutouts from external servers; however, the model could be useful in smaller studies, such as generating priors over redshift for stellar population synthesis.
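The abstract quotes the standard photometric-redshift point-estimate metrics: bias, NMAD scatter, and the catastrophic outlier rate η for |Δz| > 0.15. As a minimal sketch, the snippet below computes these quantities under the usual photo-z conventions (residuals normalized by 1 + z_spec, NMAD = 1.4826 × median absolute deviation); the function name and the toy data are illustrative assumptions, and the paper's exact definitions (e.g., mean versus median bias) may differ in detail.

```python
# A minimal sketch (not from the paper) of the photo-z point-estimate metrics
# referenced in the abstract, using the conventional definitions.
import numpy as np

def photoz_point_metrics(z_phot, z_spec, outlier_threshold=0.15):
    """Return (bias, nmad, outlier_rate) for arrays of point estimates."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)

    # Normalized residuals dz = (z_phot - z_spec) / (1 + z_spec).
    dz = (z_phot - z_spec) / (1.0 + z_spec)

    # Bias: central tendency of the normalized residuals.
    bias = np.mean(dz)

    # NMAD: 1.4826 * median absolute deviation, a robust scatter estimate.
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))

    # Catastrophic outlier rate: fraction of objects with |dz| above threshold.
    outlier_rate = np.mean(np.abs(dz) > outlier_threshold)

    return bias, nmad, outlier_rate

# Purely illustrative toy data (not the paper's results).
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.05, 1.0, size=10_000)
z_phot = z_spec + 0.02 * (1 + z_spec) * rng.standard_normal(10_000)
print(photoz_point_metrics(z_phot, z_spec))
```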
Bibliography: AAS62012
Subject: Laboratory Astrophysics, Instrumentation, Software, and Data
DOI: 10.3847/1538-4357/adf5b8