CryoMAE: Few-Shot Cryo-EM Particle Picking with Masked Autoencoders

Detailed bibliography
Published in: Proceedings / IEEE Workshop on Applications of Computer Vision, pp. 3876–3885
Main authors: Xu, Chentianye; Zhan, Xueying; Xu, Min
Format: Conference paper
Language: English
Published: IEEE, 26 Feb 2025
ISSN:2642-9381
Description
Summary: Cryo-electron microscopy (cryo-EM) has emerged as a pivotal technology for determining the architecture of cells, viruses, and protein assemblies at near-atomic resolution. Traditional particle picking, a key step in cryo-EM, struggles with manual effort and with automated methods' sensitivity to low signal-to-noise ratio (SNR) and varied particle orientations. Furthermore, existing neural network (NN)-based approaches often require extensive labeled datasets, limiting their practicality. To overcome these obstacles, we introduce cryoMAE, a novel approach based on few-shot learning that harnesses the capabilities of Masked Autoencoders (MAE) to enable efficient selection of single particles in cryo-EM images. Unlike conventional NN-based techniques, cryoMAE requires only a minimal set of positive particle images for training, yet demonstrates high performance in particle detection. In addition, the implementation of a self-cross similarity loss ensures distinct features for particle and background regions, thereby enhancing the discrimination capability of cryoMAE. Experiments on large-scale cryo-EM datasets show that cryoMAE outperforms existing state-of-the-art (SOTA) methods, improving 3D reconstruction resolution by up to 22.4%. Our code is available at: https://github.com/xulabs/aitom.
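The self-cross similarity loss mentioned in the summary can be read as pulling particle embeddings together while pushing them away from background embeddings. The sketch below is one illustrative reading under that assumption, not the authors' implementation; the function name and the cosine-similarity formulation are hypothetical:

```python
import numpy as np

def self_cross_similarity_loss(particle_feats, background_feats):
    """Illustrative sketch (hypothetical, not the paper's exact loss):
    maximize pairwise similarity among particle features ("self"),
    minimize similarity between particle and background features ("cross")."""
    # Normalize each feature vector to unit length so dot products are cosines.
    p = particle_feats / np.linalg.norm(particle_feats, axis=1, keepdims=True)
    b = background_feats / np.linalg.norm(background_feats, axis=1, keepdims=True)
    self_sim = p @ p.T    # (Np, Np) cosine similarities among particles
    cross_sim = p @ b.T   # (Np, Nb) particle-vs-background similarities
    # Lower is better: low cross-similarity, high self-similarity.
    return cross_sim.mean() - self_sim.mean()
```

With well-separated features the loss is negative (self-similarity dominates), and it approaches zero when particle and background features become indistinguishable.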
DOI:10.1109/WACV61041.2025.00381