All‐Optical Autoencoder Machine Learning Framework Using Linear Diffractive Processors
| Published in: | Laser & Photonics Reviews, Volume 19, Issue 15 |
|---|---|
| Main authors: | |
| Format: | Journal Article |
| Language: | English |
| Publication details: | Weinheim: Wiley Subscription Services, Inc., 01.08.2025 |
| Subjects: | |
| ISSN: | 1863-8880, 1863-8899 |
| Online access: | Get full text |
| Abstract: | Diffractive deep neural network (D2NN), known for its high speed and strong parallelism, is applied across various fields, including pattern recognition, image processing, and image transmission. However, existing network architectures primarily focus on data representation within the original domain, with limited exploration of the latent space, thereby restricting the information mining capabilities and multifunctional integration of D2NNs. Here, an all‐optical autoencoder (OAE) framework is proposed that linearly encodes the input wavefield into a prior shape distribution in the diffractive latent space (DLS) and decodes the encoded pattern back to the original wavefield. By leveraging the bidirectional multiplexing property of D2NN, the OAE models function as encoders in one direction and as decoders in the opposite direction. The models are applied to three areas: image denoising, noise‐resistant reconfigurable image classification, and image generation. Proof‐of‐concept experiments are conducted to validate numerical simulations. The OAE framework exploits the potential of latent representations, enabling a single set of diffractive processors to simultaneously achieve image reconstruction, representation, and generation. This work not only offers fresh insights into the design of optical generative models but also paves the way for developing multifunctional, highly integrated, and general optical intelligent systems. |
| Summary: | This work introduces an all‐optical autoencoder (OAE) framework utilizing the bidirectional multiplexing property of diffractive deep neural networks (D2NN). The OAE encodes and decodes input wavefields in the diffractive latent space (DLS), enhancing information mining and multifunctional integration. Applied to image denoising, noise‐resistant classification, and image generation, the framework leverages bidirectional processing for efficient optical generative models and integrated systems. |
| Bibliography: | ObjectType-Article-1; SourceType-Scholarly Journals-1; ObjectType-Feature-2 |
| DOI: | 10.1002/lpor.202401945 |
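
For readers who want a concrete picture of the architecture the abstract describes, the sketch below is a minimal numerical illustration, not the authors' implementation: a stack of phase-only diffractive layers separated by angular-spectrum free-space propagation, where traversing the layers in one direction plays the encoder role (input plane → diffractive latent space) and traversing the same layers in the opposite direction plays the decoder role. The class name `DiffractiveAutoencoder`, the layer count, and all optical parameters are illustrative assumptions; the actual OAE additionally trains the phase masks so that the latent field follows a prescribed prior shape distribution, which this sketch omits.

```python
# Minimal sketch of a bidirectional linear diffractive processor (assumed
# parameters throughout): phase-only masks separated by free-space propagation.
# forward() = encoder direction, backward() = same optics traversed in reverse.

import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex field over distance z with the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    k = 2 * np.pi / wavelength
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = k * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

class DiffractiveAutoencoder:
    """Stack of phase-only masks with free-space gaps.
    forward(): input plane -> latent (DLS) plane.
    backward(): latent plane -> input plane through the same masks in reverse order."""

    def __init__(self, n_pixels=128, n_layers=4, wavelength=632.8e-9,
                 dx=8e-6, z=0.05, rng=None):
        rng = np.random.default_rng(rng)
        self.wavelength, self.dx, self.z = wavelength, dx, z
        # Trainable phase masks; here only randomly initialised (no training step).
        self.phases = [rng.uniform(0, 2 * np.pi, (n_pixels, n_pixels))
                       for _ in range(n_layers)]

    def forward(self, field):
        # Encoder direction: modulate by each mask, then propagate to the next plane.
        for phi in self.phases:
            field = field * np.exp(1j * phi)
            field = angular_spectrum_propagate(field, self.wavelength, self.dx, self.z)
        return field  # wavefield at the latent plane

    def backward(self, latent_field):
        # Decoder direction: counter-propagate (-z) and apply masks in reverse order.
        field = latent_field
        for phi in reversed(self.phases):
            field = angular_spectrum_propagate(field, self.wavelength, self.dx, -self.z)
            field = field * np.exp(1j * phi)
        return field

# Usage: encode a toy amplitude image, then run the decoder direction.
if __name__ == "__main__":
    oae = DiffractiveAutoencoder(rng=0)
    img = np.zeros((128, 128)); img[48:80, 48:80] = 1.0   # toy input amplitude
    latent = oae.forward(img.astype(complex))              # encoded wavefield
    recon = oae.backward(latent)                           # decoded wavefield
    print("latent intensity shape:", np.abs(latent).shape)
    print("reconstruction energy:", float(np.sum(np.abs(recon) ** 2)))
```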