Out-of-distribution generalization for learning quantum dynamics

Bibliographic Details
Published in: Nature Communications, Vol. 14, No. 1, Article 3751 (9 pages)
Main Authors: Caro, Matthias C., Huang, Hsin-Yuan, Ezzell, Nicholas, Gibbs, Joe, Sornborger, Andrew T., Cincio, Lukasz, Coles, Patrick J., Holmes, Zoë
Format: Journal Article
Language:English
Published: London: Nature Publishing Group UK, 05.07.2023
ISSN: 2041-1723
Description
Summary: Generalization bounds are a critical tool to assess the training data requirements of Quantum Machine Learning (QML). Recent work has established guarantees for in-distribution generalization of quantum neural networks (QNNs), where training and testing data are drawn from the same data distribution. However, there are currently no results on out-of-distribution generalization in QML, where a trained model is required to perform well even on data drawn from a distribution different from the training distribution. Here, we prove out-of-distribution generalization for the task of learning an unknown unitary. In particular, we show that one can learn the action of a unitary on entangled states having trained only on product states. Since product states can be prepared using only single-qubit gates, this advances the prospects of learning quantum dynamics on near-term quantum hardware, and further opens up new methods for both the classical and quantum compilation of quantum circuits.

Generalization, that is, the ability to extrapolate from training data to unseen data, is fundamental in machine learning, and thus also for quantum ML. Here, the authors show that QML algorithms trained on one distribution can generalize and perform well on data drawn from different distributions.
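The summary describes a concrete learning task: fit a unitary to an unknown target using only product-state training data, then apply it to entangled inputs. The following is a minimal classical-simulation sketch of that setup, not the paper's algorithm or code; the generator-based ansatz, the BFGS optimizer, and all names here are illustrative assumptions.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 4  # two qubits

def haar_state(dim):
    # Haar-random pure state of dimension dim.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def haar_unitary(dim):
    # Haar-random unitary via QR decomposition of a Ginibre matrix.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

V = haar_unitary(d)  # unknown target unitary

# Training set: product states |a>|b> only (single-qubit preparations).
train = [np.kron(haar_state(2), haar_state(2)) for _ in range(20)]
# Test set: Haar-random 2-qubit states, generically entangled.
test = [haar_state(d) for _ in range(20)]

def u_of(theta):
    # Ansatz U(theta) = exp(-i H) with H a general Hermitian matrix
    # built from d^2 = 16 real parameters.
    diag = np.diag(theta[:d])
    off = np.zeros((d, d), dtype=complex)
    iu = np.triu_indices(d, k=1)
    n = len(iu[0])
    off[iu] = theta[d:d + n] + 1j * theta[d + n:]
    H = diag + off + off.conj().T
    return expm(-1j * H)

def infidelity(theta, states):
    # 1 - |<psi| U(theta)^dag V |psi>|^2, averaged over the given states.
    U = u_of(theta)
    return np.mean([1 - abs(s.conj() @ U.conj().T @ V @ s) ** 2 for s in states])

res = minimize(lambda t: infidelity(t, train), rng.normal(size=d * d),
               method="BFGS")
print("train infidelity (product states): ", infidelity(res.x, train))
print("test  infidelity (entangled states):", infidelity(res.x, test))

Because the infidelity cost is invariant under a global phase of U, the fit recovers the target only up to phase. The point of the sketch is that near-zero training infidelity on product states generically carries over to the entangled test set, mirroring the out-of-distribution guarantee the paper proves.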
Bibliography:
89233218CNA000001
USDOE Laboratory Directed Research and Development (LDRD) Program
USDOE National Nuclear Security Administration (NNSA)
LA-UR-23-26072; LA-UR-22-23623
ISSN: 2041-1723
DOI: 10.1038/s41467-023-39381-w