Hierarchical quantum circuit representations for neural architecture search

Authors: Matt Lourens, Ilya Sinayskiy, Daniel K. Park, Carsten Blank, Francesco Petruccione

arXiv: 2210.15073v3 [quant-ph]
22 pages, 13 figures

Abstract: Machine learning with hierarchical quantum circuits, usually referred to as Quantum Convolutional Neural Networks (QCNNs), is a promising prospect for near-term quantum computing. The QCNN is a circuit model inspired by the architecture of Convolutional Neural Networks (CNNs). CNNs are successful because they do not need manual feature design and can learn high-level features from raw data. Neural Architecture Search (NAS) builds on this success by learning the network architecture itself, achieving state-of-the-art performance. However, applying NAS to QCNNs presents unique challenges due to the lack of a well-defined search space. In this work, we propose a novel framework for representing QCNN architectures using techniques from NAS, which enables search space design and architecture search. Using this framework, we generate a family of popular QCNNs, those resembling reverse binary trees. We then evaluate this family of models on a music genre classification dataset, GTZAN, to justify the importance of circuit architecture to model performance. Furthermore, we employ a genetic algorithm to perform Quantum Phase Recognition (QPR) as an example of architecture search with our representation. This work provides a way to improve model performance without increasing complexity and to make discrete jumps in the cost landscape, which can help avoid barren plateaus. Finally, we implement the framework as an open-source Python package to enable dynamic QCNN creation and facilitate QCNN search space design for NAS.
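The reverse-binary-tree family mentioned in the abstract can be pictured as a circuit whose pooling pattern halves the active qubit register at every level until one qubit remains. The sketch below is an illustrative assumption, not the authors' package API: the qubit count, pairing rule, and function name are chosen for illustration, and it only generates the qubit pairings for the convolution and pooling steps of such an architecture.

```python
# Hedged sketch (not the paper's package): enumerate the layer structure of a
# QCNN whose pooling pattern forms a reverse binary tree. All names and the
# specific pairing rule are illustrative assumptions.

def reverse_binary_tree_qcnn(n_qubits):
    """Return, per layer, the qubit pairs for convolution and pooling.

    At each layer, neighbouring active qubits are entangled pairwise
    (convolution), then half of them are pooled into their neighbours,
    so the active register halves until a single qubit remains.
    """
    layers = []
    active = list(range(n_qubits))
    while len(active) > 1:
        # Convolution: entangle neighbouring active qubits in a ring.
        conv = [(active[i], active[(i + 1) % len(active)])
                for i in range(len(active))]
        # Pooling: each odd-indexed active qubit is pooled into its
        # even-indexed neighbour (controlled operation, then discard).
        pool = [(active[i + 1], active[i])
                for i in range(0, len(active) - 1, 2)]
        layers.append({"conv": conv, "pool": pool})
        # Only the even-indexed qubits stay active at the next, coarser layer.
        active = active[::2]
    return layers

if __name__ == "__main__":
    # For 8 qubits this yields 3 layers, with 8 -> 4 -> 2 -> 1 active qubits.
    for depth, layer in enumerate(reverse_binary_tree_qcnn(8)):
        print(f"layer {depth}: conv={layer['conv']} pool={layer['pool']}")
```

Varying the pairing rule or pooling direction in a sketch like this is one way to see how a search space of architectures arises: each choice yields a different circuit of comparable size, which an architecture search (e.g. the genetic algorithm mentioned in the abstract) can then explore.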

Submitted to arXiv on 26 Oct. 2022
