Reinforcement Learning Based Control of Coherent Transport by Adiabatic Passage of Spin Qubits / R. Porotti, D. Tamascelli, M. Restelli, E. Prati. - In: JOURNAL OF PHYSICS. CONFERENCE SERIES. - ISSN 1742-6588. - 1275:(2019), pp. 012019.1-012019.9. (Paper presented at the 9th International Workshop on Decoherence, Information, Complexity and Entropy (DICE) - From Discrete Structures and Dynamics to Top-Down Causation, held in Castiglioncello, 2018) [10.1088/1742-6596/1275/1/012019].

Reinforcement Learning Based Control of Coherent Transport by Adiabatic Passage of Spin Qubits

D. Tamascelli; E. Prati
2019

Abstract

Several tasks involving the determination of the time evolution of a system of solid-state qubits require stochastic methods to identify the best sequence of gates and the interaction times among the qubits. The major success of deep learning in several scientific disciplines has suggested its application to quantum information as well. Thanks to its capability to identify the best strategy in problems involving a competition between short-term and long-term rewards, reinforcement learning (RL) has been successfully applied, for instance, to discover sequences of quantum gate operations that minimize information loss. To extend the application of RL to the transfer of quantum information, we focus on Coherent Transport by Adiabatic Passage (CTAP) on a chain of three semiconductor quantum dots (QDs). This task is usually performed by the so-called counter-intuitive sequence of gate pulses, which coherently transfers an electronic population from the first to the last site of an odd chain of QDs while leaving the central QD unpopulated. We apply a technique that finds a nearly optimal gate pulse sequence without explicitly giving the RL agent any prior knowledge of the underlying physical system. Using the advantage actor-critic algorithm, with a small neural network as function approximator, we trained an RL agent to choose the best action at every time step of the physical evolution, achieving the same results previously found only by ansatz solutions.
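The pulse scheme described in the abstract can be illustrated with a minimal numerical sketch. The Python snippet below is not the authors' code: it assumes the standard three-level CTAP Hamiltonian in the site basis, and all pulse amplitudes, widths and the total time are hypothetical values chosen only to make the transfer visible. It integrates the Schrödinger equation for one electron on a three-dot chain driven by counter-intuitive Gaussian pulses, showing the population moving from the first to the third dot while the central dot stays nearly empty.

import numpy as np
from scipy.integrate import solve_ivp

T = 100.0            # total protocol duration (arbitrary units, hypothetical)
omega_max = 2.0      # peak inter-dot tunnelling rate (hypothetical)
sigma = T / 8.0      # width of the Gaussian control pulses

def omega_12(t):
    # pulse coupling dots 1-2, applied AFTER the 2-3 pulse (counter-intuitive order)
    return omega_max * np.exp(-((t - 0.6 * T) ** 2) / (2.0 * sigma ** 2))

def omega_23(t):
    # pulse coupling dots 2-3, applied first
    return omega_max * np.exp(-((t - 0.4 * T) ** 2) / (2.0 * sigma ** 2))

def hamiltonian(t):
    # single-electron Hamiltonian in the site basis {|1>, |2>, |3>}, hbar = 1
    o12, o23 = omega_12(t), omega_23(t)
    return -0.5 * np.array([[0.0, o12, 0.0],
                            [o12, 0.0, o23],
                            [0.0, o23, 0.0]], dtype=complex)

def schroedinger(t, psi):
    # i d|psi>/dt = H(t) |psi>
    return -1j * hamiltonian(t) @ psi

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # electron initially on dot 1
sol = solve_ivp(schroedinger, (0.0, T), psi0, max_step=0.05)

populations = np.abs(sol.y) ** 2                  # |<i|psi(t)>|^2 for i = 1, 2, 3
print("final populations (dot 1, 2, 3):", populations[:, -1].round(3))
# For a sufficiently slow (adiabatic) pulse sequence the population ends up on
# dot 3 while dot 2 remains almost unpopulated throughout the transfer.

In the paper, the RL agent replaces this fixed Gaussian ansatz: at each time step it selects the pulse amplitudes, and the reward drives it towards the same adiabatic-passage behaviour without being told the Hamiltonian in advance.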
Settore FIS/03 - Physics of Matter
Settore INF/01 - Computer Science
2019
Article (author)
Files in this record:
File: Porotti_2019_J._Phys.__Conf._Ser._1275_012019.pdf
Access: open access
Type: Publisher's version/PDF
Size: 3.07 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/759884
Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science (ISI): 8