Have an existing GitHub repository or Jupyter notebook showing off quantum machine learning with PennyLane? Read the guidelines and submission instructions here, and have your demonstration and research featured on our community page.

Fraud Detection

Sophie Choe


This is a binary classification hybrid model as proposed in the paper "Continuous-variable quantum neural networks", composed of two classical feed-forward layers and four quantum neural network layers. Using the PennyLane TensorFlow plugin, the whole network is wrapped as a Keras Sequential model, whose parameters are updated via a built-in Keras loss function and optimizer.
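
A minimal qubit-based sketch of the wrapping step (the paper itself uses continuous-variable circuits; the device and layer sizes here are assumptions):

```python
# Minimal qubit-based sketch: a QNode wrapped as a Keras layer inside a
# Sequential model. Device and layer sizes are illustrative assumptions.
import pennylane as qml
import tensorflow as tf

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (4, n_qubits, 3)}  # four variational layers

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(8, activation="elu"),          # classical layer 1
    tf.keras.layers.Dense(n_qubits, activation="elu"),   # classical layer 2
    qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # binary decision
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```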

Quantum-Classical MNIST Classification Model

Sophie Choe


Keras-PennyLane hybrid model for MNIST classification, inspired by the "Supervised learning with hybrid networks" section of the paper "Continuous-variable quantum neural networks".

Hybrid quantum-classical autoencoder

Sophie Choe


Keras-PennyLane implementation of the hybrid quantum-classical autoencoder proposed in the paper "Continuous-variable quantum neural networks". The loss function used here is the mean squared error, unlike the paper's, which requires state-vector retrieval.

Quantum circuit learning to compute option prices and their sensitivities

Takayuki Sakuma


Quantum circuit learning is applied to computing option prices and their sensitivities. The advantage of this method is that a suitable choice of quantum circuit architecture makes it possible to compute the sensitivities analytically by applying parameter-shift rules.
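
A hedged sketch with an illustrative one-qubit circuit: once a circuit approximates the option price as a function of the encoded spot, the parameter-shift rule gives the sensitivity analytically from shifted circuit evaluations.

```python
# Sketch: f(x, theta) approximates the option price as a function of the
# spot x; the sensitivity (e.g. delta) then follows from the parameter-shift
# rule. The circuit shape is an assumption.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift")
def price_model(x, theta):
    qml.RY(x, wires=0)      # encode the (rescaled) spot price
    qml.RY(theta, wires=0)  # trained variational parameter
    return qml.expval(qml.PauliZ(0))

x = np.array(0.5, requires_grad=True)
theta = np.array(0.1, requires_grad=False)

# exact derivative from two shifted circuit evaluations, no finite differences
delta = qml.grad(price_model, argnum=0)(x, theta)
```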

Subspace Search Variational Quantum Eigensolver

Shah Ishman Mohtashim, Turbasu Chatterjee, Arnav Das


The variational quantum eigensolver (VQE) is an algorithm for finding the ground state of a quantum system. The subspace-search VQE (SSVQE) uses a simple technique to find the excited energy states as well: a single circuit maps |0⋯0⟩ to the ground state, an orthogonal basis state |0⋯1⟩ to the first excited state, and so on. As a demonstration, the weighted SSVQE is used to find the excited states of a transverse-field Ising model with 4 spins and those of the hydrogen molecule.
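
A minimal sketch of the weighted SSVQE cost, with an assumed Hamiltonian, ansatz, and weights: since a unitary preserves orthogonality, weighting the energies of orthogonal input states orders the circuit's outputs by energy level.

```python
# Weighted SSVQE cost sketch; Hamiltonian, ansatz, and weights are
# illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)
H = qml.Hamiltonian([1.0, 0.5], [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0)])

@qml.qnode(dev)
def energy(params, basis_state):
    qml.BasisState(np.array(basis_state), wires=range(n_wires))  # |00>, |01>, ...
    qml.StronglyEntanglingLayers(params, wires=range(n_wires))   # shared unitary
    return qml.expval(H)

def ssvqe_cost(params, weights=(3, 2, 1)):
    # largest weight on the ground state, decreasing for excited states
    states = [[0, 0], [0, 1], [1, 0]]
    return sum(w * energy(params, s) for w, s in zip(weights, states))

params = np.random.uniform(size=(2, n_wires, 3), requires_grad=True)
opt = qml.GradientDescentOptimizer(0.1)
params = opt.step(ssvqe_cost, params)
```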

Quantum PPO/TRPO - LSTMs and memory proximal policy optimization for black-box quantum control

Abhilash Majumder


Reinforcement learning for quantum control leverages hybrid quantum circuits (QHCs) to optimize the policy networks of deep RL. Policy-gradient-based reinforcement learning (RL) algorithms are well suited for optimizing the variational parameters of QAOA in a noise-robust fashion, opening the way for developing RL techniques for continuous quantum control. This helps mitigate and monitor the potentially unknown sources of error in modern quantum simulators. This demo provides an implementation of the PPO on-policy algorithm with QHCs for continuous control.

EVA (Exponential Value Approximation) algorithm

Guillermo Alonso-Linaje


VQE is currently one of the most widely used algorithms for solving optimization problems on quantum computers. A necessary step in this algorithm is calculating the expectation value of the Hamiltonian in a given state, usually done by decomposing the Hamiltonian into Pauli operators and estimating the value of each term separately. In this work, we have designed an algorithm capable of estimating this value using a single circuit. A time-cost study has been carried out, and it has been found that for certain more complex Hamiltonians the method can outperform current approaches.
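
For context, a sketch of the standard term-by-term estimation that EVA replaces with a single circuit (the two-qubit Hamiltonian and ansatz are illustrative assumptions, not the EVA construction itself):

```python
# Standard Pauli-decomposition baseline: one estimation per Pauli term,
# which EVA collapses into a single circuit evaluation.
import pennylane as qml

dev = qml.device("default.qubit", wires=2, shots=1000)
coeffs = [0.5, -0.3, 0.8]
paulis = [qml.PauliZ(0), qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(1)]

def term_expval(op, theta):
    @qml.qnode(dev)
    def circuit():
        qml.RY(theta, wires=0)  # state-preparation ansatz
        qml.CNOT(wires=[0, 1])
        return qml.expval(op)
    return circuit()

# <H> = sum_i c_i <P_i>
energy = sum(c * term_expval(p, 0.4) for c, p in zip(coeffs, paulis))
```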

Meta-Variational Quantum Eigensolver

Nahum Sá


In this tutorial, I follow the Meta-VQE paper. The Meta-VQE algorithm is a variational quantum algorithm suited for NISQ devices that encodes the parameters of a Hamiltonian into a variational ansatz. We can obtain good estimates of the ground state of the Hamiltonian by changing only those encoded parameters.
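
A sketch of the encoding idea, with an assumed transverse-field-type Hamiltonian and ansatz: the variational angles are simple functions of the Hamiltonian parameter, so training over a few parameter values lets the same weights generalize to new ones.

```python
# Meta-VQE encoding sketch; Hamiltonian and ansatz are assumptions.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

def hamiltonian(lam):
    # transverse-field-type Hamiltonian parametrized by lam
    return qml.Hamiltonian(
        [1.0, lam, lam],
        [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0), qml.PauliX(1)],
    )

@qml.qnode(dev)
def meta_energy(weights, lam):
    for w in range(n_wires):
        # encoding layer: angle depends linearly on the Hamiltonian parameter
        qml.RY(weights[w, 0] * lam + weights[w, 1], wires=w)
    qml.CNOT(wires=[0, 1])
    return qml.expval(hamiltonian(lam))

weights = np.random.uniform(size=(n_wires, 2), requires_grad=True)
# train on a few lam values; unseen lam then needs no re-optimization
cost = lambda w: sum(meta_energy(w, lam) for lam in [0.5, 1.0, 1.5])
```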

Feature maps for kernel-based quantum classifiers

Semyon Sinchenko


In this tutorial we implement a few examples of feature maps for kernel-based quantum machine learning. We'll see how quantum feature maps can make linearly inseparable data separable after applying a kernel and measuring an observable. We follow an article and implement all of its kernel functions with PennyLane.
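
One common construction, sketched here with an assumed embedding rather than the article's exact maps: embed x1, apply the adjoint embedding of x2, and read the probability of returning to the all-zero state as the kernel value, which a classical SVM can then consume.

```python
# Quantum kernel sketch: k(x1, x2) = |<phi(x2)|phi(x1)>|^2, estimated as the
# probability of the all-zero outcome. Embedding choice is an assumption.
import pennylane as qml
from pennylane import numpy as np
from sklearn.svm import SVC

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    qml.AngleEmbedding(x1, wires=range(n_wires))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_wires))
    return qml.probs(wires=range(n_wires))

def kernel(x1, x2):
    return kernel_circuit(x1, x2)[0]  # overlap with |0...0>

X = np.random.uniform(0, np.pi, size=(10, n_wires))  # toy data
y = np.array([1, -1] * 5)
gram = np.array([[kernel(a, b) for b in X] for a in X])
svm = SVC(kernel="precomputed").fit(gram, y)
```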

Variational Quantum Circuits for Deep Reinforcement Learning

Samuel Yen-Chi Chen


This work explores variational quantum circuits for deep reinforcement learning. Specifically, we reshape classical deep reinforcement learning techniques like experience replay and the target network into a representation based on variational quantum circuits. Moreover, we use a quantum information encoding scheme to reduce the number of model parameters compared to classical neural networks. To the best of our knowledge, this work is the first proof-of-principle demonstration of variational quantum circuits approximating the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and a target network. In addition, our variational quantum circuits can be deployed on many near-term NISQ machines.

QCNN for Speech Commands Recognition

C.-H. Huck Yang


We train a hybrid quantum convolutional neural network (QCNN) on acoustic data with up to 10,000 features. This model uses layers of random quantum gates to efficiently encode convolutional features. We perform a neural saliency analysis to provide a classical activation mapping that compares the classical and quantum models, illustrating that the QCNN self-attention model did learn meaningful representations. An additional connectionist temporal classification (CTC) loss on character recognition is also provided for continuous speech recognition.

Layerwise learning for quantum neural networks

Felipe Oyarce Andrade


In this project we’ve implemented a strategy presented by Skolik et al., 2020 for effectively training quantum neural networks. In layerwise learning, the strategy is to gradually increase the number of parameters by adding a few layers and training them while freezing the parameters of layers already trained. An easy way to understand this technique is to think of it as dividing the problem into smaller circuits so as to avoid falling into barren plateaus. We provide a proof-of-concept implementation of this technique in PennyLane's PyTorch interface for binary classification on the MNIST dataset.
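
A sketch of the freezing step, assuming an illustrative circuit: parameters of earlier layers are excluded from the optimizer and kept constant while only the newly added layer trains.

```python
# Layerwise-learning freeze sketch; circuit shape is an assumption.
import pennylane as qml
import torch

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev, interface="torch")
def circuit(frozen_weights, new_weights, x):
    qml.AngleEmbedding(x, wires=range(n_wires))
    qml.StronglyEntanglingLayers(frozen_weights, wires=range(n_wires))  # trained earlier, frozen
    qml.StronglyEntanglingLayers(new_weights, wires=range(n_wires))     # currently training
    return qml.expval(qml.PauliZ(0))

frozen = torch.rand(2, n_wires, 3)                   # no gradients tracked
new = torch.rand(1, n_wires, 3, requires_grad=True)  # newly added layer
opt = torch.optim.Adam([new], lr=0.01)               # optimizes only the new layer

x = torch.tensor([0.1, 0.4])
loss = (circuit(frozen, new, x) - 1.0) ** 2  # toy target of +1
loss.backward()
opt.step()
```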

A Quantum-Enhanced Transformer

Riccardo Di Sipio


The Transformer neural network architecture revolutionized the analysis of text. Here we show an example of a Transformer with quantum-enhanced multi-headed attention. In the quantum-enhanced version, dense layers are replaced by simple variational quantum circuits. An implementation based on PennyLane and TensorFlow 2.x illustrates the basic concept.
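
A sketch of the substitution, with assumed dimensions: a small variational circuit wrapped as a KerasLayer stands in for one of the dense projections in the attention block.

```python
# VQC-for-Dense substitution sketch; circuit and sizes are assumptions.
import pennylane as qml
import tensorflow as tf

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# drop-in replacement for a tf.keras.layers.Dense(n_qubits) query projection
q_proj = qml.qnn.KerasLayer(vqc, {"weights": (2, n_qubits)}, output_dim=n_qubits)
```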

A Quantum-Enhanced LSTM Layer

Riccardo Di Sipio


In Natural Language Processing, documents are usually represented as sequences of words. One of the most successful techniques for manipulating this kind of data is the Recurrent Neural Network architecture, and in particular a variant called Long Short-Term Memory (LSTM). Using the PennyLane library and its PyTorch interface, one can easily define an LSTM network in which Variational Quantum Circuits (VQCs) replace linear operations. An application to Part-of-Speech tagging is presented in this tutorial.
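
A sketch of the substitution, with assumed sizes: the linear map inside one LSTM gate (here the forget gate) is replaced by a variational circuit wrapped as a TorchLayer.

```python
# Quantum LSTM gate sketch; circuit and dimensions are assumptions.
import pennylane as qml
import torch

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def vqc(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

quantum_linear = qml.qnn.TorchLayer(vqc, {"weights": (2, n_qubits)})

def forget_gate(x_and_h):
    # classically this would be sigmoid(W @ [x, h] + b)
    return torch.sigmoid(quantum_linear(x_and_h))
```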

Quantum Machine Learning Model Predictor for Continuous Variables

Roberth Saénz Pérez Alvarado


According to the paper "Predicting toxicity by quantum machine learning" (Teppei Suzuki, Michio Katouda, 2020), it is possible to predict continuous variables, like those in the continuous-variable quantum neural network model described in Killoran et al. (2018), using two qubits per feature. This is done by applying encodings, variational circuits, and some linear transformations on expectation values in order to predict values close to the real target. Based on an example from PennyLane, and using a small dataset consisting of a one-dimensional feature and one output (so that the processing does not take too much time), the algorithm showed reliable results.
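
A hedged sketch of the encoding, with illustrative gates: the single feature is redundantly angle-encoded on two qubits, and a classical linear transform maps the expectation value to the prediction.

```python
# Two-qubits-per-feature sketch; gates and readout are assumptions.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)  # two qubits for the one feature

@qml.qnode(dev)
def circuit(x, weights):
    qml.RY(x, wires=0)
    qml.RY(x, wires=1)  # same feature encoded on both qubits
    qml.StronglyEntanglingLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def predict(x, weights, a, b):
    return a * circuit(x, weights) + b  # linear transform of the expectation

weights = np.random.uniform(size=(2, 2, 3), requires_grad=True)
```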

Trainable Quanvolutional Neural Networks

Denny Mattern, Darya Martyniuk, Fabian Bergmann, and Henri Willems


We implement a trainable version of Quanvolutional Neural Networks using parametrized RandomCircuits. Parameters are optimized using standard gradient descent. Our code is based on the Quanvolutional Neural Networks demo by Andrea Mari. This demo results from our research as part of the PlanQK consortium.
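
A sketch of the trainable variant, with an assumed 2x2 patch layout: RandomLayers keeps a fixed (seeded) gate structure, but its rotation angles become trainable weights updated by gradient descent.

```python
# Trainable quanvolution sketch; patch size and layout are assumptions.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def quanv(patch, weights):
    # encode a flattened 2x2 image patch, one pixel per qubit
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    qml.RandomLayers(weights, wires=range(4), seed=42)  # fixed structure, trainable angles
    return [qml.expval(qml.PauliZ(w)) for w in range(4)]

weights = np.random.uniform(size=(1, 4), requires_grad=True)  # optimized by gradient descent
```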

Using a Keras optimizer for Iris classification with a QNode and loss function

Hemant Gahankari


Using PennyLane, we explain how to create a quantum function and train it with a Keras optimizer directly, i.e., without using a Keras layer. The objective is to train a quantum function to predict classes of the Iris dataset.
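
A minimal sketch, with an assumed circuit and loss: the QNode's weights live in a tf.Variable, and a Keras optimizer updates them through a GradientTape, with no KerasLayer involved.

```python
# Direct Keras-optimizer training sketch; circuit and loss are assumptions.
import pennylane as qml
import tensorflow as tf

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev, interface="tf")
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=range(n_wires))
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))
    return qml.expval(qml.PauliZ(0))

weights = tf.Variable(tf.random.uniform((1, n_wires, 3)))
opt = tf.keras.optimizers.Adam(learning_rate=0.1)

x, target = tf.constant([0.2, 0.7]), 1.0
with tf.GradientTape() as tape:
    loss = (circuit(x, weights) - target) ** 2
grads = tape.gradient(loss, [weights])
opt.apply_gradients(zip(grads, [weights]))
```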

Linear regression using angle embedding and a single qubit

Hemant Gahankari


In this example, we create a hybrid neural network (a mix of classical and quantum layers), train it, and get predictions from it. The dataset consists of temperature readings in degrees Centigrade and the corresponding Fahrenheit values. The objective is to train a neural network that predicts Fahrenheit values given Centigrade values.
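
A sketch under illustrative assumptions about gates and readout: the Centigrade value is angle-embedded on a single qubit, and a classical scale-and-shift maps the expectation value to a Fahrenheit prediction.

```python
# Single-qubit regression sketch; gates and post-processing are assumptions.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qlayer(x, theta):
    qml.RY(x * theta[0], wires=0)  # angle embedding of the Centigrade value
    qml.RY(theta[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def model(x, params):
    theta, scale, shift = params[:2], params[2], params[3]
    return scale * qlayer(x, theta) + shift  # classical post-processing

def cost(params, X, Y):
    return np.mean([(model(x, params) - y) ** 2 for x, y in zip(X, Y)])
```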

Amplitude embedding in Iris classification with PennyLane's KerasLayer

Hemant Gahankari


Using amplitude embedding from PennyLane, this demonstration aims to explain how to pass classical data into the quantum function and convert it to quantum data. It also shows how to create a PennyLane KerasLayer from a QNode, train it and check the performance of the model.
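
A minimal sketch, with assumed layer sizes: the four Iris features fit exactly into the amplitudes of two qubits, and the QNode becomes a Keras layer.

```python
# Amplitude-embedding KerasLayer sketch; layer sizes are assumptions.
import pennylane as qml
import tensorflow as tf

n_qubits = 2  # 2^2 = 4 amplitudes, one per Iris feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.KerasLayer(qnode, {"weights": (2, n_qubits, 3)}, output_dim=n_qubits)
model = tf.keras.models.Sequential([
    qlayer,
    tf.keras.layers.Dense(3, activation="softmax"),  # three Iris classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```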

Angle embedding in Iris classification with PennyLane's KerasLayer

Hemant Gahankari


Using angle embedding from PennyLane, this demonstration aims to explain how to pass classical data into the quantum function and convert it to quantum data. It also shows how to create a PennyLane KerasLayer from a QNode, train it and check the performance of the model.
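
The angle-embedding variant differs from the amplitude-embedding sketch above only in the encoding: one qubit per feature (four for Iris, an assumption mirroring that sketch) instead of one amplitude per feature.

```python
# Angle-embedding variant of the QNode above; sizes are assumptions.
import pennylane as qml

n_qubits = 4  # one qubit per Iris feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]
```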

Characterizing the loss landscape of variational quantum circuits

Patrick Huembeli and Alexandre Dauphin


Using PennyLane and complex PyTorch, we compute the Hessian of the loss function of VQCs and show how to characterize the loss landscape with it. We show how the Hessian can be used to escape flat regions of the loss landscape.
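
A sketch with an assumed two-parameter circuit: torch.autograd.functional.hessian returns the Hessian of the scalar QNode cost, and its eigenvalues characterize the local curvature, with near-zero eigenvalues flagging flat regions.

```python
# Hessian-of-the-loss sketch; the circuit is an illustrative assumption.
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def cost(params):
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

params = torch.tensor([0.3, 0.9], requires_grad=True)
H = torch.autograd.functional.hessian(cost, params)
eigvals = torch.linalg.eigvalsh(H)  # eigenvalues near zero indicate flat directions
```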