Quantum Generative Adversarial Networks with Cirq + TensorFlow

This demo constructs a Quantum Generative Adversarial Network (QGAN) (Lloyd and Weedbrook (2018), Dallaire-Demers and Killoran (2018)) using two subcircuits, a generator and a discriminator. The generator attempts to generate synthetic quantum data to match a pattern of “real” data, while the discriminator tries to discern real data from fake data (see image below). The gradient of the discriminator’s output provides a training signal for the generator to improve its fake generated data.


[Figure: QGAN schematic showing the generator producing fake quantum data and the discriminator distinguishing it from the real data source.]

Using Cirq + TensorFlow

PennyLane allows us to mix and match quantum devices and classical machine learning software. For this demo, we will link together Google’s Cirq and TensorFlow libraries.

We begin by importing PennyLane, NumPy, and TensorFlow.

import pennylane as qml
import numpy as np
import tensorflow as tf

We also declare a 3-qubit simulator device running in Cirq. The cirq.simulator device is provided by the PennyLane-Cirq plugin, which can be installed with pip install pennylane-cirq.

dev = qml.device('cirq.simulator', wires=3)

Generator and Discriminator

In classical GANs, the starting point is to draw samples either from some “real data” distribution, or from the generator, and feed them to the discriminator. In this QGAN example, we will use a quantum circuit to generate the real data.

For this simple example, our real data will be a qubit that has been rotated (from the starting state \(\left|0\right\rangle\)) to some arbitrary, but fixed, state.

def real(phi, theta, omega):
    qml.Rot(phi, theta, omega, wires=0)
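
Here qml.Rot(phi, theta, omega) is PennyLane's general single-qubit rotation, which decomposes as

\[ R(\phi, \theta, \omega) = RZ(\omega)\, RY(\theta)\, RZ(\phi), \]

so any pure single-qubit state can be prepared from \(\left|0\right\rangle\) this way, up to a global phase.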

For the generator and discriminator, we will choose the same basic circuit structure, but acting on different wires.

Both the real data circuit and the generator will output on wire 0, which will be connected as an input to the discriminator. Wire 1 is provided as a workspace for the generator, while the discriminator’s output will be on wire 2.

def generator(w):
    qml.RX(w[0], wires=0)
    qml.RX(w[1], wires=1)
    qml.RY(w[2], wires=0)
    qml.RY(w[3], wires=1)
    qml.RZ(w[4], wires=0)
    qml.RZ(w[5], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RX(w[6], wires=0)
    qml.RY(w[7], wires=0)
    qml.RZ(w[8], wires=0)


def discriminator(w):
    qml.RX(w[0], wires=0)
    qml.RX(w[1], wires=2)
    qml.RY(w[2], wires=0)
    qml.RY(w[3], wires=2)
    qml.RZ(w[4], wires=0)
    qml.RZ(w[5], wires=2)
    qml.CNOT(wires=[1, 2])
    qml.RX(w[6], wires=2)
    qml.RY(w[7], wires=2)
    qml.RZ(w[8], wires=2)

We create two QNodes: one where the real data source is wired up to the discriminator, and one where the generator is connected to the discriminator. In order to pass TensorFlow Variables into the quantum circuits, we specify the "tf" interface.

@qml.qnode(dev, interface="tf")
def real_disc_circuit(phi, theta, omega, disc_weights):
    real(phi, theta, omega)
    discriminator(disc_weights)
    return qml.expval(qml.PauliZ(2))


@qml.qnode(dev, interface="tf")
def gen_disc_circuit(gen_weights, disc_weights):
    generator(gen_weights)
    discriminator(disc_weights)
    return qml.expval(qml.PauliZ(2))
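
Before training, it can be helpful to visualize the combined circuit. A minimal sketch, assuming a PennyLane version that provides the qml.draw transform (the zero-valued weights here are placeholders, not trained parameters):

# Sketch: draw the generator + discriminator circuit with placeholder weights
print(qml.draw(gen_disc_circuit)(np.zeros(9), np.zeros(9)))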

QGAN cost functions

There are two cost functions of interest, corresponding to the two stages of QGAN training. Both are built from the same two quantities: the probability that the discriminator correctly classifies real data as real, and the probability that it classifies fake data (i.e., a state prepared by the generator) as real.

The discriminator is trained to maximize the probability of correctly classifying real data, while minimizing the probability of mistakenly classifying fake data.

The generator is trained to maximize the probability that the discriminator accepts fake data as real.
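
Concretely, the discriminator's raw output is the expectation value \(\langle Z \rangle \in [-1, 1]\) measured on wire 2, which is rescaled to a classification probability via

\[ P = \frac{\langle Z \rangle + 1}{2}. \]

The two costs implemented below are then

\[ C_D = P(\text{fake as real}) - P(\text{real as real}), \qquad C_G = -P(\text{fake as real}). \]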

def prob_real_true(disc_weights):
    true_disc_output = real_disc_circuit(phi, theta, omega, disc_weights)
    # convert to probability
    prob_real_true = (true_disc_output + 1) / 2
    return prob_real_true


def prob_fake_true(gen_weights, disc_weights):
    fake_disc_output = gen_disc_circuit(gen_weights, disc_weights)
    # convert to probability
    prob_fake_true = (fake_disc_output + 1) / 2
    return prob_fake_true


def disc_cost(disc_weights):
    cost = prob_fake_true(gen_weights, disc_weights) - prob_real_true(disc_weights)
    return cost


def gen_cost(gen_weights):
    return -prob_fake_true(gen_weights, disc_weights)

Training the QGAN

We initialize the fixed angles of the “real data” circuit, as well as the initial parameters for both generator and discriminator. The generator weights are chosen as small random perturbations around \([\pi, 0, \ldots, 0]\), so that the generator initially prepares a state on wire 0 that is very close to the \(\left| 1 \right\rangle\) state (an RX(\(\pi\)) rotation maps \(\left|0\right\rangle\) to \(\left|1\right\rangle\) up to a global phase).

phi = np.pi / 6
theta = np.pi / 2
omega = np.pi / 7
np.random.seed(0)
eps = 1e-2
init_gen_weights = np.array([np.pi] + [0] * 8) + \
                   np.random.normal(scale=eps, size=(9,))
init_disc_weights = np.random.normal(size=(9,))

gen_weights = tf.Variable(init_gen_weights)
disc_weights = tf.Variable(init_disc_weights)
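
As a quick sanity check, we can verify this claim by measuring \(\langle Z \rangle\) of the generator's output on wire 0, which should be close to \(-1\) for a state near \(\left|1\right\rangle\). The helper QNode below is an illustrative addition, not part of the demo's training pipeline:

# Illustrative check: <Z> on wire 0 should be near -1 for a state close to |1>
@qml.qnode(dev, interface="tf")
def gen_output_z(weights):
    generator(weights)
    return qml.expval(qml.PauliZ(0))

print(gen_output_z(gen_weights))  # expected output: a value close to -1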

We begin by creating the optimizer:

opt = tf.keras.optimizers.SGD(0.1)

In the first stage of training, we optimize the discriminator while keeping the generator parameters fixed.

cost = lambda: disc_cost(disc_weights)

for step in range(50):
    opt.minimize(cost, disc_weights)
    if step % 5 == 0:
        cost_val = cost().numpy()
        print("Step {}: cost = {}".format(step, cost_val))

Out:

Step 0: cost = -0.1094201769647043
Step 5: cost = -0.38998838139377767
Step 10: cost = -0.6660191301143641
Step 15: cost = -0.8550836123740737
Step 20: cost = -0.9454460261415534
Step 25: cost = -0.980587795459769
Step 30: cost = -0.9931367838787679
Step 35: cost = -0.9974893060399808
Step 40: cost = -0.9989861294543871
Step 45: cost = -0.9994998381416371

At the discriminator’s optimum, the probability for the discriminator to correctly classify the real data should be close to one.

print("Prob(real classified as real): ", prob_real_true(disc_weights).numpy())

Out:

Prob(real classified as real):  0.9998971772299683

For comparison, we check how the discriminator classifies the generator’s (still unoptimized) fake data:

print("Prob(fake classified as real): ", prob_fake_true(gen_weights, disc_weights).numpy())

Out:

Prob(fake classified as real):  0.00024288418215956398

In the adversarial game we now have to train the generator to better fool the discriminator. For this demo we perform only this single further stage; for more complex models, we would continue training generator and discriminator in an alternating fashion until we reach the optimum of the two-player adversarial game, as sketched below.
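
An alternating schedule might look like the following minimal sketch (the round and step counts are arbitrary illustrative choices, not values used in this demo):

# Sketch of alternating adversarial training; counts are illustrative only
for game_round in range(5):
    for _ in range(25):  # discriminator phase
        opt.minimize(lambda: disc_cost(disc_weights), disc_weights)
    for _ in range(25):  # generator phase
        opt.minimize(lambda: gen_cost(gen_weights), gen_weights)

Here, we run just the single generator-training stage: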

cost = lambda: gen_cost(gen_weights)

for step in range(200):
    opt.minimize(cost, gen_weights)
    if step % 5 == 0:
        cost_val = cost().numpy()
        print("Step {}: cost = {}".format(step, cost_val))

Out:

Step 0: cost = -0.0002664676193875337
Step 5: cost = -0.0004265994072571999
Step 10: cost = -0.0006873304639380962
Step 15: cost = -0.0011112649161937327
Step 20: cost = -0.001800249450372604
Step 25: cost = -0.002917897449343343
Step 30: cost = -0.004727605023155945
Step 35: cost = -0.007646650529466115
Step 40: cost = -0.012325834585041662
Step 45: cost = -0.01975453182595288
Step 50: cost = -0.03136851740600832
Step 55: cost = -0.04909774136831402
Step 60: cost = -0.07520403823514243
Step 65: cost = -0.11169032819611857
Step 70: cost = -0.15917322876043727
Step 75: cost = -0.21566073612893888
Step 80: cost = -0.27637398983802086
Step 85: cost = -0.33541722075847247
Step 90: cost = -0.38835003965175474
Step 95: cost = -0.43371746517084375
Step 100: cost = -0.47284845555839183
Step 105: cost = -0.508777012095095
Step 110: cost = -0.5451965337468323
Step 115: cost = -0.5856615227431803
Step 120: cost = -0.6327876616058461
Step 125: cost = -0.6872443387234739
Step 130: cost = -0.7468425520602864
Step 135: cost = -0.8066387307464851
Step 140: cost = -0.8607328855333378
Step 145: cost = -0.9048395817153079
Step 150: cost = -0.9376674301569636
Step 155: cost = -0.9604095279003069
Step 160: cost = -0.9753704938610888
Step 165: cost = -0.9848743049632418
Step 170: cost = -0.9907761925863667
Step 175: cost = -0.9943897043892296
Step 180: cost = -0.9965828604268625
Step 185: cost = -0.9979067380732037
Step 190: cost = -0.9987029062652951
Step 195: cost = -0.9991812985717186

At the optimum of the generator, the probability for the discriminator to be fooled should be close to one.

print("Prob(fake classified as real): ", prob_fake_true(gen_weights, disc_weights).numpy())

Out:

Prob(fake classified as real):  0.9994220353985931

At the joint optimum the discriminator cost will be close to zero, indicating that the discriminator assigns equal probability to both real and generated data.

print("Discriminator cost: ", disc_cost(disc_weights).numpy())

# The generator has successfully learned how to simulate the real data
# enough to fool the discriminator.

Out:

Discriminator cost:  -0.00047514183137520316
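
To further confirm that the generated state matches the real data, we can compare the Bloch vectors of the real and generated qubit on wire 0. A minimal sketch using the device and subcircuits defined above (these helper QNodes are illustrative additions):

# Sketch: compare the wire-0 Bloch vectors of the real and generated states
obs = [qml.PauliX(0), qml.PauliY(0), qml.PauliZ(0)]

@qml.qnode(dev, interface="tf")
def bloch_vector_real(angles):
    real(*angles)
    return [qml.expval(o) for o in obs]

@qml.qnode(dev, interface="tf")
def bloch_vector_generator(angles):
    generator(angles)
    return [qml.expval(o) for o in obs]

print("Real Bloch vector:      ", bloch_vector_real([phi, theta, omega]))
print("Generator Bloch vector: ", bloch_vector_generator(gen_weights))

If training has converged, the two Bloch vectors should nearly coincide.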
