Contents

  1. Set up your model, data, and cost
  2. Initialize your parameters
  3. Create the optimizer
  4. JIT-compiling the optimization
  5. Timing the optimization
  6. About the author

How to optimize a QML model using Catalyst and quantum just-in-time (QJIT) compilation

Josh Izaac

Published: April 25, 2024. Last updated: April 14, 2026.

Once you have set up your quantum machine learning model (which typically includes deciding on your circuit architecture/ansatz, determining how you embed or integrate your data, and creating your cost function to minimize a quantity of interest), the next step is optimization. That is, setting up a classical optimization loop to find a minimal value of your cost function.
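The shape of such a classical optimization loop can be sketched in plain Python (a toy illustration, not PennyLane code): finite-difference gradient descent on a simple quadratic cost, where `cost` and the step size are stand-ins for your own model and hyperparameters.

```python
def cost(x):
    # Toy stand-in for a QML cost function: minimized at x = 3.
    return (x - 3.0) ** 2

def grad_fd(f, x, eps=1e-6):
    # Central finite-difference gradient, one way of
    # differentiating a cost function numerically.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.0
for step in range(200):
    x = x - 0.1 * grad_fd(cost, x)  # classical update rule

print(round(x, 4))  # converges towards the minimum at 3.0
```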

In this example, we’ll show you how to use JAX, an autodifferentiable machine learning framework, and Optax, a suite of JAX-compatible gradient-based optimizers, to optimize a PennyLane quantum machine learning model which has been quantum just-in-time compiled using the qjit() decorator and Catalyst.


Set up your model, data, and cost

Here, we will create a simple QML model for our optimization. In particular:

  • We will embed our data through a series of rotation gates.

  • We will then have an ansatz of trainable rotation gates with parameters weights; it is these values we will train to minimize our cost function.

  • We will train the QML model on data, a (5, 4) array, and optimize the model to match target predictions given by targets.

import pennylane as qp
from jax import numpy as jnp
import optax
import catalyst

n_wires = 5
data = jnp.sin(jnp.mgrid[-2:2:0.2].reshape(n_wires, -1)) ** 3
targets = jnp.array([-0.2, 0.4, 0.35, 0.2])

dev = qp.device("lightning.qubit", wires=n_wires)

@qp.qnode(dev)
def circuit(data, weights):
    """Quantum circuit ansatz"""

    @qp.for_loop(0, n_wires, 1)
    def data_embedding(i):
        qp.RY(data[i], wires=i)

    data_embedding()

    @qp.for_loop(0, n_wires, 1)
    def ansatz(i):
        qp.RX(weights[i, 0], wires=i)
        qp.RY(weights[i, 1], wires=i)
        qp.RX(weights[i, 2], wires=i)
        qp.CNOT(wires=[i, (i + 1) % n_wires])

    ansatz()

    # we use a sum of local Z's as an observable since a
    # local Z would only be affected by params on that qubit.
    return qp.expval(qp.sum(*[qp.PauliZ(i) for i in range(n_wires)]))

The catalyst.vmap() function allows us to specify that the first argument to circuit (data) contains a batch dimension. In this example, the batch dimension is the second axis (axis 1).

circuit = qp.qjit(catalyst.vmap(circuit, in_axes=(1, None)))
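If you have not used vmap before, the in_axes=(1, None) specification can be emulated in plain NumPy (an illustrative sketch only, with a stand-in function in place of the QNode): evaluate the function once per column of the batched first argument, passing the second argument through unchanged.

```python
import numpy as np

def fake_circuit(column, weights):
    # Stand-in for the QNode: any scalar function of one data
    # column (shape (5,)) and the shared weights (shape (5, 3)).
    return float(np.sum(np.sin(column)))

def vmap_axis1(fn):
    # Map fn over axis 1 of its first argument, mimicking
    # catalyst.vmap(circuit, in_axes=(1, None)).
    def batched(data, weights):
        return np.array([fn(data[:, j], weights) for j in range(data.shape[1])])
    return batched

data_np = np.sin(np.mgrid[-2:2:0.2].reshape(5, -1)) ** 3
weights_np = np.ones((5, 3))
out = vmap_axis1(fake_circuit)(data_np, weights_np)
print(out.shape)  # one output per batch element: (4,)
```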

We will define a simple cost function that computes the squared-error distance between the model output and the target data:

def my_model(data, weights, bias):
    return circuit(data, weights) + bias

@qp.qjit
def loss_fn(params, data, targets):
    predictions = my_model(data, params["weights"], params["bias"])
    loss = jnp.sum((targets - predictions) ** 2 / len(data))
    return loss
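As a sanity check on the formula inside loss_fn, here it is in plain NumPy with illustrative numbers (note the scaling: the squared errors are divided by len(data), which is the number of rows of the (5, 4) data array, not the number of targets):

```python
import numpy as np

# Illustrative model outputs and the targets from above.
p = np.array([-0.1, 0.3, 0.4, 0.1])
t = np.array([-0.2, 0.4, 0.35, 0.2])
n_rows = 5  # len(data) for the (5, 4) data array

loss = np.sum((t - p) ** 2 / n_rows)
print(round(float(loss), 4))  # 0.0065
```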

Note that the model above is just an example for demonstration – there are important considerations that must be taken into account when performing QML research, including the choice of data-embedding method, circuit architecture, and cost function, in order to build models that are genuinely useful. This is still an active area of research; see our demonstrations for details.

Initialize your parameters

Now, we can generate our trainable parameters weights and bias that will be used to train our QML model.

weights = jnp.ones([n_wires, 3])
bias = jnp.array(0.)
params = {"weights": weights, "bias": bias}

Plugging the trainable parameters, data, and target labels into our cost function, we can inspect the current loss as well as the parameter gradients:

print(loss_fn(params, data, targets))

print(qp.qjit(catalyst.grad(loss_fn, method="fd"))(params, data, targets))
{'bias': Array(-0.75432067, dtype=float64), 'weights': Array([[-1.95077271e-01,  5.28546590e-02, -4.89252073e-01],
       [-1.99687794e-02, -5.32871564e-02,  9.22904869e-02],
       [-2.71755507e-03, -9.64672786e-05, -4.79570827e-03],
       [-6.35443870e-02,  3.61110014e-02, -2.05196876e-01],
       [-9.02635405e-02,  1.63759364e-01, -5.64262612e-01]],      dtype=float64)}
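The method="fd" argument asks Catalyst to compute this gradient via finite differences rather than analytically. The idea can be sketched in plain Python for a scalar function of a parameter list (an illustration of the technique, not Catalyst's implementation):

```python
def fd_grad(f, params, h=1e-7):
    # Forward finite differences: perturb one parameter at a time
    # and measure the change in the function value.
    base = f(params)
    grads = []
    for k in range(len(params)):
        shifted = list(params)
        shifted[k] += h
        grads.append((f(shifted) - base) / h)
    return grads

# Toy cost with known gradient (2*x, 2*y) at any point.
f = lambda p: p[0] ** 2 + p[1] ** 2
print([round(g, 3) for g in fd_grad(f, [1.0, -2.0])])  # close to [2.0, -4.0]
```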

Create the optimizer

We can now use Optax to create an Adam optimizer, and train our circuit.

We first define our update_step function, which needs to do a few things:

  • Compute the gradients of the loss function with respect to the parameters. We can do this via the catalyst.grad() function.

  • Apply the update step of our optimizer via opt.update.

  • Update the parameters via optax.apply_updates.

opt = optax.adam(learning_rate=0.3)

@qp.qjit
def update_step(i, args):
    params, opt_state, data, targets = args

    grads = catalyst.grad(loss_fn, method="fd")(params, data, targets)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)

    return (params, opt_state, data, targets)

loss_history = []

opt_state = opt.init(params)

for i in range(100):
    params, opt_state, _, _ = update_step(i, (params, opt_state, data, targets))
    loss_val = loss_fn(params, data, targets)

    if i % 5 == 0:
        print(f"Step: {i} Loss: {loss_val}")

    loss_history.append(loss_val)
Step: 0 Loss: 0.27303537436157665
Step: 5 Loss: 0.03255924063336564
Step: 10 Loss: 0.02928202264036072
Step: 15 Loss: 0.03337864907518662
Step: 20 Loss: 0.031236313525510095
Step: 25 Loss: 0.027191477655469254
Step: 30 Loss: 0.022688381745801005
Step: 35 Loss: 0.018162725916522356
Step: 40 Loss: 0.014789693435682942
Step: 45 Loss: 0.011206958770927884
Step: 50 Loss: 0.009409798461639115
Step: 55 Loss: 0.017898232202829118
Step: 60 Loss: 0.012861302997020133
Step: 65 Loss: 0.009916018517948042
Step: 70 Loss: 0.00861165084794141
Step: 75 Loss: 0.0065854927308626274
Step: 80 Loss: 0.006778134094237612
Step: 85 Loss: 0.0060437335914725375
Step: 90 Loss: 0.006139652770400204
Step: 95 Loss: 0.00498952607046071

JIT-compiling the optimization

In the above example, we just-in-time (JIT) compiled our cost function loss_fn. However, we can also JIT compile the entire optimization loop; this means that the for-loop around optimization is not happening in Python, but is compiled and executed natively. This avoids (potentially costly) data transfer between Python and our JIT compiled cost function with each update step.

params = {"weights": weights, "bias": bias}

@qp.qjit
def optimization(params, data, targets):
    opt_state = opt.init(params)
    args = (params, opt_state, data, targets)
    (params, opt_state, _, _) = qp.for_loop(0, 100, 1)(update_step)(args)
    return params

Note that we use for_loop() rather than a standard Python for loop, so that the control flow remains compatible with JIT compilation.
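Semantically, for_loop(0, 100, 1)(update_step)(args) threads a carried state through the loop body. We can emulate that calling convention in plain Python (a sketch of the semantics only, not of how Catalyst compiles the loop):

```python
def for_loop_emulated(start, stop, step):
    # Mimics the functional calling convention used above:
    # the body receives (index, carried_state) and returns the new state.
    def apply(body):
        def run(state):
            for i in range(start, stop, step):
                state = body(i, state)
            return state
        return run
    return apply

# Toy body: accumulate the loop index into the carried state.
def body(i, state):
    total, count = state
    return (total + i, count + 1)

print(for_loop_emulated(0, 100, 1)(body)((0, 0)))  # (4950, 100)
```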

final_params = optimization(params, data, targets)

print(final_params)
{'bias': Array(-0.75292874, dtype=float64), 'weights': Array([[ 1.63087052,  1.55018941,  0.67212613],
       [ 0.72660653,  0.36422548, -0.75624732],
       [ 2.78387463,  0.62720878,  3.44996471],
       [-1.10119503, -0.12679585,  0.89283741],
       [ 1.27236319,  1.10631087,  2.22051402]], dtype=float64)}

Timing the optimization

We can time the two approaches (JIT compiling just the cost function, vs JIT compiling the entire optimization loop) to explore the differences in performance:

from timeit import repeat

opt = optax.adam(learning_rate=0.3)

def optimization_noqjit(params):
    opt_state = opt.init(params)

    for i in range(100):
        params, opt_state, _, _ = update_step(i, (params, opt_state, data, targets))

    return params

reps = 5
num = 2

times = repeat("optimization_noqjit(params)", globals=globals(), number=num, repeat=reps)
result = min(times) / num

print(f"Quantum jitting just the cost (best of {reps}): {result} sec per loop")

times = repeat("optimization(params, data, targets)", globals=globals(), number=num, repeat=reps)
result = min(times) / num

print(f"Quantum jitting the entire optimization (best of {reps}): {result} sec per loop")
Quantum jitting just the cost (best of 5): 0.7256872614999992 sec per loop
Quantum jitting the entire optimization (best of 5): 0.5226842850000111 sec per loop

About the author

Josh Izaac

Josh is a theoretical physicist, software tinkerer, and occasional baker. At Xanadu, he contributes to the development and growth of Xanadu’s open-source quantum software products.

Total running time of the script: (0 minutes 18.693 seconds)
