PennyLane v0.18 released

PennyLane team

The latest release of PennyLane is now out and available for everyone to use. It comes with many new additions, including a built-in high-performance simulator, the ability to perform backpropagation using PyTorch, improved quantum-aware optimization techniques, the ability to define custom quantum gradient rules, and much more.

This release is particularly special, including new features and bug fixes from Code Together 🙌 and unitaryHACK ⚛️ contributors. If you’re not sure what Code Together is all about, be sure to check out our blog post.

Integrated high-performance simulator ⚡

The high-performance lightning.qubit simulator now ships 📦 for everyone who upgrades to or installs the latest version of PennyLane.

The lightning.qubit device is a fast state-vector simulator equipped with the efficient adjoint method for differentiating quantum circuits. Check out the plugin release notes for more details!

To use this new simulator in PennyLane, instantiate the device as follows:

dev = qml.device("lightning.qubit", wires=10)

Once created, the lightning.qubit device can be used with any existing QNode.

In addition to a performant C++ backend, lightning.qubit comes with support for differentiating quantum circuits via the adjoint method. This can lead to significant speed improvements compared to default.qubit when shots=None.

Backpropagation using PyTorch

Powered by recent upgrades, you can now use PyTorch as a quantum simulation backend. 💡

The built-in PennyLane simulator default.qubit now supports backpropagation with PyTorch; simply specify diff_method="backprop" when creating your QNode:

import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(x):
    qml.Rot(x[0], x[1], x[2], wires=0)
    return qml.expval(qml.PauliZ(0))

x = torch.tensor([0.54, 0.1, 0.2], dtype=torch.float64, requires_grad=True)
res = circuit(x)

As a result, default.qubit can now use end-to-end classical backpropagation as a means to compute gradients. Using this method, the created QNode is a ‘white-box’ that is tightly integrated with your PyTorch computation, including TorchScript and GPU support.

This is now the default differentiation method when using default.qubit with PyTorch.

Shout out to Slimane Thabet, Esteban Payares, and Arshpreet Singh for this mega contribution from #unitaryHACK.

RotosolveOptimizer for general parametrized circuits

Quantum-aware optimization techniques have received a huge upgrade in this release. The RotosolveOptimizer can now tackle general parametrized circuits, and is no longer restricted to single-qubit Pauli rotations. 🪐

This includes:

  • layers of gates controlled by the same parameter,
  • controlled variants of parametrized gates, and
  • Hamiltonian time evolution.

This optimization technique is cutting-edge, drawing directly on recent quantum machine learning research. For more details, see Vidal and Theis, 2018 and Wierichs, Izaac, Wang, and Lin, 2021, as well as our recent PennyLane demonstration on general parameter-shift rules.

dev = qml.device('default.qubit', wires=3, shots=None)

@qml.qnode(dev)
def cost_function(rot_param, layer_par, crot_param):
    for i, par in enumerate(rot_param):
        qml.RX(par, wires=i)
    for w in dev.wires:
        qml.RX(layer_par, wires=w)
    for i, par in enumerate(crot_param):
        qml.CRY(par, wires=[i, (i+1) % 3])

    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1) @ qml.PauliZ(2))

Note that to use RotosolveOptimizer with a general gate, the eigenvalue spectrum of the gate's generator needs to be known, and the spectrum must give rise to equidistant frequencies.

This cost function has one frequency for each of the first RX rotation angles, three frequencies for the layer of RX gates that depend on layer_par, and two frequencies for each of the CRY gate parameters. By providing details regarding the spectrum of these parametrized operators, Rotosolve can then be used to minimize the cost_function:

from pennylane import numpy as np

# Initial parameters
init_param = [
    np.array([0.3, 0.2, 0.67], requires_grad=True),
    np.array(1.1, requires_grad=True),
    np.array([-0.2, 0.1, -2.5], requires_grad=True),
]

# Numbers of frequencies per parameter
num_freqs = [[1, 1, 1], 3, [2, 2, 2]]

opt = qml.RotosolveOptimizer()
param = init_param.copy()

for step in range(3):
    param, cost, sub_cost = opt.step_and_cost(
        cost_function,
        *param,
        num_freqs=num_freqs,
        full_output=True,
    )
    print(f"Cost before step: {cost}")
    print(f"Minimization substeps: {np.round(sub_cost, 6)}")
Cost before step: 0.042008210392535605
Minimization substeps: [-0.230905 -0.863336 -0.980072 -0.980072 -1.       -1.       -1.      ]
Cost before step: -0.999999999068121
Minimization substeps: [-1. -1. -1. -1. -1. -1. -1.]
Cost before step: -1.0
Minimization substeps: [-1. -1. -1. -1. -1. -1. -1.]

For usage details, please see the Rotosolve optimizer documentation. Be sure to also check out our Rotosolve tutorial for details behind the theory underpinning the Rotosolve optimization.

Faster, trainable Hamiltonian simulations

Variational quantum algorithms are even more powerful in this release, as Hamiltonians are now trainable with respect to their coefficients. Find quantum gradients with respect to Hamiltonians, and train your algorithms over classes of parametrized Hamiltonians.

from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(coeffs, param):
    qml.RX(param, wires=0)
    qml.RY(param, wires=0)
    return qml.expval(
        qml.Hamiltonian(coeffs, [qml.PauliX(0), qml.PauliZ(0)], simplify=True)
    )
coeffs = np.array([-0.05, 0.17])
param = np.array(1.7)
grad_fn = qml.grad(circuit)

In addition, Hamiltonians are now natively supported on default.qubit when shots=None, with expectation values automatically computed via fast sparse methods. As the number of terms in the Hamiltonian grows, this can significantly improve the performance of variational quantum eigensolver (VQE) workflows.

Custom gradient transforms

Want to specify your own quantum gradient logic, and explore optimization beyond the parameter-shift rule?

This is now possible; custom quantum gradient transforms can be created using the new @qml.gradients.gradient_transform decorator.

Quantum gradient transforms are a specific type of batch transformation. To create a quantum gradient transform, simply write a function that accepts a tape, and returns a batch of tapes to be independently executed on a quantum device, alongside a post-processing function that processes the tape results into the gradient.

Supported gradient transforms must be of the following form:

def my_custom_gradient(tape, argnum=None, **kwargs):
    return gradient_tapes, processing_fn

Various built-in quantum gradient transforms are provided within the qml.gradients module, including qml.gradients.param_shift. Once defined, quantum gradient transforms can be applied directly to QNodes:

>>> @qml.qnode(dev)
... def circuit(x):
...     qml.RX(x, wires=0)
...     qml.CNOT(wires=[0, 1])
...     return qml.expval(qml.PauliZ(0))
>>> circuit(0.3)
tensor(0.95533649, requires_grad=True)
>>> qml.gradients.param_shift(circuit)(0.5)

Quantum gradient transforms are fully differentiable, allowing higher order derivatives to be accessed:

>>> qml.grad(qml.gradients.param_shift(circuit))(0.5)
tensor(-0.87758256, requires_grad=True)

For more details, please see the gradients documentation.

Batch transforms

The ability to define batch transforms has been added via the new @qml.batch_transform decorator.

A batch transform is a transform that takes a single tape or QNode as input, and executes multiple tapes or QNodes independently. The results may then be post-processed before being returned.

By creating a batch transformation, you can leverage the ability to transform and post-process QNodes, while retaining the ability to

  • Autodifferentiate your quantum model on hardware,
  • Evaluate your transformation on all hardware compatible with PennyLane,
  • Submit a single batch of quantum jobs for execution, significantly reducing overall runtime.

In addition, batch transformations are themselves trainable — write a parametrized batch transformation, and then train it to achieve a particular outcome!

For more details, including how to write batch transformations, please see the batch transform decorator documentation.

For a primer on quantum transformations, don’t forget to read our previous blog post on transformations.


In addition to the new features listed above, the release contains a wide array of improvements and optimizations:

  • The qml.grouping.group_observables transform is now differentiable.
  • A gradient recipe for Hamiltonian coefficients has been added. This makes it possible to compute parameter-shift gradients of these coefficients on devices that natively support Hamiltonians.
  • The device test suite has been expanded to cover more qubit operations and observables.

Breaking changes

As new things are added, outdated features are removed. Here’s what will be disappearing in this release:

  • Specifying shots=None with qml.sample was previously deprecated. From this release onwards, setting shots=None when sampling raises an error; this now also applies to default.qubit.jax.
  • An error is raised during QNode creation when a user requests backpropagation on a device with finite-shots.

In addition, several features have been marked for deprecation, and will raise warnings when used. They will be removed in a future release:

  • The class qml.Interferometer is deprecated and will be renamed qml.InterferometerUnitary in the upcoming release.
  • All optimizers except Rotosolve and Rotoselect now have a public attribute stepsize. Temporary backwards compatibility for the private _stepsize attribute is provided for one release cycle. The update_stepsize method is deprecated.

These highlights are just scratching the surface — check out the full release notes for more details.


As always, this release would not have been possible without the hard work of our development team and contributors:

Vishnu Ajith, Akash Narayanan B, Thomas Bromley, Olivia Di Matteo, Sahaj Dhamija, Tanya Garg, Anthony Hayes, Theodor Isacsson, Josh Izaac, Prateek Jain, Ankit Khandelwal, Nathan Killoran, Christina Lee, Ian McLean, Johannes Jakob Meyer, Romain Moyard, Lee James O’Riordan, Esteban Payares, Pratul Saini, Maria Schuld, Arshpreet Singh, Jay Soni, Ingrid Strandberg, Antal Száva, Slimane Thabet, David Wierichs, Vincent Wong.