PennyLane v0.27 released

PennyLane team

They say good things come in threes… well \(3^3 = 27\) 🤯! But if that isn’t enough to convince you of how good version 0.27 of PennyLane is, then check out all of the awesome new functionality below.

Get your fix of quantum data 💾

Classical machine learning ushered in many standardized datasets that are widely used for benchmarking new algorithms. PennyLane is taking steps in the analogous quantum direction with our all-new qml.data module. A quantum dataset can consist of anything that is an input to or an output from a quantum device. Easily browse, download, and use a wide variety of quantum data, including inputs to your quantum functions and pre-generated outputs.

Downloadable datasets

With this release, a variety of quantum datasets are available for download, and we will continue to add more in the future. The list of currently available datasets can be accessed directly within PennyLane via qml.data.list_datasets(). To load one of these datasets, use qml.data.load().
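For a quick look at the catalogue, you can call qml.data.list_datasets() directly; it returns a nested dictionary describing the available data, so the keys shown in the comment below are only illustrative:

>>> available = qml.data.list_datasets()
>>> sorted(available.keys())  # e.g. ['qchem', 'qspin']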

For example, let’s load the data corresponding to the hydrogen molecule with a bond length of 1.1 Å:

>>> H2data = qml.data.load(data_name="qchem", molname="H2", basis="STO-3G", bondlength=1.1)
>>> print(H2data)
[<Dataset = description: qchem/H2/STO-3G/1.1, attributes: ['molecule', 'hamiltonian', ...]>]

Once a dataset is loaded, its properties can be accessed and used directly in PennyLane workflows:

>>> N = H2data[0].hamiltonian.wires
>>> dev = qml.device('default.qubit', wires=N)
>>> @qml.qnode(dev)
... def circuit():
...     return qml.expval(H2data[0].hamiltonian)
>>> print(circuit())
0.4810692051726486

Create custom datasets

Create your own custom quantum datasets with qml.data.Dataset:

>>> import numpy as np
>>> H = 1.0 * qml.PauliZ(wires=0) + 0.5 * qml.PauliX(wires=1)
>>> E = np.linalg.eigvalsh(qml.matrix(H))
>>> my_data = qml.data.Dataset(data_name='Example', hamiltonian=H, energies=E)
>>> my_data.data_name
'Example'
>>> my_data.hamiltonian
  (0.5) [X1]
+ (1.0) [Z0]
>>> my_data.energies
array([-1.5, -0.5,  0.5,  1.5])

Saving and reading from your custom datasets is as easy as using qml.data.Dataset.write and qml.data.Dataset.read, respectively.
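For instance, continuing with my_data from above (the file name here is just a placeholder):

>>> my_data.write("example_dataset.dat")     # save the dataset to disk
>>> loaded_data = qml.data.Dataset()
>>> loaded_data.read("example_dataset.dat")  # repopulate the attributes from the file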

Check out the full release notes for more information. If you have any feedback, or additional datasets you would like to see included, let us know either on GitHub or on our discussion forum.

Adaptive optimization 🏃🏋️🏊

PennyLane adapts to new discoveries in quantum computing, machine learning, and chemistry. Circuits themselves can adapt too… meta adaptation!

In addition to qml.LieAlgebraOptimizer, v0.27 introduces a new adaptive optimizer: qml.AdaptiveOptimizer. It takes an initial circuit and a collection of operators as input, and adds a selected gate to the circuit at each optimization step. This process repeats until the cost function converges to a local minimum (within a given threshold).

Use the adaptive optimizer to build, explore, and even generalize influential algorithms such as ADAPT-VQE!

qml.AdaptiveOptimizer is defined like any other optimizer:

opt = qml.optimize.AdaptiveOptimizer()

Next, let’s define an ADAPT-VQE procedure by:

  1. creating a Hamiltonian
  2. defining an operator_pool which our adaptive optimization will use to grow the circuit
  3. creating a circuit to optimize

import pennylane as qml
from pennylane import numpy as np

symbols = ["H", "H", "H"]
geometry = np.array([[0.01076341, 0.04449877, 0.0],
                     [0.98729513, 1.63059094, 0.0],
                     [1.87262415, -0.00815842, 0.0]], requires_grad=False)
H, qubits = qml.qchem.molecular_hamiltonian(symbols, geometry, charge=1)

n_electrons = 2
singles, doubles = qml.qchem.excitations(n_electrons, qubits)
singles_excitations = [qml.SingleExcitation(0.0, x) for x in singles]
doubles_excitations = [qml.DoubleExcitation(0.0, x) for x in doubles]
operator_pool = doubles_excitations + singles_excitations

hf_state = qml.qchem.hf_state(n_electrons, qubits)
dev = qml.device("default.qubit", wires=qubits)

@qml.qnode(dev)
def circuit():
    qml.BasisState(hf_state, wires=range(qubits))
    return qml.expval(H)

Now we optimize!

for i in range(len(operator_pool)):
    # Each step selects the pool operator with the largest gradient and appends
    # it to the circuit; drain_pool=True prevents gates that are already in the
    # circuit from being selected again.
    circuit, energy, gradient = opt.step_and_cost(
        circuit, operator_pool, drain_pool=True
    )
    # Stop once the largest gradient in the pool falls below the threshold.
    if gradient < 1e-3:
        break
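Once the loop terminates, circuit is the adaptively grown QNode and energy holds the latest energy estimate; for example, you could inspect them with:

print(f"Estimated ground-state energy: {energy:.8f} Ha")
print(qml.draw(circuit)())  # show the excitations the optimizer selected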

For a detailed breakdown of its implementation, check out our demo.

QNodes are smarter than ever 🧩

Wouldn’t it be nice if PennyLane just knew what machine learning library it needed to interface with? It sure would. Fret no more! QNodes now accept interface="auto", which automatically detects which machine learning library to use.

import pennylane as qml
import torch
import jax
from jax import numpy as jnp

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="auto")
def circuit(weight):
    qml.RX(weight[0], wires=0)
    qml.RY(weight[1], wires=1)
    return qml.expval(qml.PauliZ(0))
>>> circuit(torch.tensor([0.0, 1.0]))
tensor(1.0000, dtype=torch.float64)
>>> jax.grad(circuit)(jnp.array([1.34, 1.5]))
DeviceArray([-9.7348452e-01, -2.2351742e-08], dtype=float32)

Upgraded JAX-JIT gradient support 🏎

With this release, JAX-JIT support is now available for computing the gradient of QNodes that return a single vector of probabilities or multiple expectation values.

from jax.config import config
config.update("jax_enable_x64", True)

dev = qml.device("lightning.qubit", wires=2)

@jax.jit
@qml.qnode(dev, diff_method="parameter-shift", interface="jax")
def circuit(x, y):
    qml.RY(x, wires=0)
    qml.RY(y, wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

x = jnp.array(1.0)
y = jnp.array(2.0)
>>> jax.jacobian(circuit, argnums=[0, 1])(x, y)
(DeviceArray([-0.84147098,  0.35017549], dtype=float64, weak_type=True),
 DeviceArray([ 0.       , -0.4912955], dtype=float64, weak_type=True))

Note that this change depends on jax.pure_callback, which requires jax>=0.3.17.
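The same pattern also covers QNodes that return a single vector of probabilities; a minimal sketch reusing the device above (output omitted):

@jax.jit
@qml.qnode(dev, diff_method="parameter-shift", interface="jax")
def probs_circuit(x):
    qml.RY(x, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.probs(wires=[0, 1])

# Jacobian of the four-element probability vector with respect to x
jac = jax.jacobian(probs_circuit)(jnp.array(0.5))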

Improvements 🛠

In addition to the new features listed above, the release contains a wide array of improvements and optimizations:

  • qml.adjoint now supports parameter batching or broadcasting if the base operation supports parameter broadcasting (see the sketch after this list).
  • qml.OrbitalRotation is now decomposed into two qml.SingleExcitation operations for faster execution and more efficient parameter-shift gradient calculations on devices that natively support qml.SingleExcitation.
  • Added support for sums and products of operator classes with scalar tensors of any interface (NumPy, JAX, TensorFlow, and PyTorch).
  • Explicit support for qml.SparseHamiltonian using the adjoint gradient method with lightning.gpu. This can result in more efficient simulations when working with large Hamiltonians on a single GPU.
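As a small illustration of the first item above, an adjoint operation can now be fed a batch of parameters whenever the base operation supports broadcasting; the values below are arbitrary:

from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.adjoint(qml.RX(x, wires=0))  # RX supports broadcasting, so its adjoint does too
    return qml.expval(qml.PauliZ(0))

x = np.array([0.1, 0.2, 0.3])  # three parameter values, executed as one broadcasted batch
circuit(x)                     # returns one expectation value per batch entry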

Deprecations and breaking changes 💔

As new things are added, outdated features are removed. To keep track of things in the deprecation pipeline, we’ve created a deprecation page.

Here’s what will be changing in this release:

  • The grouping module qml.grouping has been deprecated. Use qml.pauli or qml.pauli.grouping instead (see the sketch after this list). The module will still be available until v0.28.
  • Operator.compute_terms is removed. On a specific instance of an operator, op.terms() can be used instead. There is no longer a static method for this.
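If you are migrating code away from qml.grouping, the same functionality now lives under qml.pauli; a minimal sketch of the new import path (the exact grouping returned may differ):

import pennylane as qml

obs = [qml.PauliX(0), qml.PauliZ(0), qml.PauliZ(1)]

# Previously qml.grouping.group_observables; now available via qml.pauli
groups = qml.pauli.group_observables(obs, grouping_type="qwc")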

These highlights are just scratching the surface — check out the full release notes for more details.

Contributors ✍️

As always, this release would not have been possible without the hard work of our development team and contributors:

Kamal Mohamed Ali, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Utkarsh Azad, Thomas Bromley, Albert Mitjans Coma, Isaac De Vlugt, Olivia Di Matteo, Amintor Dusko, Lillian M. A. Frederiksen, Diego Guala, Josh Izaac, Soran Jahangiri, Edward Jiang, Korbinian Kottmann, Christina Lee, Romain Moyard, Lee J. O’Riordan, Mudit Pandey, Chae-Yeun Park, Monit Sharma, Shuli Shu, Matthew Silverman, Jay Soni, Antal Száva, Trevor Vincent, and David Wierichs.