In Quantum Machine Learning (QML), algorithms are often built around a particular circuit template. Be it a scalable architecture we want to try at different circuit widths and depths, or a box we use as a building block in a layered structure, circuit templates are a central ingredient in algorithm design.
PennyLane has a number of built-in templates. These are very practical when viewed as ingredients for larger, more involved programs defined by the user.
Quantum learning models usually involve a set of trainable parameters. Among the first steps in the pipeline of most algorithms is then the initialization of these parameters. The total number of parameters may vary from one template to another, as well as the shape of the tensor of parameters, or what role each of them plays in the circuit. Based on their role, we might want some of them to be centered around a value, evenly distributed, or even plain \(0\).
In this sense, PennyLane doesn’t restrict us in any way. Since it uses autograd, we are free to structure the parameter tensor(s) of a quantum function however we like. For its predefined templates, though, PennyLane does have one recommended way of initializing parameters: exploiting their specified shape, which is defined under the hood.
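Before turning to templates, here is a minimal NumPy-only sketch of the three initialization styles just mentioned (the shape (2, 3) is an arbitrary choice for illustration):

```python
import numpy as np

shape = (2, 3)  # an arbitrary parameter-tensor shape for illustration

uniform = np.random.random(shape)               # evenly distributed in [0, 1)
centered = np.random.normal(np.pi, 0.1, shape)  # centered around a value (here pi)
trivial = np.zeros(shape)                       # plain zeros
```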
In this how-to, we first define two simple circuits based on two of the built-in templates so we can showcase how to initialize their parameters. After these, we will also show an example that requires a custom function to prepare the initial parameters.
We start with the usual suspects:
import pennylane as qml
from pennylane import numpy as np
QAOA Embedding Layer
Let’s take a look at the predefined layer template for QAOA. We consider a simple example with only two qubits:
from pennylane.templates import QAOAEmbedding

dev = qml.device('default.qubit', wires=2)

@qml.qnode(dev)
def circuit(weights, f):
    QAOAEmbedding(features=f, weights=weights, wires=range(2))
    return qml.expval(qml.PauliZ(0))
If we want to run this circuit, we’ll have to give the weights some value! In PennyLane, we do this by using the shape attribute of the QAOAEmbedding template.
>>> print(QAOAEmbedding.shape(n_layers=2, n_wires=2))
(2, 3)
Note how QAOAEmbedding.shape outputs a tuple, which we can use right away as an argument. For instance, we can use NumPy’s random module to initialize every weight uniformly at random in the range \([0,1]\). We pass the shape of the QAOA template as an argument to get a tensor of the appropriate shape:
>>> weights = np.random.random(QAOAEmbedding.shape(n_layers=2, n_wires=2))
>>> features = np.array([1., 2.], requires_grad=False)
>>> circuit(weights, features)
tensor(-0.77166652, requires_grad=True)
It is always good to check exactly what we’ve run:
>>> print(circuit.draw())
 0: ──RX(1)──╭RZ(0.117)──RY(0.0351)──RX(1)──╭RZ(0.536)──RY(0.534)──RX(1)──┤ ⟨Z⟩
 1: ──RX(2)──╰RZ(0.117)──RY(0.879)───RX(2)──╰RZ(0.536)──RY(0.995)──RX(2)──┤
Sure enough, there are indeed \(2 \times 3\) parameters, as stated by the shape attribute: per layer, one for the \(ZZ\) gate and one for each of the two \(RY\) gates.
Basic Entangler Layer
For a different example, suppose we already have a large variational circuit trained for a specific task. We believe that appending one last basic entangler layer might benefit the overall performance of our model. At the same time, though, we would like this new layer to have a small impact on the output to begin with. This transfer-learning-flavoured idea can be realized using, for example, the BasicEntanglerLayers template:
from pennylane.templates import BasicEntanglerLayers

dev = qml.device('default.qubit', wires=3)

@qml.qnode(dev)
def circuit(weights):
    ###########################
    # Already trained circuit #
    ###########################
    BasicEntanglerLayers(weights=weights, wires=range(3))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(3)]
As a minimal example, we consider just this final layer. Since we want it to start having a very small impact on the overall dynamics, we initialize it as a trivial layer!
>>> weights = np.zeros(BasicEntanglerLayers.shape(n_layers=1, n_wires=3))
>>> print(circuit(weights))
[1. 1. 1.]
The output state is still the initial state \(|000\rangle\): the rotations are \(RX(0)\), and the CNOTs in the entangling ring act trivially on \(|0\rangle\) controls, so the layer acts as the identity on this input. Note how, again, we called the shape attribute of the template.
User-defined feature map
We approach the end of this how-to with a real-life example: a data re-uploading feature map. There are a number of ways to build the circuit for such a feature map. For instance, one can take advantage of PennyLane’s broadcast function, a handy way of populating a circuit with gates arranged in a given pattern.
We start by defining the building blocks of the feature map:
from pennylane import broadcast
from math import pi

def block(x, weights, wires):
    # three layers of 1-qubit gates
    broadcast(unitary=qml.Hadamard, wires=wires, pattern="single")
    broadcast(unitary=qml.RZ, wires=wires, pattern="single", parameters=x)
    broadcast(unitary=qml.RY, wires=wires, pattern="single", parameters=weights[0])

    # ring of controlled 2-qubit gates
    broadcast(unitary=qml.CRZ, wires=wires, pattern="ring", parameters=weights[1])
Next, we can define an ansatz based on this block. Common choices here are a simple layered repetition in depth, or a brick-layered pattern that applies the block to subsets of wires only. For the sake of simplicity, we go for the former.
n_wires = 3
dev = qml.device('default.qubit', wires=n_wires)

@qml.qnode(dev)
def circuit(x, params):
    for layer_params in params:
        block(x, layer_params, wires=range(n_wires))
    return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_wires)]
Notice this new circuit is not a PennyLane template yet (although this how-to teaches us how, we will not be following it for now). This means that we cannot just use the shape attribute to initialize the parameters at random. Instead, we need to come up with our own procedure. For instance, we could again use the np.random module, but this time initialize our parameters according to a normal distribution rescaled by \(\pi\). This way, the initial parameters concentrate around \(0\), but still take values comparable in magnitude to \(\pi\):
def random_params(n_layers, n_wires):
    return pi * np.random.randn(n_layers, 2, n_wires)
Notice how this expression differs from the ones we used above. For pre-defined templates we passed the shape attribute as an argument, whereas now we need to specify all the dimensions separately.
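If we want our custom initializer to mimic the template-style interface, a small tweak lets it accept a full shape tuple instead; `random_params_from_shape` below is a hypothetical name of our own:

```python
import numpy as np
from math import pi

def random_params_from_shape(shape):
    # unpack the shape tuple, mirroring how templates expose their shape
    return pi * np.random.randn(*shape)

params = random_params_from_shape((2, 2, 3))  # n_layers=2, two rows of angles, n_wires=3
```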
Finally, we can check that everything works as expected. Note that this last circuit knows how many layers there are solely from the first dimension of the params tensor, which the for loop iterates over.
>>> params = random_params(n_layers=2, n_wires=3)
>>> print(circuit([1., 1., 1.], params))
[-0.81115284 -0.53971458 -0.30064673]
>>> print(circuit.draw())
 0: ──H──RZ(1)──RY(-2.18)───╭C───────────────────────╭RZ(1.2)──H──────RZ(1)──────RY(0.32)────╭C─────────────────────╭RZ(1.18)──┤ ⟨Z⟩
 1: ──H──RZ(1)──RY(0.0114)──╰RZ(0.618)──╭C────────H──│─────────RZ(1)──RY(0.915)──────────────╰RZ(0.513)──╭C─────────│──────────┤ ⟨Z⟩
 2: ──H──RZ(1)──RY(1.87)────────────────╰RZ(1.5)─────╰C────────H──────RZ(1)──────RY(-0.203)──────────────╰RZ(0.92)──╰C─────────┤ ⟨Z⟩
And that was all, folks! Now we know how to initialize parameters for a range of quantum machine learning models, both user-defined and built-in to PennyLane.