{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# This cell is added by sphinx-gallery\n# It can be customized to whatever you like\n%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Quantum volume {#quantum_volume}\n==============\n\n::: {.meta}\n:property=\\\"og:description\\\": Learn about quantum volume, and how to\ncompute it. :property=\\\"og:image\\\":\n\n:::\n\n::: {.related}\nqsim\\_beyond\\_classical Beyond classical computing with qsim\n:::\n\n*Author: Olivia Di Matteo --- Posted: 15 December 2020. Last updated: 15\nApril 2021.*\n\nTwice per year, a project called the TOP500 releases a list of the 500\nmost powerful supercomputing systems in the world. However, there is a\nlarge amount of variation in how supercomputers are built. They may run\ndifferent operating systems and have varying amounts of memory.\n[Some](https://en.wikipedia.org/wiki/Fugaku_(supercomputer)) use 48-core\nprocessors, while\n[others](https://en.wikipedia.org/wiki/Sunway_TaihuLight) use processors\nwith up to 260 cores. The speed of processors will differ, and they may\nbe connected in different ways. We can\\'t rank them by simply counting\nthe number of processors!\n\nIn order to make a fair comparison, we need benchmarking standards that\ngive us a holistic view of their performance. To that end, the TOP500\nrankings are based on something called the LINPACK benchmark. The task\nof the supercomputers is to solve a dense system of linear equations,\nand the metric of interest is the rate at which they perform\n[floating-point operations\n(FLOPS)](https://en.wikipedia.org/wiki/FLOPS). Today\\'s top machines\nreach speeds well into the regime of hundreds of petaFLOPS! While a\nsingle number certainly cannot tell the whole story, it still gives us\ninsight into the quality of the machines, and provides a standard so we\ncan compare them.\n\nA similar problem is emerging with quantum computers: we can\\'t judge\nquantum computers on the number of qubits alone. 
Present-day devices\nhave a number of limitations, an important one being gate error rates.\nTypically the qubits on a chip are not all connected to each other, so\nit may not be possible to perform operations on arbitrary pairs of them.\n\nConsidering this, can we tell if a machine with 20 noisy qubits is\nbetter than one with 5 very high-quality qubits? Or if a machine with 8\nfully-connected qubits is better than one with 16 qubits of comparable\nerror rate, but arranged in a square lattice? How can we make\ncomparisons between different types of qubits?\n\n![..](../demonstrations/quantum_volume/qubit_graph_variety.svg){.align-center\nwidth=\"50.0%\"}\n\nWhich of these qubit hardware graphs is the best?\n\nTo compare across all these facets, researchers have proposed a metric\ncalled \\\"quantum volume\\\". Roughly, the quantum volume is a measure of\nthe effective number of qubits a processor has. It is calculated by\ndetermining the largest number of qubits on which it can reliably run\ncircuits of a prescribed type. You can think of it loosely as a quantum\nanalogue of the LINPACK benchmark. Different quantum computers are\ntasked with solving the same problem, and the success will be a function\nof many properties: error rates, qubit connectivity, even the quality of\nthe software stack. A single number won\\'t tell us everything about a\nquantum computer, but it does establish a framework for comparing them.\n\nAfter working through this tutorial, you\\'ll be able to define quantum\nvolume, explain the problem on which it\\'s based, and run the protocol\nto compute it!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Designing a benchmark for quantum computers\n===========================================\n\nThere are many different properties of a quantum computer that\ncontribute to the successful execution of a computation. Therefore, we\nmust be very explicit about what exactly we are benchmarking, and what\nis our measure of success. In general, to set up a benchmark for a\nquantum computer we need to decide on a number of things:\n\n1. A family of circuits with a well-defined structure and variable size\n2. A set of rules detailing how the circuits can be compiled\n3. A measure of success for individual circuits\n4. A measure of success for the family of circuits\n5. (Optional) An experimental design specifying how the circuits are to\n be run\n\nWe\\'ll work through this list in order to see how the protocol for\ncomputing quantum volume fits within this framework.\n\nThe circuits\n------------\n\nQuantum volume relates to the largest *square* circuit that a quantum\nprocessor can run reliably. This benchmark uses *random* square circuits\nwith a very particular form:\n\n![..](../demonstrations/quantum_volume/model_circuit_cross.png){.align-center\nwidth=\"60.0%\"}\n\nA schematic of the random circuit structure used in the quantum volume\nprotocol. Image source:.\n\nSpecifically, the circuits consist of $d$ sequential layers acting on\n$d$ qubits. Each layer consists of two parts: a random permutation of\nthe qubits, followed by Haar-random SU(4) operations performed on\nneighbouring pairs of qubits. 
(When the number of qubits is odd, the\nbottom-most qubit is idle while the SU(4) operations run on the pairs.\nHowever, it will still be incorporated by way of the permutations.)\nThese circuits satisfy the criteria in item 1 \\-\\-- they have\nwell-defined structure, and it is clear how they can be scaled to\ndifferent sizes.\n\nAs for the compilation rules of item 2, to compute quantum volume we\\'re\nallowed to do essentially anything we\\'d like to the circuits in order\nto improve them. This includes optimization, hardware-aware\nconsiderations such as qubit placement and routing, and even resynthesis\nby finding unitaries that are close to the target, but easier to\nimplement on the hardware.\n\nBoth the circuit structure and the compilation highlight how quantum\nvolume is about more than just the number of qubits. The error rates\nwill affect the achievable depth, and the qubit connectivity contributes\nthrough the layers of permutations because a very well-connected\nprocessor will be able to implement these in fewer steps than a\nless-connected one. Even the quality of the software and the compiler\nplays a role here: higher-quality compilers will produce circuits that\nfit better on the target devices, and will thus produce higher quality\nresults.\n\nThe measures of success\n-----------------------\n\nNow that we have our circuits, we have to define the quantities that\nwill indicate how well we\\'re able to run them. For that, we need a\nproblem to solve. The problem used for computing quantum volume is\ncalled the *heavy output generation problem*. It has roots in the\nproposals for demonstrating quantum advantage. Many such proposals make\nuse of the properties of various random quantum circuit families, as the\ndistribution of the measurement outcomes may not be easy to sample using\nclassical techniques.\n\nA distribution that is theorized to fulfill this property is the\ndistribution of *heavy* output bit strings. 
Heavy bit strings are those\nwhose outcome probabilities are above the median of the distribution.\nFor example, suppose we run a two-qubit circuit and find that the\nmeasurement probabilities for the output states are as follows:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"measurement_probs = {\"00\": 0.558, \"01\": 0.182, \"10\": 0.234, \"11\": 0.026}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The median of this probability distribution is:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nprob_array = np.fromiter(measurement_probs.values(), dtype=np.float64)\nprint(f\"Median = {np.median(prob_array):.3f}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"::: {.rst-class}\nsphx-glr-script-out\n\nOut:\n\n``` {.none}\nMedian = 0.208\n```\n:::\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This means that the heavy bit strings are \\'00\\' and \\'10\\', because\nthese are the two probabilities above the median. If we were to run this\ncircuit, the probability of obtaining one of the heavy outputs is:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"heavy_output_prob = np.sum(prob_array[prob_array > np.median(prob_array)])\nprint(f\"Heavy output probability = {heavy_output_prob}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"::: {.rst-class}\nsphx-glr-script-out\n\nOut:\n\n``` {.none}\nHeavy output probability = 0.792\n```\n:::\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each circuit in a circuit family has its own heavy output probability.\nIf our quantum computer is of high quality, then we should expect to see\nheavy outputs quite often across all the circuits. On the other hand, if\nit\\'s of poor quality and everything is totally decohered, we will end\nup with output probabilities that are roughly all the same, as noise\nwill reduce the probabilities to the uniform distribution.\n\nThe heavy output generation problem quantifies this \\-\\-- for our family\nof random circuits, do we obtain heavy outputs at least 2/3 of the time\non average? Furthermore, do we obtain this with high confidence? This is\nthe basis for quantum volume. Looking back at the criteria for our\nbenchmarks, for item 3 the measure of success for each circuit is how\noften we obtain heavy outputs when we run the circuit and take a\nmeasurement. For item 4 the measure of success for the whole family is\nwhether or not the mean of these probabilities is greater than 2/3 with\nhigh confidence.\n\nOn a related note, it is important to determine what heavy output\nprobability we should *expect* to see on average. The intuition for how\nthis can be calculated is as follows. Suppose that our random square\ncircuits scramble things up enough so that the effective operation looks\nlike a Haar-random unitary $U$. Since in the circuits we are applying\n$U$ to the all-zero ket, the measurement outcome probabilities will be\nthe moduli squared of the entries in the first column of $U$.\n\nNow if $U$ is Haar-random, we can say something about the form of these\nentries. In particular, they are complex numbers for which both the real\nand imaginary parts are normally distributed with mean 0 and variance\n$1/(2 \\cdot 2^m)$, where $m$ is the number of qubits, so that the\nsquared moduli average to $1/2^m$. 
Taking the modulus squared\nof such numbers and making a histogram yields a distribution of\nprobabilities with the form $\\hbox{Pr}(p) \\sim 2^m e^{-2^m p}.$ This is\nalso known as the *Porter-Thomas distribution*.\n\nBy looking at the form of the underlying probability distribution, the\nexponential distribution $\\hbox{Pr}(x) = e^{-x}$, we can calculate some\nproperties of the heavy output probabilities. First, we can integrate\nthe exponential distribution to find that the median sits at $\\ln 2$. We\ncan then compute the expected total probability of outcomes above the\nmedian by integrating $x e^{-x}$ from $\\ln 2$ to $\\infty$ to obtain\n$(1 + \\ln 2)/2$. This is the expected heavy output probability!\nNumerically it is around 0.85, as we will observe later in our results.\n\nThe benchmark\n=============\n\nNow that we have our circuits and our measures of success, we\\'re ready\nto define the quantum volume.\n\n::: {.admonition .defn}\nDefinition\n\nThe quantum volume $V_Q$ of an $n$-qubit processor is defined as\n\n$$\\log_2(V_Q) = \\hbox{argmax}_m \\min (m, d(m))$$\n\nwhere $m \\leq n$ is a number of qubits, and $d(m)$ is the depth of the\nlargest square circuit on $m$ qubits for which we can reliably sample\nheavy outputs with probability greater than 2/3.\n:::\n\nTo see this more concretely, suppose we have a 20-qubit device and find\nthat we get heavy outputs reliably for up to depth-4 circuits on any set\nof 4 qubits; then the quantum volume is $\\log_2 V_Q = 4$. Quantum volume\nis incremental, as shown below \\-\\-- we gradually work our way up to\nlarger circuits, until we find something we can\\'t do. Very loosely,\nquantum volume is like an effective number of qubits. 
Even if we have\nthose 20 qubits, only groups of up to 4 of them work well enough\ntogether to sample from distributions that would be considered hard.\n\n![..](../demonstrations/quantum_volume/qv_square_circuits.svg){.align-center\nwidth=\"75.0%\"}\n\nThis quantum computer has $\\log_2 V_Q = 4$, as the 4-qubit square\ncircuits are the largest ones it can run successfully.\n\nThe maximum achieved quantum volume has been doubling at an increasing\nrate. As of late 2020, the largest reported values were\n$\\log_2 V_Q = 6$ on IBM\\'s 27-qubit superconducting device\n[ibmq\\_montreal]{.title-ref}, and $\\log_2 V_Q = 7$ on a Honeywell\ntrapped-ion processor. A device with an expected quantum volume\nof $\\log_2 V_Q\n= 22$ has also been announced by IonQ, though benchmarking results have\nnot yet been published.\n\n::: {.note}\n::: {.title}\nNote\n:::\n\nIn many sources, the quantum volume of processors is reported as $V_Q$\nexplicitly, rather than $\\log_2 V_Q$ as is the convention in this demo.\nAs such, IonQ\\'s processor has the potential for a quantum volume of\n$2^{22} > 4000000$. Here we use the $\\log$ because it is more\nstraightforward to understand that they have 22 high-quality,\nwell-connected qubits than to extract this at first glance from the\nexplicit value of the volume.\n:::\n"
]
},
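{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the definition concrete, here is a minimal sketch that evaluates\n",
"$\\log_2(V_Q) = \\hbox{argmax}_m \\min(m, d(m))$ for a hypothetical 5-qubit\n",
"device. The achievable depths below are invented purely for illustration.\n",
"\n",
"```python\n",
"# Hypothetical achievable depths d(m): for each qubit count m, the largest\n",
"# depth at which heavy outputs are still sampled reliably (made-up values)\n",
"achievable_depth = {2: 6, 3: 5, 4: 4, 5: 3}\n",
"\n",
"# log2(V_Q) is the largest value of min(m, d(m)) over all m\n",
"log2_VQ = max(min(m, d) for m, d in achievable_depth.items())\n",
"print(log2_VQ)  # 4\n",
"```\n",
"\n",
"With these numbers, the 4-qubit square circuits are the largest that\n",
"succeed, so $\\log_2 V_Q = 4$, just as in the 20-qubit example above.\n"
]
},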
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Computing the quantum volume\n============================\n\nEquipped with our definition of quantum volume, it\\'s time to compute it\nourselves! We\\'ll use the\n[PennyLane-Qiskit](https://pennylaneqiskit.readthedocs.io/en/latest/)\nplugin to compute the volume of a simulated version of one of the IBM\nprocessors, since their properties are easily accessible through this\nplugin.\n\nLoosely, the protocol for quantum volume consists of three steps:\n\n1. Construct random square circuits of increasing size\n2. Run those circuits on both a simulator and on a noisy hardware\n device\n3. Perform a statistical analysis of the results to determine what size\n circuits the device can run reliably\n\nThe largest reliable size will become the $m$ in the expression for\nquantum volume.\n\nStep 1: construct random square circuits\n----------------------------------------\n\nRecall that the structure of the circuits above is alternating layers of\npermutations and random SU(4) operations on pairs of qubits. Let\\'s\nimplement the generation of such circuits in PennyLane.\n\nFirst we write a function that randomly permutes qubits. We\\'ll do this\nby using numpy to generate a permutation, and then apply it with the\nbuilt-in `~.pennylane.Permute`{.interpreted-text role=\"func\"}\nsubroutine.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import pennylane as qml\n\n# Object for random number generation from numpy\nrng = np.random.default_rng()\n\ndef permute_qubits(num_qubits):\n # A random permutation\n perm_order = list(rng.permutation(num_qubits))\n qml.Permute(perm_order, wires=list(range(num_qubits)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we need to apply SU(4) gates to pairs of qubits. PennyLane\ndoesn\\'t have built-in functionality to generate these random matrices;\nhowever, its cousin [Strawberry Fields](https://strawberryfields.ai/)\ndoes! We will use the `random_interferometer` method, which can generate\nunitary matrices uniformly at random. This function actually generates\nelements of U(4) rather than SU(4), but the two differ only by a global\nphase, which has no physical effect.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from strawberryfields.utils import random_interferometer\n\ndef apply_random_su4_layer(num_qubits):\n for qubit_idx in range(0, num_qubits, 2):\n if qubit_idx < num_qubits - 1:\n rand_haar_su4 = random_interferometer(N=4)\n qml.QubitUnitary(rand_haar_su4, wires=[qubit_idx, qubit_idx + 1])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let\\'s write a layering method to put the two together \\-\\-- this\nis just for convenience and to highlight the fact that these two methods\ntogether make up one layer of the circuit depth.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def qv_circuit_layer(num_qubits):\n permute_qubits(num_qubits)\n apply_random_su4_layer(num_qubits)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let\\'s take a look! We\\'ll set up an ideal device with 5 qubits, and\ngenerate a circuit with 3 qubits. In this demo, we\\'ll work explicitly\nwith [quantum\ntapes](https://pennylane.readthedocs.io/en/latest/code/qml_tape.html)\nsince they are not immediately tied to a device. This will be convenient\nlater when we need to run the same random circuit on two devices\nindependently.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"num_qubits = 5\ndev_ideal = qml.device(\"default.qubit\", shots=None, wires=num_qubits)\n\nm = 3 # number of qubits\n\nwith qml.tape.QuantumTape() as tape:\n qml.layer(qv_circuit_layer, m, num_qubits=m)\n\nexpanded_tape = tape.expand(stop_at=lambda op: isinstance(op, qml.QubitUnitary))\nprint(qml.drawer.tape_text(expanded_tape, wire_order=dev_ideal.wires, show_all_wires=True, show_matrices=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"::: {.rst-class}\nsphx-glr-script-out\n\nOut:\n\n``` {.none}\n0: \u2500\u256dSWAP\u2500\u256dU(M0)\u2500\u256dU(M1)\u2500\u256dSWAP\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256dU(M2)\u2500\u2524\n1: \u2500\u2570SWAP\u2500\u2570U(M0)\u2500\u2570U(M1)\u2500\u2502\u2500\u2500\u2500\u2500\u2500\u256dSWAP\u2500\u2570U(M2)\u2500\u2524\n2: \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2570SWAP\u2500\u2570SWAP\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n3: \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n4: \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\nM0 =\n[[-0.17514647+0.00759447j 0.11975927+0.16007614j -0.41793925+0.49643728j\n 0.62304058-0.34640531j]\n [-0.73367896-0.58079555j -0.11348577+0.00751965j -0.02640159-0.15592112j\n -0.19507153-0.21998821j]\n [ 0.02988983+0.09364586j -0.74053162+0.55032455j 0.31350059-0.01305651j\n 0.16283233-0.11885036j]\n [-0.13103809-0.25850305j 0.18298996+0.2497364j 0.34879438+0.57771772j\n -0.02385446+0.60346274j]]\nM1 =\n[[ 0.14296171+0.28087257j -0.5985737 -0.27489922j -0.43838149+0.10344812j\n 0.04022491+0.51216658j]\n [-0.21538853+0.02728431j -0.24776721-0.57146257j 0.60975755+0.36241573j\n 0.21787038-0.11953391j]\n [-0.24405375+0.05780278j -0.11688629-0.17397518j -0.51628349-0.11381455j\n 0.44143429-0.64714776j]\n [-0.750841 -0.47630904j -0.28666068+0.22820556j -0.09459735+0.07429451j\n -0.17243398+0.17582253j]]\nM2 =\n[[-0.63733359+1.91519046e-01j -0.49615702+9.79920998e-05j\n 0.06949634+4.54968771e-01j 0.21112196-2.33571716e-01j]\n[ 0.4774216 
+5.19692450e-02j -0.2741782 -3.71778068e-01j\n 0.09817361+6.01972062e-01j -0.39517581+1.66741872e-01j]\n[ 0.14401687-1.53582182e-01j 0.51636466-1.58216631e-01j\n 0.43804144+3.62586089e-01j 0.4473567 -3.74872915e-01j]\n[ 0.51670588+1.23210608e-01j -0.48982566-9.40288988e-02j\n -0.19210465-2.36457367e-01j 0.53202679-3.05278186e-01j]]\n```\n:::\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The first thing to note is that the last two qubits are never used in\nthe operations, since the quantum volume circuits are square. Another\nimportant point is that this circuit with 3 layers actually has depth\nmuch greater than 3, since each layer has both SWAPs and SU(4)\noperations that are further decomposed into elementary gates when run on\nthe actual processor.\n\nOne last thing we\\'ll need before running our circuits is the machinery\nto determine the heavy outputs. This is quite an interesting aspect of\nthe protocol \\-\\-- we\\'re required to compute the heavy outputs\nclassically in order to get the results! As a consequence, it will only\nbe possible to calculate quantum volume for processors up to a certain\npoint before they become too large.\n\nThat said, classical simulators are always improving, and can simulate\ncircuits with numbers of qubits well into the double digits (though they\nmay need a supercomputer to do so). Furthermore, the designers of the\nprotocol don\\'t expect this to be an issue until gate error rates\ndecrease below $\\approx 10^{-4}$, after which we may need to make\nadjustments to remove the classical simulation, or even consider new\nvolume metrics.\n\nThe heavy outputs can be retrieved from a classically-obtained\nprobability distribution as follows:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"def heavy_output_set(m, probs):\n # Compute heavy outputs of an m-qubit circuit with measurement outcome\n # probabilities given by probs, which is an array with the probabilities\n # ordered as '000', '001', ... '111'.\n\n # Sort the probabilities so that those above the median are in the second half\n probs_ascending_order = np.argsort(probs)\n sorted_probs = probs[probs_ascending_order]\n\n # Heavy outputs are the bit strings above the median\n heavy_outputs = [\n # Convert integer indices to m-bit binary strings\n format(x, f\"#0{m+2}b\")[2:] for x in list(probs_ascending_order[2 ** (m - 1) :])\n ]\n\n # Probability of a heavy output\n prob_heavy_output = np.sum(sorted_probs[2 ** (m - 1) :])\n\n return heavy_outputs, prob_heavy_output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an example, let\\'s compute the heavy outputs and probability for our\ncircuit above.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Add a measurement of the first m qubits to the previous circuit\nwith tape:\n qml.probs(wires=range(m))\n\n# Run the circuit, compute heavy outputs, and print results\noutput_probs = qml.execute([tape], dev_ideal, None) # qml.execute returns a list of results\noutput_probs = output_probs[0].reshape(2 ** m)\nheavy_outputs, prob_heavy_output = heavy_output_set(m, output_probs)\n\nprint(\"State\\tProbability\")\nfor idx, prob in enumerate(output_probs):\n bit_string = format(idx, \"#05b\")[2:]\n print(f\"{bit_string}\\t{prob:.4f}\")\n\nprint(f\"\\nMedian is {np.median(output_probs):.4f}\")\nprint(f\"Probability of a heavy output is {prob_heavy_output:.4f}\")\nprint(f\"Heavy outputs are {heavy_outputs}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"::: {.rst-class}\nsphx-glr-script-out\n\nOut:\n\n``` {.none}\nState Probability\n000 0.0157\n001 0.0200\n010 0.0026\n011 0.2765\n100 0.0175\n101 0.4266\n110 0.0045\n111 0.2365\n\nMedian is 0.0188\nProbability of a heavy output is 0.9596\nHeavy outputs are ['001', '111', '011', '101']\n```\n:::\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Step 2: run the circuits\n========================\n\nNow it\\'s time to run the protocol. First, let\\'s set up our hardware\ndevice. We\\'ll use a simulated version of the 5-qubit IBM Ourense as an\nexample \\-\\-- the reported quantum volume according to IBM is $V_Q=8$,\nso we endeavour to reproduce that here. This means that we should be\nable to run our square circuits reliably on up to $\\log_2 V_Q =3$\nqubits.\n\n::: {.note}\n::: {.title}\nNote\n:::\n\nIn order to access the IBM Q backend, users must have an IBM Q account\nconfigured. This can be done by running:\n\n> ``` {.python3}\n> from qiskit import IBMQ\n> IBMQ.save_account('MY_API_TOKEN')\n> ```\n\nA token can be generated by logging into your IBM Q account\n[here](https://quantum-computing.ibm.com/login) .\n:::\n\n::: {.note}\n::: {.title}\nNote\n:::\n\nIn the time since the original release of this demo, the Ourense device\nis no longer available from IBM Q. However, we leave the original\nresults for expository purposes, and note that the methods are\napplicable in general. Users can get a list of available IBM Q backends\nby importing IBM Q, specifying their provider and then calling:\n`provider.backends()`\n:::\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# The Ourense backend has since been retired; the 5-qubit ibmq_bogota\n# backend stands in for it here\ndev_ourense = qml.device(\"qiskit.ibmq\", wires=5, backend=\"ibmq_bogota\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we can take a look at the arrangement of the qubits on the\nprocessor by plotting its hardware graph.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\nimport networkx as nx\n\nourense_hardware_graph = nx.Graph(dev_ourense.backend.configuration().coupling_map)\n\nnx.draw_networkx(\n ourense_hardware_graph,\n node_color=\"cyan\",\n labels={x: x for x in range(dev_ourense.num_wires)},\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../demonstrations/quantum_volume/ourense.svg){.align-center\nwidth=\"75.0%\"}\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This hardware graph is not fully connected, so the quantum compiler will\nhave to make some adjustments when non-connected qubits need to\ninteract.\n\nTo actually perform the simulations, we\\'ll need to access a copy of the\nOurense noise model. Again, we won\\'t be running on Ourense directly\n\\-\\--we\\'ll set up a local device to simulate its behaviour.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from qiskit.providers.aer import noise\n\nnoise_model = noise.NoiseModel.from_backend(dev_ourense.backend)\n\ndev_noisy = qml.device(\n \"qiskit.aer\", wires=dev_ourense.num_wires, shots=1000, noise_model=noise_model\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a final point, since we are allowed to do as much optimization as we\nlike, let\\'s put the compiler to work. The compiler will perform a\nnumber of optimizations to simplify our circuit. We\\'ll also specify\nsome high-quality qubit placement and routing techniques in order to fit\nthe circuits on the hardware graph in the best way possible.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"coupling_map = dev_ourense.backend.configuration().to_dict()[\"coupling_map\"]\n\ndev_noisy.set_transpile_args(\n **{\n \"optimization_level\": 3,\n \"coupling_map\": coupling_map,\n \"layout_method\": \"sabre\",\n \"routing_method\": \"sabre\",\n }\n)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let\\'s run the protocol. We\\'ll start with the smallest circuits on 2\nqubits, and make our way up to 5. At each $m$, we\\'ll look at 200\nrandomly generated circuits.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"min_m = 2\nmax_m = 5\nnum_ms = (max_m - min_m) + 1\n\nnum_trials = 200\n\n# To store the results\nprobs_ideal = np.zeros((num_ms, num_trials))\nprobs_noisy = np.zeros((num_ms, num_trials))\n\nfor m in range(min_m, max_m + 1):\n for trial in range(num_trials):\n\n # Simulate the circuit analytically\n with qml.tape.QuantumTape() as tape:\n qml.layer(qv_circuit_layer, m, num_qubits=m)\n qml.probs(wires=range(m))\n\n output_probs = qml.execute([tape], dev_ideal, None)\n output_probs = output_probs[0].reshape(2 ** m, )\n heavy_outputs, prob_heavy_output = heavy_output_set(m, output_probs)\n\n # Execute circuit on the noisy device\n qml.execute([tape], dev_noisy, None)\n\n # Get the output bit strings; flip ordering of qubits to match PennyLane\n counts = dev_noisy._current_job.result().get_counts()\n reordered_counts = {x[::-1]: counts[x] for x in counts.keys()}\n\n device_heavy_outputs = np.sum(\n [\n reordered_counts[x] if x[:m] in heavy_outputs else 0\n for x in reordered_counts.keys()\n ]\n )\n fraction_device_heavy_output = device_heavy_outputs / dev_noisy.shots\n\n probs_ideal[m - min_m, trial] = prob_heavy_output\n probs_noisy[m - min_m, trial] = fraction_device_heavy_output"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Step 3: perform a statistical analysis\n======================================\n\nHaving run our experiments, we can now get to the heart of the quantum\nvolume protocol: what *is* the largest square circuit that our processor\ncan run? Let\\'s first check out the means and see how much higher they\nare than 2/3.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"probs_mean_ideal = np.mean(probs_ideal, axis=1)\nprobs_mean_noisy = np.mean(probs_noisy, axis=1)\n\nprint(\"Ideal mean probabilities:\")\nfor idx, prob in enumerate(probs_mean_ideal):\n print(f\"m = {idx + min_m}: {prob:.6f} {'above' if prob > 2/3 else 'below'} threshold.\")\n\nprint(\"\\nDevice mean probabilities:\")\nfor idx, prob in enumerate(probs_mean_noisy):\n print(f\"m = {idx + min_m}: {prob:.6f} {'above' if prob > 2/3 else 'below'} threshold.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"::: {.rst-class}\nsphx-glr-script-out\n\nOut:\n\n``` {.none}\nIdeal mean probabilities:\nm = 2: 0.797979 above threshold.\nm = 3: 0.844052 above threshold.\nm = 4: 0.841203 above threshold.\nm = 5: 0.856904 above threshold.\n\nDevice mean probabilities:\nm = 2: 0.773760 above threshold.\nm = 3: 0.794875 above threshold.\nm = 4: 0.722860 above threshold.\nm = 5: 0.692935 above threshold.\n```\n:::\n"
]
},
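{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the ideal numbers, the mean heavy output\n",
"probability of a Haar-random state can be estimated directly, with no\n",
"circuits at all. The sketch below assumes, as in the derivation earlier,\n",
"that the first column of a Haar-random unitary behaves like a normalized\n",
"complex Gaussian vector; the qubit count and seed are arbitrary choices.\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng_check = np.random.default_rng(1234)\n",
"dim = 2 ** 10  # dimension of a 10-qubit state; any moderate size works\n",
"\n",
"heavy = []\n",
"for _ in range(200):\n",
"    # Normalized complex Gaussian vector ~ column of a Haar-random unitary\n",
"    z = rng_check.normal(size=dim) + 1j * rng_check.normal(size=dim)\n",
"    p = np.abs(z) ** 2\n",
"    p /= p.sum()\n",
"    heavy.append(p[p > np.median(p)].sum())\n",
"\n",
"print(f\"Mean heavy output probability: {np.mean(heavy):.3f}\")\n",
"```\n",
"\n",
"The printed mean lands close to $(1 + \\ln 2)/2 \\approx 0.85$, in line\n",
"with the ideal results above.\n"
]
},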
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that the ideal probabilities are well over 2/3. In fact, they\\'re\nquite close to the expected value of $(1 + \\ln 2)/2$, which we recall\nfrom above is $\\approx 0.85$. For this experiment, we see that the\ndevice probabilities are also above the threshold. But it isn\\'t enough\nthat just the mean of the heavy output probabilities is greater than\n2/3. Since we\\'re dealing with randomness, we also want to ensure these\nresults were not just a fluke! To be confident, we want the mean to stay\nabove 2/3 even after subtracting 2 standard deviations $(\\sigma)$. This\nis referred to as a 97.5% confidence interval (since roughly 97.5% of a\nnormal distribution sits above the point $2\\sigma$ below the mean).\n\nAt this point, we\\'re going to do some statistical sorcery and make some\nassumptions about our distributions. Whether or not a circuit is\nsuccessful (in the sense that it produces heavy outputs more than 2/3 of\nthe time) is a binary outcome. When we sample many circuits, it is\nalmost like we are sampling from a *binomial distribution* with success\nprobability equal to the heavy output probability. In the limit of a\nlarge number of samples (in this case 200 circuits), a binomial\ndistribution starts to look like a normal distribution. If we\nmake this approximation, we can compute the standard deviation and use\nit to make our confidence interval. With the normal approximation, the\nstandard deviation is\n\n$$\\sigma = \\sqrt{\\frac{p_h(1 - p_h)}{N}},$$\n\nwhere $p_h$ is the average heavy output probability, and $N$ is the\nnumber of circuits.\n"
]
},
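{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the figure quoted above (a small aside, not\npart of the protocol itself), we can evaluate the asymptotic expression\n$(1 + \\ln 2)/2$ for the heavy output probability directly:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\n\n# Asymptotic mean heavy output probability for large random square circuits\nexpected_heavy = (1 + np.log(2)) / 2\nprint(f\"Expected heavy output probability: {expected_heavy:.6f}\")"
]
},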
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"stds_ideal = np.sqrt(probs_mean_ideal * (1 - probs_mean_ideal) / num_trials)\nstds_noisy = np.sqrt(probs_mean_noisy * (1 - probs_mean_noisy) / num_trials)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that we have our standard deviations, let\\'s see if our means are at\nleast $2\\sigma$ away from the threshold!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"fig, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(9, 6))\nax = ax.ravel()\n\nfor m in range(min_m - 2, max_m + 1 - 2):\n ax[m].hist(probs_noisy[m, :])\n ax[m].set_title(f\"m = {m + min_m}\", fontsize=16)\n ax[m].set_xlabel(\"Heavy output probability\", fontsize=14)\n ax[m].set_ylabel(\"Occurrences\", fontsize=14)\n ax[m].axvline(x=2.0 / 3, color=\"black\", label=\"2/3\")\n ax[m].axvline(x=probs_mean_noisy[m], color=\"red\", label=\"Mean\")\n ax[m].axvline(\n x=(probs_mean_noisy[m] - 2 * stds_noisy[m]),\n color=\"red\",\n linestyle=\"dashed\",\n label=\"2\u03c3\",\n )\n\nfig.suptitle(\"Heavy output distributions for (simulated) Ourense QPU\", fontsize=18)\nplt.legend(fontsize=14)\nplt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"![](../demonstrations/quantum_volume/ourense_heavy_output_distributions.svg){.align-center\nwidth=\"90.0%\"}\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let\\'s verify this numerically:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"two_sigma_below = probs_mean_noisy - 2 * stds_noisy\n\nfor idx, prob in enumerate(two_sigma_below):\n print(f\"m = {idx + min_m}: {prob:.6f} {'above' if prob > 2/3 else 'below'} threshold.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"::: {.rst-class}\nsphx-glr-script-out\n\nOut:\n\n``` {.none}\nm = 2: 0.714590 above threshold.\nm = 3: 0.737770 above threshold.\nm = 4: 0.659562 below threshold.\nm = 5: 0.627701 below threshold.\n```\n:::\n"
]
},
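{
"cell_type": "markdown",
"metadata": {},
"source": [
"Given lower bounds like these, we can also extract $\\log_2 V_Q$\nprogrammatically as the largest $m$ such that every circuit size up to\nand including $m$ clears the threshold. Below is a minimal sketch using\nthe values printed above; in the notebook you could equally iterate over\nthe `two_sigma_below` array computed earlier:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"min_m = 2\n\n# Lower bounds (mean - 2 * sigma) per circuit size m, from the output above\nlower_bounds = [0.714590, 0.737770, 0.659562, 0.627701]\n\nlog2_vq = 0\nfor idx, lower in enumerate(lower_bounds):\n    if lower > 2 / 3:\n        log2_vq = idx + min_m  # this m passes the test\n    else:\n        break  # all smaller m must also pass, so stop at the first failure\n\nprint(f\"log2(V_Q) = {log2_vq}, so V_Q = {2 ** log2_vq}\")"
]
},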
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We see that our means are more than $2\\sigma$ above the threshold only\nfor $m=2$ and $m=3$. Thus, we find that the quantum volume of our\nsimulated Ourense is $\\log_2 V_Q = 3$, or $V_Q = 8$, as expected.\n\nThis framework and code will allow you to calculate the quantum volume\nof many different processors. Try it yourself! What happens if we don\\'t\nspecify a large amount of compiler optimization? How does the volume\ncompare across different hardware devices? You can even build your own\ndevice configurations and noise models to explore the extent to which\ndifferent factors affect the volume.\n\nConcluding thoughts\n===================\n\nQuantum volume is a metric for comparing the quality of different\nquantum computers. By determining the largest square random circuits a\nprocessor can run reliably, it provides a measure of the effective\nnumber of qubits a processor has. Furthermore, it goes beyond gauging\nquality by the number of qubits alone \\-\\-- it incorporates many\ndifferent aspects of a device, such as its compiler, qubit connectivity,\nand gate error rates.\n\nHowever, as with any benchmark, it is not without limitations. A key one\nalready discussed is that the heavy output generation problem requires\nus to simulate circuits classically in addition to running them on a\ndevice. While this is perhaps not an issue now, it will surely become\none in the future. The number of qubits continues to increase and error\nrates are getting lower, both of which imply that our square circuits\nwill grow in both width and depth as time goes on. Eventually they will\nreach a point where they are no longer classically simulable, and we\nwill have to design new benchmarks.\n\nAnother limitation is that the protocol only looks at one type of\ncircuit: square circuits. It might be the case that a processor has very\nfew qubits, but also very low error rates. For example, what if a\nprocessor with 5 qubits can run circuits with up to 20 layers? Quantum\nvolume would cap this at $\\log_2 V_Q = 5$, so the high quality of those\nqubits would not be reflected in the metric. To that end, a more general\n*volumetric benchmark* framework was proposed that includes not only\nsquare circuits, but also rectangular circuits [^1]. Investigating very\ndeep circuits on few qubits (and very shallow circuits on many qubits)\nwill give us a broader overview of a processor\\'s quality. Furthermore,\nthe flexibility of the framework of [^1] will surely inspire us to\ncreate new types of benchmarks. Having a variety of benchmarks\ncalculated in different ways is beneficial and gives us a broader view\nof the performance of quantum computers.\n\nReferences {#quantum_volume_references}\n==========\n\nAbout the author\n================\n\n[^1]: Blume-Kohout, R., & Young, K. C., A volumetric framework for\n quantum computer benchmarks, [Quantum, 4, 362\n (2020).](http://dx.doi.org/10.22331/q-2020-11-15-362)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 0
}