# Introduction

The IBM Quantum Challenge Fall 2022 runs from 9:00 a.m. US Eastern on Friday, November 11 until 9:00 a.m. US Eastern on Friday, November 18, and will focus on two trailblazing offerings unveiled by IBM Quantum in recent years: Qiskit Runtime and Primitives. Participants in this year’s Fall Challenge will enjoy eight days of learning, exploring the most promising applications in quantum computing through special missions distributed across four levels of ascending difficulty.

Qiskit’s Fall 2022 event focused on two new offerings:

Primitives - foundational, elementary building blocks that let users perform quantum computations, developers implement quantum algorithms, and researchers solve complex problems and deliver new applications.

Runtime - the Qiskit Runtime service is built on the concept of containerized execution: an execution model where multiple elements of computation are packaged together and run portably on any system.

In what has turned out to be one of the best narratives of these Quantum events, the Qiskit Fall 2022 event put us in the captain’s seat of the Earth’s first faster-than-light starship, solving quantum-inspired problems in an attempt to keep the crew safe throughout the mission. This was my first experience with Qiskit’s Primitives and Runtime execution, and I can see the benefit of this type of workload management, especially when experimenting with many iterations of small variations.

Without any further ado, let’s jump into the challenges.

# Lab1 - Qiskit Runtime and Primitives

## Bernstein Vazirani

The first lab is all about learning the fundamentals of Primitives and how to use them in practice. Before we jump right into that, however, we are presented with the background of the Bernstein-Vazirani algorithm, a fundamental quantum algorithm which showed that a quantum computer can offer advantages as a computational tool for certain problems.

After stepping through a concrete example, our first challenge was to define a BV function that works for any “hidden string”.

## Sampler & Parameterized Circuits

The next concept we are introduced to is parameterized circuits, which in this case allow us to dynamically bind P-gate rotations to a sample circuit. We combine this with an introduction to Sampler and to passing our circuit through Runtime.

## Estimator

With the basics covered, we jump into a multi-step exercise exploring how we can pass multiple sets of parameters and circuits to Estimator and execute them in a single call to gather the expectation values.

## Quantum Error Mitigation & Suppression

The last part of the first lab was to explore error mitigation and suppression in the context of Qiskit Runtime.

## Sampler & M3

Qiskit Sampler allows us to leverage M3 (matrix-free measurement mitigation) error mitigation routines. As part of the lab we are given a quick background on noise and M3.

M3 works in a reduced subspace defined by the noisy inputs that need to be corrected. Since this input space can be smaller than the full dimensionality of the Hilbert space, the resulting linear system of equations is much easier to solve.

$$\tilde{A}^{-1}\tilde{A}\,\vec{p}_{ideal} = \tilde{A}^{-1}\vec{p}_{noisy}$$

All that is left is putting the M3 approach to code and testing it out.
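A toy numpy illustration of the reduced-subspace idea (not the actual `mthree` package): only the observed bitstrings enter the assignment matrix, so the linear system stays small. The error rates and observed distribution below are made up for illustration.

```python
import numpy as np

# assumed single-qubit readout error rates
p01, p10 = 0.02, 0.05            # P(read 1 | prepared 0), P(read 0 | prepared 1)
A1 = np.array([[1 - p01, p10],
               [p01, 1 - p10]])

# suppose only two of the four 2-qubit bitstrings were observed: '00' and '11'
observed = [0b00, 0b11]
A_full = np.kron(A1, A1)                     # full 4x4 assignment matrix
A_red = A_full[np.ix_(observed, observed)]   # reduced 2x2 subspace
A_red = A_red / A_red.sum(axis=0)            # renormalize columns

p_noisy = np.array([0.52, 0.48])             # observed quasi-probabilities
p_ideal = np.linalg.solve(A_red, p_noisy)    # solve A~ p_ideal = p_noisy
```

Because the columns of the renormalized reduced matrix sum to one, the mitigated quasi-probabilities still sum to one.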

## Estimator & ZNE

We were then introduced to Zero-Noise Extrapolation (ZNE) error mitigation and explored how to implement it using the Qiskit Estimator.

With ZNE, a quantum program is altered to run at different effective levels of processor noise, and the results are extrapolated to an estimated value at the noiseless level. There are several methods for amplifying the noise and extrapolating the value, of which Qiskit implements the digital approach.
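The extrapolation step itself can be sketched in a few lines of numpy; the decay model and numbers below are synthetic, purely to show the mechanics:

```python
import numpy as np

# noise amplification factors at which the circuit was (hypothetically) run
scale_factors = np.array([1.0, 3.0, 5.0])

# synthetic expectation values following exp(-0.05 * scale) decay
# of a true noiseless value of 1.0
measured = 1.0 * np.exp(-0.05 * scale_factors)

# Richardson-style polynomial fit, extrapolated to scale factor 0
coeffs = np.polyfit(scale_factors, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)
# zne_estimate lands close to the noiseless value of 1.0
```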

# Lab2 - Quantum Machine Learning

The focus of the second lab is to apply the previously learned M3 error mitigation to various machine learning applications. Before we jump straight into error mitigation, however, the notebook goes over the basics of quantum machine learning, specifically data embedding and the ansatz.

In quantum machine learning, a quantum feature map transforms $\vec{x} \rightarrow | \phi(\vec{x})\rangle$ using a unitary transformation $U_\phi(\vec{x})$, which is typically a parameterized quantum circuit. Parameterized quantum circuits in quantum machine learning tend to be used both for encoding data and as the quantum model itself; their parameters are determined by the data being encoded and by the optimization process. Because they can generate a key subset of the states within the output Hilbert space, they can serve as machine learning models.

Throughout this notebook we will be using a RealAmplitudes 2-local ansatz:

At this point we are given sample data which we need to embed in a quantum circuit of our own design.

We then ran our circuit on an ideal and a noisy simulator to see the effects of noise on our output. We use this as a baseline measurement, then work on incorporating error mitigation to see the overall impact.

Adding in the M3 mitigation similarly to the first notebook:

Once we run this mitigated approach on the noisy backend we can then compare the approaches and see the accuracy differences between them all:

Fidelity is a measure of the closeness of two quantum states. Given two states $|\psi\rangle = U|0\rangle$ and $|\varphi\rangle = V|0\rangle$ generated by the unitaries $U$ and $V$, the fidelity is defined as

$\left|\langle \psi \mid \varphi \rangle\right|^2 = \left|\langle 0 \mid U^{\dagger} V \mid 0\rangle \right|^2$

And this can be implemented in code by the following circuit:

## Quantum Kernels & QSVMs

Now that we have a way to implement error mitigation, and a way to calculate the resulting fidelity, we can explore quantum kernels.

Machine learning algorithms can map the input dataset to a higher-dimensional feature space using a kernel function, $k(\vec{x}_i, \vec{x}_j) = \langle f(\vec{x}_i), f(\vec{x}_j) \rangle$, where $k$ is the kernel function, $\vec{x}_i, \vec{x}_j$ are $n$-dimensional inputs, and $f$ is a map from the $n$-dimensional to an $m$-dimensional space. Since our data is finite, the quantum kernel can be expressed as a matrix

$K_{ij}=|\langle\phi^{\dagger}(\vec{x}_j)|\phi(\vec{x}_i)\rangle|^2$

and since we can express the quantum kernel as a parameterized circuit with $n$ qubits, this becomes:

$|\langle\phi^{\dagger}(\vec{x}_j)|\phi(\vec{x}_i)\rangle|^2 = |\langle 0^{\otimes n}|U^{\dagger}_{\phi(\vec{x}_j)} U_{\phi(\vec{x}_i)}|0^{\otimes n} \rangle |^2$

The code to implement this is straightforward, leveraging Sampler much as we have done so far.

And lastly we can check the accuracy of our training:

# Lab3 - Quantum Optimization

The third notebook builds on the first two and explores optimization through the Traveling Salesman Problem (TSP).

After a quick explanation of the TSP, the first part of our challenge is to model the situation as a graph.

We skip over the code here as we are more interested in the quantum approach, but we work through the example classically to have a known shortest path to compare against in our quantum application.
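For completeness, a brute-force classical baseline on a toy instance (the challenge's own graph and weights differ; the 4-node distance matrix below is made up): enumerate every tour from a fixed start and keep the cheapest.

```python
from itertools import permutations

# symmetric distance matrix for a 4-node toy graph
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]

def tour_cost(tour):
    """Cost of visiting the nodes in order and returning to the start."""
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# fix node 0 as the start and try every ordering of the rest
best_tour = min((list(p) for p in permutations(range(1, 4))),
                key=lambda rest: tour_cost([0] + rest))
best_cost = tour_cost([0] + best_tour)
# best_cost == 18, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0
```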

This is not all that is required, however. We also need to convert this quadratic program into a form that can be used in a quantum circuit. Enter the Ising Hamiltonian. An Ising Hamiltonian is a representation of the energy of a particular system. This allows us to use eigensolvers to find the minimum energy, which represents the shortest-path solution of our original graph problem.

We can get the Ising representation of our quadratic using built-in functions of Qiskit.
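Under the hood the conversion is the substitution $x_i = (1 - z_i)/2$, mapping binary variables onto Pauli-Z eigenvalues $\pm 1$. A hand-rolled numpy sketch on a tiny, made-up QUBO (the challenge instead uses Qiskit's built-in conversion):

```python
import numpy as np
from itertools import product

# toy QUBO: minimize x0 + x1 - 2*x0*x1 over x in {0,1}^2
Q = np.array([[1.0, -1.0],
              [-1.0, 1.0]])          # objective = x^T Q x

def qubo_value(x):
    x = np.array(x)
    return float(x @ Q @ x)

def ising_value(z):
    # substitute x_i = (1 - z_i)/2, with z_i in {-1, +1}
    x = (1 - np.array(z)) / 2
    return float(x @ Q @ x)

# both formulations agree on every assignment
for bits in product([0, 1], repeat=2):
    z = tuple(1 - 2 * b for b in bits)
    assert abs(qubo_value(bits) - ising_value(z)) < 1e-12
```

The minimum energy of the Ising form therefore coincides with the minimum of the original QUBO.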

## Reference Classical Eigensolver

Again, we will first try to solve this representation classically to compare against our original baseline.

So far so good, we have the same shortest path solution.

Now let’s run this on a quantum computer and see what we end up with.

## Parameterized Quantum Circuits

Our QUBO example above is, by definition, unconstrained. The only requirement is that the variables are binary; in the case of TSP, either 1 or 0, visited or not. Most of the problems we are interested in solving will not be as simple and will include additional constraints. These constraints can be incorporated into our QUBO approach, however, by introducing penalties into the objective function.
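A tiny numpy sketch of the penalty idea (the numbers are illustrative): to enforce "exactly one $x_i$ is 1", add $\lambda(\sum_i x_i - 1)^2$ to the objective with a penalty weight $\lambda$ large relative to the costs.

```python
import numpy as np
from itertools import product

costs = np.array([3.0, 1.0, 2.0])    # linear objective to minimize
lam = 10.0                            # penalty weight, large vs. the costs

def penalized(x):
    """Objective plus a quadratic penalty for violating sum(x) == 1."""
    x = np.array(x)
    return float(costs @ x + lam * (x.sum() - 1) ** 2)

best = min(product([0, 1], repeat=3), key=penalized)
# the penalty drives the minimum to the feasible one-hot choice (0, 1, 0),
# i.e. the cheapest single selection
```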

We will explore this by creating a problem specific parameterized quantum circuit for our TSP problem.

The challenge explores a paper and its approach to optimizing a VQE run by breaking down the overall solution space into multiple single-solution problem spaces - one corresponding to each constraint.

In short, since our challenge example is a 3-node graph, we can look at the solution sub-spaces in three separate groupings.

### First case - W state

To start, we need to come up with a quantum circuit that generates the three qubit W-state.

When run on a quantum computer, we would expect outcomes 1, 2, and 4 - representing 001, 010, and 100 - to appear with equal, or close to equal, probability: our required W state.

We then continue to follow the paper’s routine and build out the TSP problem’s second constraint in circuit.

Holistically, what we are trying to do is take each of the W states (three in total from above) and build out each state’s solution space. In other words, we are building out the hypothetical for each selected W state, all within a single quantum circuit.

Working with the W state code above, we can continue:

Which ends up looking like the following circuit for 3 constraints.

### Second case - second constraint

Now that we have a good understanding of how to combine solution states on the first constraint, let’s move on to combining this with the second constraint solutions.

Here, unlike the first PQC, we need to add more correlations among the qubits, since we will be mapping more variables across the whole matrix representation. Since we have variables that appear in both the first and second lines, we can no longer realize the constraints with tensor products alone; instead, we will entangle the parameterized W gates using CNOT gates.

Let’s start building the second PQC similarly to the first and start with defining our L-shaped constraint.

Next, we need to encode the remaining constraints, excluding the ones we have already applied. Since the constraints for the remaining part are already determined, they can be realized in a similar way to the previous problem: by applying the corresponding CNOT gates followed by the parameterized W-state gates on the qubits mapped to those variables.

Then bridging the two sections together we get our full second constraint circuit.

Finally, we can solve our second-constraint circuit and list out the optimized parameters.

Where this all gets tied together is in comparing it to the first model in terms of convergence. We can see that the more optimized solution set converges more quickly.

### Third case - all constraints

Now we can continue and expand to all constraints. We consider all constraints so as to completely exclude the infeasible answers. The set of basis states of our new quantum state then includes only feasible answers. Unlike the previous two cases, where we saw convergence toward the optimal solution, here we should see an immediate, optimal result.

We will approach this by treating the quantum circuit as a permutation matrix and recursively build on blocks we’ve already established.

We are pointed again to the referenced paper on how to approach the solution.

Using our existing blocks and the referenced approach we can put it to code.

And the beauty of the circuit above is that it represents the superposition of the six feasible answers. Running our third model through a VQE runtime, we can clearly see that while our first two models converged toward the right answer, the third model represents only feasible answers and so starts immediately in the minimal solution space.

# Summary

All in all, this was a great event as usual. The cohesive narrative, and the way the exercises built on one another, allowed us to experiment with the Qiskit Runtime primitives in the context of a challenge many of us have seen before - namely optimization. I also enjoyed the comparison of the first, second, and third models’ efficiency in the optimization notebook, which clearly illustrated the concepts of the reference paper and made them “make sense”.

Thanks folks, until next time!