QAOA with Classiq & NVIDIA's CUDA-Q
Here we show how you can create a QAOA circuit using Classiq and execute it using CUDA-Q.
After visiting GTC, I was amazed by how powerful NVIDIA GPUs are for quantum computing applications and how strong the CUDA-Q offering is. Apart from a high-level programming language, CUDA-Q has many of the components needed to run (hybrid) quantum algorithms efficiently on hardware and simulators.
So, what would it look like if we combined the high-level language and compiler of Classiq with CUDA-Q for efficient execution?
For this use case I will take the QAOA algorithm. Classiq has shown that its compiler can generate very efficient QAOA circuits compared to other tools. On top of that, creating QAOA circuits in Classiq's high-level language is a breeze, and no quantum computing knowledge is required.
The complete notebook that I will use can be found here; in this blog post I will only look at small parts to outline the concept.
The whole flow will look like this:
Problem description
For this example, I will use the max-cut problem. If you are unfamiliar with the problem, here is a description.
We will use networkx to create the graph in Python like this:
import networkx as nx

# Build the graph for the max-cut example: 5 nodes and 6 edges
G = nx.Graph()
nodes = [0, 1, 2, 3, 4]
edges = [[0, 1], [1, 2], [2, 3], [3, 0], [2, 4], [3, 4]]
G.add_nodes_from(nodes)
G.add_edges_from(edges)
Create a max-cut problem instance using networkx
To create a max-cut problem model from this graph we will use Pyomo. If you are unfamiliar with Pyomo: it is a Python library for describing optimization problems such as this one. It was not developed as a quantum computing tool, so no quantum computing knowledge is required to use it. I will use the Pyomo model from the public Classiq Library, which describes the problem like this:
import pyomo.environ as pyo

def arithmetic_eq(x1: int, x2: int) -> int:
    # 1 when the two binary variables differ, i.e. the edge is cut
    return x1 * (1 - x2) + x2 * (1 - x1)

# we define a function which returns the pyomo model for a graph input
def maxcut(graph: nx.Graph) -> pyo.ConcreteModel:
    model = pyo.ConcreteModel()
    model.x = pyo.Var(graph.nodes, domain=pyo.Binary)
    model.cost = pyo.Objective(
        expr=sum(
            arithmetic_eq(model.x[node1], model.x[node2])
            for (node1, node2) in graph.edges
        ),
        sense=pyo.maximize,
    )
    return model
Pyomo model to describe a max-cut problem
When you now call this maxcut(G) function, you will get a Pyomo model that describes the optimization problem for graph G.
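To make the later snippets self-contained, the model for our graph is simply stored in a variable (the name maxcut_model matches the synthesis snippet in the next section):
maxcut_model = maxcut(G)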
Note that there are also other ways to create a QAOA circuit in Classiq that will give the programmer more flexibility on the circuit design while still being a high-level abstraction. An example for the max-cut problem can be found here.
Circuit generation
Now that we have the problem, we need to create a QAOA circuit to solve it. To do this, we can simply pass the Pyomo model to the Classiq synthesis engine and generate a circuit. This can be done with just a few lines of code:
# CombinatorialProblem and show are imported from the Classiq SDK
NUM_LAYERS = 3  # number of QAOA layers (p); pick what fits your problem
combi = CombinatorialProblem(pyo_model=maxcut_model, num_layers=NUM_LAYERS)
qprog = combi.get_qprog()  # the synthesized QAOA quantum program
show(qprog)  # open the circuit in the Classiq visualizer
Here the qprog variable holds the QAOA circuit that solves this max-cut problem.
Translate into CUDA-Q
Unfortunately, exporting a CUDA-Q kernel from Classiq is not natively supported. We therefore need to transform the QASM 3.0 code generated by Classiq into a CUDA-Q kernel. I will not describe in detail how that can be done here; have a look at the full notebook here to see the code. It is a very simple translation that only considers the gates used in this specific circuit, so it is not a generic translation by any means.
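To give an idea of what the target of that translation looks like, below is a minimal, hand-written sketch of a one-layer QAOA kernel for this 5-node graph in CUDA-Q's Python API. It is only an illustration of the kernel shape, not the notebook's translation code, and it uses the textbook QAOA construction (CX-RZ-CX for each ZZ term, an RX mixer) rather than the exact gate sequence Classiq emits.
import cudaq

@cudaq.kernel
def qaoa_kernel(parameters: list[float]):
    # 5 qubits, one per graph node
    q = cudaq.qvector(5)
    h(q)  # start in the uniform superposition

    gamma = parameters[0]
    beta = parameters[1]

    # Cost layer: one ZZ interaction per edge, written as CX - RZ - CX
    edge_a = [0, 1, 2, 3, 2, 3]
    edge_b = [1, 2, 3, 0, 4, 4]
    for k in range(6):
        x.ctrl(q[edge_a[k]], q[edge_b[k]])
        rz(2.0 * gamma, q[edge_b[k]])
        x.ctrl(q[edge_a[k]], q[edge_b[k]])

    # Mixer layer
    for i in range(5):
        rx(2.0 * beta, q[i])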
(GPU) Execution
To use the CUDA-Q observe function in the optimization loop, we also need to provide the Hamiltonian of the problem. Luckily, there is a helper function in the Classiq SDK, pyo_model_to_hamiltonian(), that can extract the Hamiltonian from the Pyomo model. The Hamiltonian format used by Classiq is not the same as the format that NVIDIA requires, so there is a small translation function in the main notebook.
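As an illustration of what such a translation could look like, here is a sketch that builds a CUDA-Q spin operator from a list of (Pauli string, coefficient) pairs. The input format is an assumption made for readability; the actual object returned by pyo_model_to_hamiltonian() differs, so check the notebook for the real conversion.
import cudaq
from cudaq import spin

def pauli_terms_to_spin_op(terms):
    # terms: e.g. [("ZZIII", 0.5), ("IIZZI", 0.5)] - an assumed, simplified format
    op = None
    for pauli_string, coeff in terms:
        term = coeff
        for qubit, p in enumerate(pauli_string):
            if p == "X":
                term = term * spin.x(qubit)
            elif p == "Y":
                term = term * spin.y(qubit)
            elif p == "Z":
                term = term * spin.z(qubit)
            elif p == "I":
                term = term * spin.i(qubit)
        op = term if op is None else op + term
    return op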
With this, we have the quantum circuit and the Hamiltonian whose expectation value we would like to minimize. The final step is to pass all of this to CUDA-Q for execution and optimization. The code below is taken from the CUDA-Q examples.
import cudaq
import numpy as np

parameter_count = 2 * NUM_LAYERS  # two angles (gamma, beta) per QAOA layer

optimizer = cudaq.optimizers.NelderMead()
optimizer.initial_parameters = np.random.uniform(-np.pi / 8.0, np.pi / 8.0,
                                                 parameter_count)
print("Initial parameters = ", optimizer.initial_parameters)

# Define the objective, return `<state(params) | H | state(params)>`
def objective(parameters):
    return cudaq.observe(qaoa_kernel, hamiltonian, parameters).expectation()

# Optimize!
optimal_expectation, optimal_parameters = optimizer.optimize(
    dimensions=parameter_count, function=objective)
This will efficiently run the complete program on a GPU if you have one available; it is also possible to use a CPU simulator if you do not.
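For completeness, the simulator backend is selected with cudaq.set_target before running the optimization loop, and the optimized angles can then be sampled to read out candidate cuts. A small sketch (the shots count is an arbitrary choice):
cudaq.set_target("nvidia")     # GPU-accelerated statevector simulator
# cudaq.set_target("qpp-cpu")  # CPU simulator if no GPU is available

# Sample the kernel at the optimized angles; the most frequent bitstrings
# correspond to good cuts of the graph.
counts = cudaq.sample(qaoa_kernel, optimal_parameters, shots_count=1000)
print(counts)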
This shows how easily you can combine the power of high-level quantum programming with Classiq and the accelerated execution environment of NVIDIA.