diff --git a/docs/tutorials/ai-transpiler-introduction.ipynb b/docs/tutorials/ai-transpiler-introduction.ipynb
index 146e9c769c0..53e782b9694 100644
--- a/docs/tutorials/ai-transpiler-introduction.ipynb
+++ b/docs/tutorials/ai-transpiler-introduction.ipynb
@@ -1,95 +1,80 @@
{
"cells": [
{
- "attachments": {},
"cell_type": "markdown",
- "id": "f59032f0-f29a-4e52-9cab-855ed6f86b00",
+ "id": "meta-header",
"metadata": {},
"source": [
"---\n",
"title: Qiskit AI-powered transpiler service introduction\n",
- "description: In this notebook, we will explore the key benefits of Qiskit AI-powered transpiler service and how it compares to traditional methods.\n",
+ "description: Learn how the AI transpiler compares to standard transpilation using mirror circuits.\n",
"---\n",
"\n",
"\n",
- "{/* cspell:ignore fontsize idxmin */}\n",
+ "{/* cspell:ignore fontsize fontweight steelblue AITI */}\n",
"\n",
"# Qiskit AI-powered transpiler service introduction\n",
- "*Estimated QPU usage: None (NOTE: This tutorial does not execute jobs because it is focused on transpilation)*\n",
- "\n",
- "## Background\n",
- "\n",
- "The **Qiskit AI-powered transpiler service (QTS)** introduces machine learning-based optimizations in both routing and synthesis passes. These AI modes have been designed to tackle the limitations of traditional transpilation, particularly for large-scale circuits and complex hardware topologies.\n",
- "\n",
- "As of **July 2025**, the **Transpiler Service** has been migrated to the new IBM Quantum® Platform and is no longer available. For the latest updates about the status of the Transpiler Service, please refer to the [transpiler service documentation](/docs/guides/qiskit-transpiler-service). You can still use the AI transpiler locally, similar to standard Qiskit transpilation. Simply replace `generate_preset_pass_manager()` with `generate_ai_pass_manager()`. This function constructs a pass manager that integrates the AI-powered routing and synthesis passes directly into your local transpilation workflow.\n",
- "\n",
- "### Key features of AI passes\n",
- "\n",
- "- Routing passes: AI-powered routing can dynamically adjust qubit paths based on the specific circuit and backend, reducing the need for excessive SWAP gates.\n",
- " - `AIRouting`: Layout selection and circuit routing\n",
- "\n",
- "- Synthesis passes: AI techniques optimize the decomposition of multi-qubit gates, minimizing the number of two-qubit gates, which are typically more error-prone.\n",
- " - `AICliffordSynthesis`: Clifford gate synthesis\n",
- " - `AILinearFunctionSynthesis`: Linear function circuit synthesis\n",
- " - `AIPermutationSynthesis`: Permutation circuit synthesis\n",
- " - `AIPauliNetworkSynthesis`: Pauli Network circuit synthesis (only available in the Qiskit Transpiler Service, not in local environment)\n",
- "\n",
- "- Comparison with traditional transpilation: The standard Qiskit transpiler is a robust tool that can handle a broad spectrum of quantum circuits effectively. However, when circuits grow larger in scale or hardware configurations become more complex, AI passes can deliver additional optimization gains. By using learned models for routing and synthesis, QTS further refines circuit layouts and reduces overhead for challenging or large-scale quantum tasks.\n",
+ "*Usage estimate: 5 minutes on IBM Heron (NOTE: This is an estimate only. Your runtime may vary.)*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "learning-outcomes",
+ "metadata": {},
+ "source": [
+ "## Learning outcomes\n",
+ "After going through this tutorial, users should understand:\n",
+ "- How to use the AI-powered transpiler service (`generate_ai_pass_manager`) as a drop-in replacement for the standard transpiler\n",
+ "- How the AI transpiler compares to the default transpiler in terms of two-qubit depth, gate count, and transpilation time\n",
+ "- How to use mirror circuits to evaluate transpilation quality through hardware execution\n",
"\n",
+ "## Prerequisites\n",
+ "We recommend familiarity with the following topics before starting this tutorial:\n",
+ "- [Transpile circuits](https://quantum.cloud.ibm.com/docs/en/guides/transpile)\n",
+ "- [Configure preset pass managers](https://quantum.cloud.ibm.com/docs/en/guides/transpile-with-pass-managers)\n",
+ "- [AI-powered transpiler passes](https://quantum.cloud.ibm.com/docs/en/guides/ai-transpiler-passes)\n",
"\n",
- "This tutorial evaluates the AI modes using both routing and synthesis passes, comparing the results to traditional transpilation to highlight where AI offers performance gains.\n",
"\n",
- "For more details on the available AI passes, see the [AI passes documentation](/docs/guides/ai-transpiler-passes).\n",
+ "## Background\n",
"\n",
+ "The **Qiskit AI-powered transpiler service (QTS)** introduces machine-learning-based transpilation passes that can produce shorter, more hardware-efficient circuits than traditional heuristic methods such as SABRE. Shorter circuits accumulate less noise, which directly improves result quality on real quantum hardware.\n",
"\n",
- "### Why use AI for quantum circuit transpilation?\n",
+ "In this tutorial we compare two transpilation strategies:\n",
"\n",
- "As quantum circuits grow in size and complexity, traditional transpilation methods struggle to optimize layouts and reduce gate counts efficiently. Larger circuits, particularly those involving hundreds of qubits, impose significant challenges on routing and synthesis due to device constraints, limited connectivity, and qubit error rates.\n",
+ "| Strategy | API |\n",
+ "|-|-|\n",
+ "| **Default** | `generate_preset_pass_manager(optimization_level=3, ...)` |\n",
+ "| **AI** | `generate_ai_pass_manager(optimization_level=1, ai_optimization_level=3, ...)` |\n",
"\n",
- "This is where AI-powered transpilation offers a potential solution. By leveraging machine learning techniques, the AI-powered transpiler in Qiskit can make smarter decisions about qubit routing and gate synthesis, leading to better optimization of large-scale quantum circuits.\n",
+ "We measure three metrics for each strategy: **two-qubit gate depth**, **total gate count**, and **transpilation runtime**.\n",
"\n",
- "### Brief benchmarking results\n",
- "\n",
+ "### AI transpiler benchmarks\n",
"\n",
+ "In benchmarking tests, the AI transpiler consistently produced shallower, higher-quality circuits than the standard Qiskit transpiler. For these tests, we used Qiskit's default pass manager strategy, configured with `generate_preset_pass_manager`. While this default strategy is often effective, it can struggle with larger or more complex circuits. By contrast, AI-powered passes achieved an average 24% reduction in two-qubit gate counts and a 36% reduction in circuit depth for large circuits (100+ qubits) when transpiling to the heavy-hex topology of IBM Quantum hardware. For more information on these benchmarks, refer to this [blog](https://www.ibm.com/quantum/blog/qiskit-performance).\n",
"\n",
- "In benchmarking tests, the AI transpiler consistently produced shallower, higher-quality circuits compared to the standard Qiskit transpiler. For these tests, we used Qiskit’s default pass manager strategy, configured with [`generate_preset_passmanager`]. While this default strategy is often effective, it can struggle with larger or more complex circuits. By contrast, AI-powered passes achieved an average 24% reduction in two-qubit gate counts and a 36% reduction in circuit depth for large circuits (100+ qubits) when transpiling to the heavy-hex topology of IBM Quantum hardware. For more information on these benchmarks, refer to this [blog.](https://www.ibm.com/quantum/blog/qiskit-performance)\n",
+ "\n",
"\n",
"This tutorial explores the key benefits of AI passes and how they compare to traditional methods."
]
},
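The claim that shorter circuits accumulate less noise can be made concrete with a back-of-envelope model. The sketch below is illustrative only and is not part of this notebook's workflow; the 1% error rate and the gate counts are made-up values, and real devices have correlated errors and readout noise that this ignores.

```python
def rough_fidelity(n_gates: int, p: float = 0.01) -> float:
    """Crude fidelity estimate: if each gate independently succeeds
    with probability (1 - p), fidelity decays as (1 - p)**n_gates."""
    return (1 - p) ** n_gates


# A 24% cut in two-qubit gate count (the benchmark figure above)
# noticeably raises the estimated fidelity of a 200-gate circuit:
baseline = rough_fidelity(200)  # ~0.134
reduced = rough_fidelity(int(200 * 0.76))  # 152 gates, ~0.217
print(f"baseline ~{baseline:.3f}, with 24% fewer gates ~{reduced:.3f}")
```

Even this crude model shows why depth and gate-count reductions translate into measurably better results on hardware.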
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "2aa75e36-471f-49aa-8478-134f13e3630b",
- "metadata": {
- "tags": [
- "remove-cell"
- ]
- },
- "outputs": [],
- "source": [
- "# This cell is hidden from users;\n",
- "# it just disables a linting rule.\n",
- "# ruff: noqa: F811"
- ]
- },
{
"cell_type": "markdown",
- "id": "4a781d15-0953-4af6-b581-ea6cb3a74228",
+ "id": "requirements",
"metadata": {},
"source": [
"## Requirements\n",
"\n",
- "Before starting this tutorial, ensure that you have the following installed:\n",
+ "Before starting this tutorial, be sure you have the following installed:\n",
"\n",
- "* Qiskit SDK v1.0 or later, with [visualization](/docs/api/qiskit/visualization) support\n",
- "* Qiskit Runtime (`pip install qiskit-ibm-runtime`) v0.22 or later\n",
- "* Qiskit IBM® Transpiler with AI local mode(`pip install 'qiskit-ibm-transpiler[ai-local-mode]'`)"
+ "- Qiskit SDK v2.0 or later, with [visualization](/docs/api/qiskit/visualization) support\n",
+ "- Qiskit Runtime (`pip install qiskit-ibm-runtime`) v0.22 or later\n",
+ "- Qiskit IBM Transpiler with AI local mode (`pip install 'qiskit-ibm-transpiler[ai-local-mode]'`)\n",
+ "- Qiskit Aer (`pip install qiskit-aer`)"
]
},
{
"cell_type": "markdown",
- "id": "c7c26e24-329b-4283-9cc0-67a241807049",
+ "id": "setup-header",
"metadata": {},
"source": [
"## Setup"
@@ -97,54 +82,28 @@
},
{
"cell_type": "code",
- "execution_count": 2,
- "id": "2c462d48-ae45-4528-9b09-cebc869a6812",
+ "execution_count": 1,
+ "id": "setup-code",
"metadata": {},
"outputs": [],
"source": [
"from qiskit import QuantumCircuit\n",
- "from qiskit.circuit.library import efficient_su2, PermutationGate\n",
- "from qiskit.synthesis.qft import synth_qft_full\n",
- "from qiskit.circuit.random import random_circuit, random_clifford_circuit\n",
- "from qiskit.transpiler import generate_preset_pass_manager, CouplingMap\n",
- "from qiskit_ibm_runtime import QiskitRuntimeService\n",
+ "from qiskit.circuit.random import random_circuit\n",
+ "from qiskit.transpiler import generate_preset_pass_manager\n",
+ "from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2\n",
"from qiskit_ibm_transpiler import generate_ai_pass_manager\n",
- "from qiskit.synthesis.permutation import (\n",
- " synth_permutation_depth_lnn_kms,\n",
- " synth_permutation_basic,\n",
- ")\n",
+ "from qiskit_aer import AerSimulator\n",
+ "from qiskit_aer.noise import NoiseModel, depolarizing_error\n",
"import matplotlib.pyplot as plt\n",
"import pandas as pd\n",
- "import numpy as np\n",
"import time\n",
"import logging\n",
"\n",
"seed = 42\n",
"\n",
"\n",
- "# Used for generating permutation circuits in part two for comparison\n",
- "def generate_permutation_circuit(width, pattern):\n",
- " circuit = QuantumCircuit(width)\n",
- " circuit.append(\n",
- " PermutationGate(pattern=pattern),\n",
- " qargs=range(width),\n",
- " )\n",
- " return circuit\n",
- "\n",
- "\n",
- "# Creates a Bernstein-Vazirani circuit given the number of qubits\n",
- "def create_bv_circuit(num_qubits):\n",
- " qc = QuantumCircuit(num_qubits, num_qubits - 1)\n",
- " qc.x(num_qubits - 1)\n",
- " qc.h(qc.qubits)\n",
- " for i in range(num_qubits - 1):\n",
- " qc.cx(i, num_qubits - 1)\n",
- " qc.h(qc.qubits[:-1])\n",
- " return qc\n",
- "\n",
- "\n",
- "# Transpile a circuit with a given pass manager and return metrics\n",
"def transpile_with_metrics(pass_manager, circuit):\n",
+ " \"\"\"Transpile a circuit and return the result along with key metrics.\"\"\"\n",
" start = time.time()\n",
" qc_out = pass_manager.run(circuit)\n",
" elapsed = time.time() - start\n",
@@ -155,27 +114,117 @@
" return qc_out, {\n",
" \"depth_2q\": depth_2q,\n",
" \"gate_count\": gate_count,\n",
- " \"time_s\": elapsed,\n",
+ " \"time_s\": round(elapsed, 3),\n",
" }\n",
"\n",
"\n",
- "# Used for collecting metrics for part 3 of synthesis methods\n",
- "def synth_transpile_with_metrics(qc, pm, pattern_id, method):\n",
- " start = time.time()\n",
- " qc = pm.run(qc)\n",
- " elapsed = time.time() - start\n",
- "\n",
- " return {\n",
- " \"Pattern\": pattern_id,\n",
- " \"Method\": method,\n",
- " \"Depth (2Q)\": qc.depth(lambda x: x.operation.num_qubits == 2),\n",
- " \"Gates\": qc.size(),\n",
- " \"Time (s)\": elapsed,\n",
- " }\n",
+ "def remap_to_contiguous(tqc):\n",
+ " \"\"\"Remap a transpiled circuit to use contiguous qubit indices.\n",
"\n",
+ " Transpiled circuits target specific physical qubits (e.g., qubits 45 and 67)\n",
+ " on a large backend. This remaps them to 0, 1, 2, ... so Aer only\n",
+ " simulates the active qubits.\"\"\"\n",
+ " active = sorted(\n",
+ " {tqc.find_bit(q).index for inst in tqc.data for q in inst.qubits}\n",
+ " )\n",
+ " qubit_map = {old: new for new, old in enumerate(active)}\n",
+ " new_qc = QuantumCircuit(len(active))\n",
+ " for inst in tqc.data:\n",
+ " old_indices = [tqc.find_bit(q).index for q in inst.qubits]\n",
+ " new_qc.append(inst.operation, [qubit_map[i] for i in old_indices])\n",
+ " return new_qc\n",
+ "\n",
+ "\n",
+ "def build_mirror_circuit(tqc, simulate=True):\n",
+ " \"\"\"Build a mirror circuit: U followed by U-dagger, with measurements.\n",
+ "\n",
+ " The expected output is always |0...0>, so measuring the survival\n",
+ " probability reveals how much noise each transpilation strategy adds.\n",
+ "\n",
+ " Args:\n",
+ " tqc: A transpiled circuit.\n",
+ " simulate: If True (default), remap to contiguous qubits so Aer\n",
+ " only simulates the active qubits. If False, keep the full\n",
+ " physical layout for hardware execution.\"\"\"\n",
+ " if simulate:\n",
+ " tqc = remap_to_contiguous(tqc)\n",
+ " mirror = tqc.compose(tqc.inverse())\n",
+ " mirror.measure_all()\n",
+ " return mirror\n",
+ "\n",
+ "\n",
+ "def summary_table(df):\n",
+ " \"\"\"Display a summary table with mean +/- stdev for each metric,\n",
+ " plus the mean percentage improvement of AI over Default.\"\"\"\n",
+ " metrics = [\n",
+ " (\"Depth 2Q\", \"Depth 2Q (Default)\", \"Depth 2Q (AI)\"),\n",
+ " (\"Gate Count\", \"Gate Count (Default)\", \"Gate Count (AI)\"),\n",
+ " (\"Time (s)\", \"Time (Default)\", \"Time (AI)\"),\n",
+ " ]\n",
+ " rows = []\n",
+ " for label, col_def, col_ai in metrics:\n",
+ " pct = (df[col_def] - df[col_ai]) / df[col_def] * 100\n",
+ " rows.append(\n",
+ " {\n",
+ " \"Metric\": label,\n",
+ " \"Default (mean +/- std)\": f\"{df[col_def].mean():.1f} +/- {df[col_def].std():.1f}\",\n",
+ " \"AI (mean +/- std)\": f\"{df[col_ai].mean():.1f} +/- {df[col_ai].std():.1f}\",\n",
+ " \"AI % improvement\": f\"{pct.mean():+.1f}% +/- {pct.std():.1f}%\",\n",
+ " }\n",
+ " )\n",
+ " return pd.DataFrame(rows).set_index(\"Metric\")\n",
+ "\n",
+ "\n",
+ "def plot_metrics_and_pct(df, title_prefix):\n",
+ " \"\"\"Plot metric comparisons and percentage improvement of AI over Default.\"\"\"\n",
+ " metrics = [\n",
+ " (\"Depth 2Q (Default)\", \"Depth 2Q (AI)\", \"Two-Qubit Depth\"),\n",
+ " (\"Gate Count (Default)\", \"Gate Count (AI)\", \"Gate Count\"),\n",
+ " (\"Time (Default)\", \"Time (AI)\", \"Transpilation Time\"),\n",
+ " ]\n",
+ "\n",
+ " # Row 1: raw metric comparison\n",
+ " fig, axs = plt.subplots(1, 3, figsize=(21, 5))\n",
+ " fig.suptitle(\n",
+ " f\"{title_prefix}: Metric Comparison\",\n",
+ " fontsize=15,\n",
+ " fontweight=\"bold\",\n",
+ " y=1.02,\n",
+ " )\n",
+ " for ax, (col_def, col_ai, label) in zip(axs, metrics):\n",
+ " ax.plot(df[\"Qubits\"], df[col_def], \"o-\", label=\"Default\")\n",
+ " ax.plot(df[\"Qubits\"], df[col_ai], \"s-\", label=\"AI\")\n",
+ " ax.set_title(label)\n",
+ " ax.set_xlabel(\"Number of Qubits\")\n",
+ " ax.set_ylabel(label)\n",
+ " ax.legend()\n",
+ " plt.tight_layout()\n",
+ " plt.show()\n",
+ "\n",
+ " # Row 2: percentage improvement\n",
+ " fig, axs = plt.subplots(1, 3, figsize=(21, 5))\n",
+ " fig.suptitle(\n",
+ " f\"{title_prefix}: % Improvement of AI over Default\",\n",
+ " fontsize=15,\n",
+ " fontweight=\"bold\",\n",
+ " y=1.02,\n",
+ " )\n",
+ " for ax, (col_def, col_ai, label) in zip(axs, metrics):\n",
+ " pct = (df[col_def] - df[col_ai]) / df[col_def] * 100\n",
+ " ax.axhline(\n",
+ " 0, color=\"#1f77b4\", linewidth=2, label=\"Default (baseline)\"\n",
+ " )\n",
+ " ax.plot(df[\"Qubits\"], pct, \"s-\", color=\"#ff7f0e\", label=\"AI\")\n",
+ " ax.fill_between(df[\"Qubits\"], 0, pct, alpha=0.15, color=\"#ff7f0e\")\n",
+ " ax.set_title(label)\n",
+ " ax.set_xlabel(\"Number of Qubits\")\n",
+ " ax.set_ylabel(\"% Improvement\")\n",
+ " ax.legend()\n",
+ " plt.tight_layout()\n",
+ " plt.show()\n",
"\n",
- "# Ignore logs like \"INFO:qiskit_ibm_transpiler.wrappers.ai_local_synthesis:Running Linear Functions AI synthesis on local mode\"\n",
"\n",
+ "# Suppress verbose AI transpiler logs\n",
"logging.getLogger(\n",
" \"qiskit_ibm_transpiler.wrappers.ai_local_synthesis\"\n",
").setLevel(logging.WARNING)"
@@ -183,218 +232,142 @@
},
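Once a mirror circuit runs, scoring it reduces to a one-line calculation on the measurement counts: because the ideal output of U followed by U-dagger is always the all-zeros bitstring, the survival probability is simply its relative frequency. A minimal sketch with made-up counts (the helper name and the numbers are illustrative, not part of the notebook's code):

```python
def survival_probability(counts: dict, num_qubits: int) -> float:
    """Fraction of shots that returned the all-zeros bitstring,
    the ideal output of any mirror circuit U followed by U-dagger."""
    shots = sum(counts.values())
    return counts.get("0" * num_qubits, 0) / shots


# Hypothetical counts from a 3-qubit mirror circuit (1000 shots total):
counts = {"000": 880, "010": 70, "101": 50}
print(survival_probability(counts, 3))  # -> 0.88
```

A noisier transpilation (deeper circuit, more gates) shows up directly as a lower survival probability.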
{
"cell_type": "markdown",
- "id": "ba7568f8-50c9-47b4-acc0-33ea34f5fca0",
+ "id": "sim-header",
"metadata": {},
"source": [
- "# Part I. Qiskit patterns\n",
- "\n",
- "Let's now see how to use the AI transpiler service with a simple quantum circuit, using Qiskit patterns. The key is creating a `PassManager` with `generate_ai_pass_manager()` instead of the standard `generate_preset_pass_manager()`."
+ "## Small-scale simulator example"
]
},
{
"cell_type": "markdown",
- "id": "5ba1bb22-272f-4f8f-ae78-7c3d1cdaacc6",
+ "id": "sim-step1-header",
"metadata": {},
"source": [
- "## Step 1: Map classical inputs to a quantum problem\n",
- "\n",
- "In this section, we will test the AI transpiler on the `efficient_su2` circuit, a widely used hardware-efficient ansatz. This circuit is particularly relevant for variational quantum algorithms (for example, VQE) and quantum machine-learning tasks, making it an ideal test case for assessing transpilation performance.\n",
+ "### Step 1: Map classical inputs to a quantum problem\n",
"\n",
- "The `efficient_su2` circuit consists of alternating layers of single-qubit rotations and entangling gates like CNOTs. These layers enable flexible exploration of the quantum state space while keeping the gate depth manageable. By optimizing this circuit, we aim to reduce gate count, improve fidelity, and minimize noise. This makes it a strong candidate for testing the AI transpiler’s efficiency."
+ "We generate 20 random circuits of depth 4, with qubit counts ranging from 6 to 25. These circuits serve as our test cases for comparing the two transpilation strategies."
]
},
{
"cell_type": "code",
- "execution_count": 3,
- "id": "c6e9c2c0-e02c-4276-bae8-d5692e60b6b8",
+ "execution_count": 2,
+ "id": "sim-step1-code",
"metadata": {},
"outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Created 20 circuits with qubit counts: [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]\n"
+ ]
+ },
{
"data": {
"text/plain": [
- ""
+ ""
]
},
- "execution_count": 3,
+ "execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "# For our transpilation, we will use a large circuit of 101 qubits\n",
- "qc = efficient_su2(90, entanglement=\"circular\", reps=1).decompose()\n",
+ "num_circuits_sim = 20\n",
+ "depth_sim = 4\n",
+ "qubit_range_sim = list(range(6, 26))\n",
+ "\n",
+ "circuits_sim = [\n",
+ " # Use only two-qubit gates, since these exercise the transpiler's routing and optimization.\n",
+ " random_circuit(\n",
+ " num_qubits=n,\n",
+ " depth=depth_sim,\n",
+ " max_operands=2,\n",
+ " num_operand_distribution={2: 1},\n",
+ " seed=seed + i,\n",
+ " )\n",
+ " for i, n in enumerate(qubit_range_sim)\n",
+ "]\n",
"\n",
- "# Draw a smaller version of the circuit to get a visual representation\n",
- "qc_small = efficient_su2(5, entanglement=\"circular\", reps=1).decompose()\n",
- "qc_small.draw(output=\"mpl\")"
+ "print(\n",
+ " f\"Created {len(circuits_sim)} circuits with qubit counts: {qubit_range_sim}\"\n",
+ ")\n",
+ "circuits_sim[0].draw(output=\"mpl\", fold=-1)"
]
},
{
"cell_type": "markdown",
- "id": "6c7c76f7-c376-47e9-bc9c-dbe32b2c89b7",
+ "id": "sim-step2-header",
"metadata": {},
"source": [
- "## Step 2: Optimize problem for quantum hardware execution\n",
- "\n",
- "### Choose a backend\n",
+ "### Step 2: Optimize problem for quantum hardware execution\n",
"\n",
- "For this example, we select the least busy operational IBM Quantum backend that is not a simulator and has at least 100 qubits:\n",
- "\n",
- "**Note:** Since the least-busy backend can change over time, different devices might be selected for different runs. Device-specific properties, such as coupling maps, can lead to differences in the transpiled circuits."
+ "Transpiled circuits target specific physical qubits on the full device. Before simulating, we remap each circuit to contiguous qubit indices with the `remap_to_contiguous` helper from Setup, so that Aer only simulates the active qubits. The hardware section later keeps the full physical layout."
]
},
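The index bookkeeping inside `remap_to_contiguous` can be shown in isolation. If a transpiled circuit touches only physical qubits 45, 67, and 12 (made-up numbers for illustration), sorting the active set and enumerating it yields the contiguous relabeling:

```python
# Physical qubits actually used by a hypothetical transpiled circuit
active = sorted({45, 67, 12})  # -> [12, 45, 67]

# Same mapping construction as in remap_to_contiguous: old physical index -> new index
qubit_map = {old: new for new, old in enumerate(active)}
print(qubit_map)  # -> {12: 0, 45: 1, 67: 2}
```

With this map in hand, every instruction's qubit arguments can be rewritten onto a circuit of only `len(active)` qubits, which is what keeps the local simulations small.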
{
"cell_type": "code",
"execution_count": null,
- "id": "c6b6e55e-9b70-4c94-8bbf-5ea47d0510ff",
+ "id": "sim-step2-backend",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Using backend: ibm_torino\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"service = QiskitRuntimeService()\n",
"backend = service.least_busy(\n",
- " operational=True, simulator=False, min_num_qubits=100\n",
- ")\n",
- "cm = backend.coupling_map\n",
- "print(f\"Using backend: {backend.name}\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "7b02350f-998e-40cf-a79e-2e6182b5a875",
- "metadata": {},
- "source": [
- "### Create AI and traditional pass managers\n",
- "To evaluate the effectiveness of the AI transpiler, we will perform two transpilation runs. First, we will transpile the circuit using the AI transpiler. Then, we will run a comparison by transpiling the same circuit without the AI transpiler, using traditional methods. Both transpilation processes will use the same coupling map from the chosen backend and the optimization level set to 3 for a fair comparison.\n",
- "\n",
- "Both of these methods reflect the standard approach to create `PassManager` instances to transpile circuits in Qiskit."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "a1aa25dd-41a9-4416-a959-44f28af613c8",
- "metadata": {},
- "outputs": [],
- "source": [
- "pm_ai = generate_ai_pass_manager(\n",
- " optimization_level=3,\n",
- " ai_optimization_level=3,\n",
- " coupling_map=cm,\n",
- " include_ai_synthesis=True, # used for part 3 when comparing synthesis methods\n",
+ " min_num_qubits=100, operational=True, simulator=False\n",
")\n",
"\n",
- "pm_no_ai = generate_preset_pass_manager(\n",
+ "pm_default_sim = generate_preset_pass_manager(\n",
" optimization_level=3,\n",
- " coupling_map=cm,\n",
- " seed_transpiler=seed, # note that the AI pass manager does not currently support seeding\n",
+ " backend=backend,\n",
+ " seed_transpiler=seed,\n",
")"
]
},
- {
- "cell_type": "markdown",
- "id": "a06d6144-3445-4446-a3e1-18ca78a1173c",
- "metadata": {},
- "source": [
- "Transpile the circuits and record the times."
- ]
- },
{
"cell_type": "code",
- "execution_count": 6,
- "id": "fb5167bd-35f0-432f-af6d-023c70783d20",
+ "execution_count": 4,
+ "id": "sim-step2-transpile",
"metadata": {},
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Standard transpilation: Depth (2q) 95, Gate count 458, Time 0.04650712013244629\n",
- "AI transpilation : Depth (2q) 90, Gate count 456, Time 0.9342479705810547\n"
- ]
- }
- ],
- "source": [
- "# Transpile using standard (non-AI) pass manager\n",
- "_, metrics_no_ai = transpile_with_metrics(pm_no_ai, qc)\n",
- "print(\n",
- " f\"Standard transpilation: Depth (2q) {metrics_no_ai['depth_2q']}, \"\n",
- " f\"Gate count {metrics_no_ai['gate_count']}, Time {metrics_no_ai['time_s']}\"\n",
- ")\n",
- "\n",
- "# Transpile using AI pass manager\n",
- "_, metrics_ai = transpile_with_metrics(pm_ai, qc)\n",
- "print(\n",
- " f\"AI transpilation : Depth (2q) {metrics_ai['depth_2q']}, \"\n",
- " f\"Gate count {metrics_ai['gate_count']}, Time {metrics_ai['time_s']}\"\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d934ebd2-e594-4076-8b21-822087df01ea",
- "metadata": {},
- "source": [
- "In this test, we compare the performance of the AI transpiler and the standard transpilation method on the efficient_su2 circuit. The AI transpiler achieves a noticeably shallower circuit depth while maintaining a similar gate count.\n",
- "\n",
- "- **Circuit depth:** The AI transpiler produces a circuit with lower two-qubit depth. This is expected, as the AI passes are trained to optimize depth by learning qubit interaction patterns and exploiting hardware connectivity more effectively than rule-based heuristics.\n",
- "\n",
- "- **Gate count:** The total gate count remains similar between the two methods. This aligns with expectations since the standard SABRE-based transpilation explicitly minimizes swap count, which dominates gate overhead. The AI transpiler instead prioritizes overall depth and may occasionally trade off a few additional gates for a shorter execution path.\n",
- "\n",
- "- **Transpilation time:** The AI transpiler takes longer to run than the standard method. This is due to the added computational cost of invoking learned models during routing and synthesis. In contrast, the SABRE-based transpiler is now significantly faster after being rewritten and optimized in Rust, providing highly efficient heuristic routing at scale.\n",
- "\n",
- "It is important to note that these results are based on just one circuit. To obtain a comprehensive understanding of how the AI transpiler compares to traditional methods, it is necessary to test a variety of circuits. The performance of QTS can vary greatly depending on the type of circuit being optimized. For a broader comparison, refer to the benchmarks above or visit the [blog.](https://www.ibm.com/quantum/blog/qiskit-performance)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c8a55587-abf6-4096-85fd-2702a077ae75",
- "metadata": {},
- "source": [
- "## Step 3: Execute using Qiskit primitives\n",
- "As this tutorial focuses on transpilation, no experiments will be executed on the quantum device. The goal is to leverage the optimizations from Step 2 to obtain a transpiled circuit with reduced depth or gate count."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "8d0cfca9-be4e-40ab-ab98-d7899bb8b3fa",
- "metadata": {},
- "source": [
- "## Step 4: Post-process and return result in desired classical format\n",
- "Since there is no execution for this notebook, there are no results to post-process."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c82277b2-22e9-44fe-886e-e8ceb2178278",
- "metadata": {},
- "source": [
- "# Part II. Analyze and benchmark the transpiled circuits\n",
- "\n",
- "In this section, we will demonstrate how to analyze the transpiled circuit and benchmark it against the original version in more detail. We will focus on metrics such as circuit depth, gate count, and transpilation time to assess the effectiveness of the optimization. Additionally, we will discuss how the results may differ across various circuit types, offering insights into the broader performance of the transpiler across different scenarios."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "ee24725b-64c9-4d6a-aa97-5a3502b0982a",
- "metadata": {},
- "outputs": [
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "87d664f0db954c1794e49f7dbec7302a",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Fetching 4 files: 0%| | 0/4 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"name": "stdout",
"output_type": "stream",
"text": [
- "Completed transpilation for Random\n",
- "Completed transpilation for Clifford\n",
- "Completed transpilation for QFT\n",
- "Completed transpilation for BV\n"
+ "[ 6q] Default: depth= 30, gates= 197, time=0.029s | AI: depth= 23, gates= 190, time=0.672s\n",
+ "[ 7q] Default: depth= 13, gates= 135, time=0.014s | AI: depth= 14, gates= 169, time=0.219s\n",
+ "[ 8q] Default: depth= 17, gates= 204, time=0.016s | AI: depth= 17, gates= 272, time=0.315s\n",
+ "[ 9q] Default: depth= 21, gates= 234, time=0.012s | AI: depth= 19, gates= 258, time=0.388s\n",
+ "[10q] Default: depth= 29, gates= 278, time=0.016s | AI: depth= 20, gates= 312, time=0.408s\n",
+ "[11q] Default: depth= 20, gates= 261, time=0.016s | AI: depth= 21, gates= 301, time=0.345s\n",
+ "[12q] Default: depth= 25, gates= 346, time=0.016s | AI: depth= 22, gates= 389, time=0.403s\n",
+ "[13q] Default: depth= 38, gates= 402, time=0.022s | AI: depth= 29, gates= 458, time=0.462s\n",
+ "[14q] Default: depth= 27, gates= 450, time=0.025s | AI: depth= 27, gates= 483, time=0.513s\n",
+ "[15q] Default: depth= 23, gates= 427, time=0.015s | AI: depth= 26, gates= 527, time=0.465s\n",
+ "[16q] Default: depth= 31, gates= 599, time=0.026s | AI: depth= 29, gates= 582, time=0.437s\n",
+ "[17q] Default: depth= 26, gates= 567, time=0.024s | AI: depth= 29, gates= 580, time=0.381s\n",
+ "[18q] Default: depth= 36, gates= 575, time=0.034s | AI: depth= 27, gates= 677, time=0.414s\n",
+ "[19q] Default: depth= 35, gates= 694, time=0.031s | AI: depth= 27, gates= 715, time=0.515s\n",
+ "[20q] Default: depth= 38, gates= 754, time=0.043s | AI: depth= 32, gates= 865, time=0.574s\n",
+ "[21q] Default: depth= 51, gates= 748, time=0.045s | AI: depth= 33, gates= 898, time=0.516s\n",
+ "[22q] Default: depth= 49, gates= 842, time=0.033s | AI: depth= 40, gates= 869, time=0.531s\n",
+ "[23q] Default: depth= 52, gates= 870, time=0.036s | AI: depth= 37, gates= 925, time=0.572s\n",
+ "[24q] Default: depth= 63, gates=1066, time=0.059s | AI: depth= 37, gates=1077, time=0.665s\n",
+ "[25q] Default: depth= 37, gates= 804, time=0.041s | AI: depth= 32, gates= 895, time=0.592s\n"
]
},
{
@@ -418,185 +391,118 @@
" \n",
" \n",
" | \n",
- " Circuit | \n",
- " Depth 2Q (No AI) | \n",
- " Gate Count (No AI) | \n",
- " Time (No AI) | \n",
- " Depth 2Q (AI) | \n",
- " Gate Count (AI) | \n",
- " Time (AI) | \n",
+ " Default (mean +/- std) | \n",
+ " AI (mean +/- std) | \n",
+ " AI % improvement | \n",
"
\n",
- " \n",
- "
\n",
" \n",
- " | 0 | \n",
- " Random | \n",
- " 37 | \n",
- " 221 | \n",
- " 0.039347 | \n",
- " 24 | \n",
- " 181 | \n",
- " 0.773718 | \n",
+ " Metric | \n",
+ " | \n",
+ " | \n",
+ " | \n",
"
\n",
+ " \n",
+ " \n",
" \n",
- " | 1 | \n",
- " Clifford | \n",
- " 36 | \n",
- " 232 | \n",
- " 0.036633 | \n",
- " 43 | \n",
- " 267 | \n",
- " 1.097431 | \n",
+ " Depth 2Q | \n",
+ " 33.0 +/- 12.9 | \n",
+ " 27.1 +/- 7.0 | \n",
+ " +13.5% +/- 15.9% | \n",
"
\n",
" \n",
- " | 2 | \n",
- " QFT | \n",
- " 165 | \n",
- " 924 | \n",
- " 0.077458 | \n",
- " 130 | \n",
- " 913 | \n",
- " 3.660771 | \n",
+ " Gate Count | \n",
+ " 522.6 +/- 268.6 | \n",
+ " 572.1 +/- 280.1 | \n",
+ " -11.3% +/- 9.6% | \n",
"
\n",
" \n",
- " | 3 | \n",
- " BV | \n",
- " 65 | \n",
- " 155 | \n",
- " 0.024993 | \n",
- " 70 | \n",
- " 155 | \n",
- " 0.345522 | \n",
+ " Time (s) | \n",
+ " 0.0 +/- 0.0 | \n",
+ " 0.5 +/- 0.1 | \n",
+ " -1797.9% +/- 606.3% | \n",
"
\n",
" \n",
"\n",
""
],
"text/plain": [
- " Circuit Depth 2Q (No AI) Gate Count (No AI) Time (No AI) \\\n",
- "0 Random 37 221 0.039347 \n",
- "1 Clifford 36 232 0.036633 \n",
- "2 QFT 165 924 0.077458 \n",
- "3 BV 65 155 0.024993 \n",
- "\n",
- " Depth 2Q (AI) Gate Count (AI) Time (AI) \n",
- "0 24 181 0.773718 \n",
- "1 43 267 1.097431 \n",
- "2 130 913 3.660771 \n",
- "3 70 155 0.345522 "
+ " Default (mean +/- std) AI (mean +/- std) AI % improvement\n",
+ "Metric \n",
+ "Depth 2Q 33.0 +/- 12.9 27.1 +/- 7.0 +13.5% +/- 15.9%\n",
+ "Gate Count 522.6 +/- 268.6 572.1 +/- 280.1 -11.3% +/- 9.6%\n",
+ "Time (s) 0.0 +/- 0.0 0.5 +/- 0.1 -1797.9% +/- 606.3%"
]
},
- "execution_count": 7,
+ "execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "# Circuits to benchmark\n",
- "seed = 42\n",
- "circuits = [\n",
- " {\n",
- " \"name\": \"Random\",\n",
- " \"qc\": random_circuit(num_qubits=30, depth=10, seed=seed),\n",
- " },\n",
- " {\n",
- " \"name\": \"Clifford\",\n",
- " \"qc\": random_clifford_circuit(\n",
- " num_qubits=40, num_gates=200, seed=seed\n",
- " ),\n",
- " },\n",
- " {\n",
- " \"name\": \"QFT\",\n",
- " \"qc\": synth_qft_full(num_qubits=20, do_swaps=False).decompose(),\n",
- " },\n",
- " {\n",
- " \"name\": \"BV\",\n",
- " \"qc\": create_bv_circuit(40),\n",
- " },\n",
- "]\n",
+ "results_sim = []\n",
"\n",
- "results = []\n",
+ "for i, qc in enumerate(circuits_sim):\n",
+ " n = qubit_range_sim[i]\n",
"\n",
- "# Run the transpilation for each circuit and store the results\n",
- "for circuit in circuits:\n",
- " qc_no_ai, metrics_no_ai = transpile_with_metrics(pm_no_ai, circuit[\"qc\"])\n",
- " qc_ai, metrics_ai = transpile_with_metrics(pm_ai, circuit[\"qc\"])\n",
+ " qc_default, m_default = transpile_with_metrics(pm_default_sim, qc)\n",
"\n",
- " print(\"Completed transpilation for\", circuit[\"name\"])\n",
+ " # Create a fresh AI pass manager each iteration to avoid stale layout state\n",
+ " pm_ai = generate_ai_pass_manager(\n",
+ " optimization_level=1,\n",
+ " ai_optimization_level=3,\n",
+ " backend=backend,\n",
+ " )\n",
+ " qc_ai, m_ai = transpile_with_metrics(pm_ai, qc)\n",
"\n",
- " results.append(\n",
+ " results_sim.append(\n",
" {\n",
- " \"Circuit\": circuit[\"name\"],\n",
- " \"Depth 2Q (No AI)\": metrics_no_ai[\"depth_2q\"],\n",
- " \"Gate Count (No AI)\": metrics_no_ai[\"gate_count\"],\n",
- " \"Time (No AI)\": metrics_no_ai[\"time_s\"],\n",
- " \"Depth 2Q (AI)\": metrics_ai[\"depth_2q\"],\n",
- " \"Gate Count (AI)\": metrics_ai[\"gate_count\"],\n",
- " \"Time (AI)\": metrics_ai[\"time_s\"],\n",
+ " \"Qubits\": n,\n",
+ " \"Depth 2Q (Default)\": m_default[\"depth_2q\"],\n",
+ " \"Depth 2Q (AI)\": m_ai[\"depth_2q\"],\n",
+ " \"Gate Count (Default)\": m_default[\"gate_count\"],\n",
+ " \"Gate Count (AI)\": m_ai[\"gate_count\"],\n",
+ " \"Time (Default)\": m_default[\"time_s\"],\n",
+ " \"Time (AI)\": m_ai[\"time_s\"],\n",
" }\n",
" )\n",
"\n",
- "df = pd.DataFrame(results)\n",
- "df"
+ " print(\n",
+ " f\"[{n:2d}q] Default: depth={m_default['depth_2q']:3d}, gates={m_default['gate_count']:4d}, time={m_default['time_s']:.3f}s | AI: depth={m_ai['depth_2q']:3d}, gates={m_ai['gate_count']:4d}, time={m_ai['time_s']:.3f}s\"\n",
+ " )\n",
+ "\n",
+ "df_sim = pd.DataFrame(results_sim)\n",
+ "summary_table(df_sim)"
]
},
{
"cell_type": "markdown",
- "id": "061d85cf-3841-4ed3-bd0d-cd950564efb7",
+ "id": "sim-step2-table-note",
"metadata": {},
"source": [
- "Average percentage reduction for each metric. Positive are improvements, negative are degradations."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "70cf9c05-62a3-4049-9712-319902107ba6",
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Average reduction in depth: 11.88%\n",
- "Average reduction in gate count: 1.04%\n",
- "Average reduction in transpilation time: -3193.95%\n"
- ]
- }
- ],
- "source": [
- "# Average reduction from non-AI to AI transpilation as a percentage\n",
- "avg_reduction_depth = (\n",
- " (df[\"Depth 2Q (No AI)\"] - df[\"Depth 2Q (AI)\"]).mean()\n",
- " / df[\"Depth 2Q (No AI)\"].mean()\n",
- " * 100\n",
- ")\n",
- "avg_reduction_gates = (\n",
- " (df[\"Gate Count (No AI)\"] - df[\"Gate Count (AI)\"]).mean()\n",
- " / df[\"Gate Count (No AI)\"].mean()\n",
- " * 100\n",
- ")\n",
- "avg_reduction_time = (\n",
- " (df[\"Time (No AI)\"] - df[\"Time (AI)\"]).mean()\n",
- " / df[\"Time (No AI)\"].mean()\n",
- " * 100\n",
- ")\n",
+ "The summary table shows the mean and standard deviation of each metric across all 20 circuits, along with the average percentage improvement of the AI transpiler over the default. Positive values indicate the AI transpiler produced better results; negative values indicate the default was better.\n",
"\n",
- "print(f\"Average reduction in depth: {avg_reduction_depth:.2f}%\")\n",
- "print(f\"Average reduction in gate count: {avg_reduction_gates:.2f}%\")\n",
- "print(f\"Average reduction in transpilation time: {avg_reduction_time:.2f}%\")"
+ "For this small-scale example, the AI transpiler achieves roughly 10% lower two-qubit depth on average, but at the cost of roughly 10% higher gate count. This highlights a key trade-off when choosing between the two strategies: the AI transpiler prioritizes depth reduction (fewer sequential layers of two-qubit gates), while the default transpiler (SABRE) prioritizes minimizing total gate count (fewer SWAP insertions). Depending on your application, one metric may matter more than the other."
]
},
{
"cell_type": "code",
- "execution_count": 9,
- "id": "79b8d5d9-0f9d-42ca-9583-8bec17430014",
+ "execution_count": 5,
+ "id": "sim-step2-plot",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- ""
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
]
},
"metadata": {},
@@ -604,310 +510,265 @@
}
],
"source": [
- "fig, axs = plt.subplots(1, 3, figsize=(21, 6))\n",
- "df.plot(\n",
- " x=\"Circuit\",\n",
- " y=[\"Depth 2Q (No AI)\", \"Depth 2Q (AI)\"],\n",
- " kind=\"bar\",\n",
- " ax=axs[0],\n",
- ")\n",
- "axs[0].set_title(\"Circuit Depth Comparison\")\n",
- "axs[0].set_ylabel(\"Depth\")\n",
- "axs[0].set_xlabel(\"Circuit\")\n",
- "axs[0].tick_params(axis=\"x\", rotation=45)\n",
- "df.plot(\n",
- " x=\"Circuit\",\n",
- " y=[\"Gate Count (No AI)\", \"Gate Count (AI)\"],\n",
- " kind=\"bar\",\n",
- " ax=axs[1],\n",
- ")\n",
- "axs[1].set_title(\"Gate Count Comparison\")\n",
- "axs[1].set_ylabel(\"Gate Count\")\n",
- "axs[1].set_xlabel(\"Circuit\")\n",
- "axs[1].tick_params(axis=\"x\", rotation=45)\n",
- "df.plot(x=\"Circuit\", y=[\"Time (No AI)\", \"Time (AI)\"], kind=\"bar\", ax=axs[2])\n",
- "axs[2].set_title(\"Time Comparison\")\n",
- "axs[2].set_ylabel(\"Time (seconds)\")\n",
- "axs[2].set_xlabel(\"Circuit\")\n",
- "axs[2].tick_params(axis=\"x\", rotation=45)\n",
- "fig.suptitle(\n",
- " \"Benchmarking AI transpilation vs Non-AI transpilation for various circuits\"\n",
- ")\n",
- "\n",
- "plt.tight_layout()\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "345022d3-e302-47e6-9453-9261136923a7",
- "metadata": {},
- "source": [
- "The AI transpiler's performance varies significantly based on the type of circuit being optimized. In some cases, it achieves notable reductions in circuit depth and gate count compared to the standard transpiler. However, these improvements often come with a substantial increase in runtime.\n",
- "\n",
- "For certain types of circuits, the AI transpiler may yield slightly better results in terms of circuit depth but may also lead to an increase in gate count and a significant runtime penalty. These observations suggest that the AI transpiler's benefits are not uniform across all circuit types. Instead, its effectiveness depends on the specific characteristics of the circuit, making it more suitable for some use cases than others."
+ "plot_metrics_and_pct(df_sim, \"Small-Scale Random Circuits\")"
]
},
{
"cell_type": "markdown",
- "id": "9e496e7a-64a8-46fd-b240-c494e7825bd2",
+ "id": "sim-step2-analysis",
"metadata": {},
"source": [
- "## When should users choose AI-powered transpilation?\n",
+ "**Two-qubit depth:** The AI transpiler generally produces circuits with lower two-qubit depth. Depth is one of the primary metrics the AI routing model is trained to optimize, and the improvement is visible across most circuit sizes, though SABRE does match or beat it on individual circuits.\n",
"\n",
- "The AI-powered transpiler in Qiskit excels in scenarios where traditional transpilation methods struggle, particularly with large-scale and complex quantum circuits. For circuits involving hundreds of qubits or those targeting hardware with intricate coupling maps, the AI transpiler offers superior optimization in terms of circuit depth, gate count, and runtime efficiency. In benchmarking tests, it has consistently outperformed traditional methods, delivering significantly shallower circuits and reducing gate counts, which are critical for enhancing performance and mitigating noise on real quantum hardware.\n",
+ "**Gate count:** The results are closely matched at this scale, with SABRE holding a slight edge overall. SABRE's routing heuristic is designed to minimize the number of inserted SWAP gates, which directly reduces gate count. At small circuit sizes, the difference is modest.\n",
"\n",
- "Users should consider AI-powered transpilation when working with:\n",
- "- Large circuits where traditional methods fail to efficiently handle the scale.\n",
- "- Complex hardware topologies where device connectivity and routing challenges arise.\n",
- "- Performance-sensitive applications where reducing circuit depth and improving fidelity are paramount."
+ "**Transpilation time:** SABRE's runtime is nearly constant regardless of qubit count. At this scale, circuit size is not the bottleneck, and SABRE's core routing logic is highly optimized (largely implemented in Rust). The AI transpiler takes noticeably longer and scales with circuit size, though the absolute times remain reasonable for interactive use."
]
},
{
"cell_type": "markdown",
- "id": "c345cb54-a838-427f-898f-51fb607da493",
+ "id": "sim-step3-header",
"metadata": {},
"source": [
- "# Part III. Explore AI-powered permutation network synthesis\n",
+ "### Step 3: Execute using Qiskit primitives\n",
"\n",
- "Permutation networks are foundational in quantum computing, particularly for systems constrained by restricted topologies. These networks facilitate long-range interactions by dynamically swapping qubits to mimic all-to-all connectivity on hardware with limited connectivity. Such transformations are essential for implementing complex quantum algorithms on near-term devices, where interactions often span beyond nearest neighbors.\n",
- "\n",
- "In this section, we highlight the synthesis of permutation networks as a compelling use case for the AI-powered transpiler in Qiskit. Specifically, the `AIPermutationSynthesis` pass leverages AI-driven optimization to generate efficient circuits for qubit permutation tasks. By contrast, generic synthesis approaches often struggle to balance gate count and circuit depth, especially in scenarios with dense qubit interactions or when attempting to achieve full connectivity.\n",
- "\n",
- "We will walk through a Qiskit patterns example showcasing the synthesis of a permutation network to achieve all-to-all connectivity for a set of qubits. We will compare the performance of `AIPermutationSynthesis` against the standard synthesis methods in Qiskit. This example will demonstrate how the AI transpiler optimizes for lower circuit depth and gate count, highlighting its advantages in practical quantum workflows. To activate the AI synthesis pass, we will use the `generate_ai_pass_manager()` function with the `include_ai_synthesis` parameter set to `True`."
+ "To evaluate the impact of transpilation on circuit fidelity, we build mirror circuits from the 10-qubit case and run them on the Aer simulator with a simple depolarizing noise model. The expected output of a mirror circuit is always the all-zeros bitstring, so the probability of measuring $|0\\rangle^{\\otimes n}$ tells us how well each transpilation strategy preserves fidelity."
]
},
{
- "cell_type": "markdown",
- "id": "76de0959-1eca-43d9-b8fe-f9aea9a122d8",
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "sim-step3-code",
"metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Default: depth 84, gates 278\n",
+ "AI: depth 70, gates 312\n"
+ ]
+ }
+ ],
"source": [
- "## Step 1: Map classical inputs to a quantum problem\n",
- "\n",
- "To represent a classical permutation problem on a quantum computer, we start by defining the structure of the quantum circuits. For this example:\n",
+ "# Use the 10-qubit circuit (index where qubits == 10)\n",
+ "idx_10q = qubit_range_sim.index(10)\n",
"\n",
- "1. Quantum circuit initialization:\n",
- " We allocate 27 qubits to match the backend we will use, which has 27 qubits.\n",
+ "qc_10q = circuits_sim[idx_10q]\n",
+ "qc_default_10q, _ = transpile_with_metrics(pm_default_sim, qc_10q)\n",
"\n",
- "2. Apply permutations:\n",
- " We generate ten random permutation patterns (`pattern_1` through `pattern_10`) using a fixed seed for reproducibility. Each permutation pattern is applied to a separate quantum circuit (`qc_1` through `qc_10`).\n",
+ "pm_ai = generate_ai_pass_manager(\n",
+ " optimization_level=1,\n",
+ " ai_optimization_level=3,\n",
+ " backend=backend,\n",
+ ")\n",
+ "qc_ai_10q, _ = transpile_with_metrics(pm_ai, qc_10q)\n",
"\n",
- "3. Circuit decomposition:\n",
- " Each permutation operation is decomposed into native gate sets compatible with the target quantum hardware. We analyze the depth and the number of two-qubit gates (nonlocal gates) for each decomposed circuit.\n",
+ "tqc_methods = {\n",
+ " \"Default\": qc_default_10q,\n",
+ " \"AI\": qc_ai_10q,\n",
+ "}\n",
"\n",
- "The results provide insight into the complexity of representing classical permutation problems on a quantum device, demonstrating the resource requirements for different permutation patterns."
+ "print(\n",
+ " f\"Default: depth {qc_default_10q.depth()}, gates {qc_default_10q.size()}\"\n",
+ ")\n",
+ "print(f\"AI: depth {qc_ai_10q.depth()}, gates {qc_ai_10q.size()}\")"
]
},
{
"cell_type": "code",
- "execution_count": 10,
- "id": "76a3e847-0808-4413-bd0c-c760cd2df3f4",
+ "execution_count": 7,
+ "id": "sim-step3-run",
"metadata": {},
"outputs": [
{
- "data": {
- "text/plain": [
- ""
- ]
- },
- "execution_count": 10,
- "metadata": {},
- "output_type": "execute_result"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Default P(|00...0>) = 0.8514 (8514/10000)\n",
+ "AI P(|00...0>) = 0.8207 (8207/10000)\n"
+ ]
}
],
"source": [
- "# Parameters\n",
- "width = 27\n",
- "num_circuits = 10\n",
+ "# Build a simple depolarizing noise model\n",
+ "noise_model = NoiseModel()\n",
+ "noise_model.add_all_qubit_quantum_error(\n",
+ " depolarizing_error(0.001, 1),\n",
+ " [\"sx\", \"x\", \"rz\"], # ~0.1% per 1Q gate\n",
+ ")\n",
+ "noise_model.add_all_qubit_quantum_error(\n",
+ " depolarizing_error(0.01, 2),\n",
+ " [\"cx\", \"ecr\"], # ~1% per 2Q gate\n",
+ ")\n",
"\n",
- "# Set random seed\n",
- "np.random.seed(seed)\n",
+ "aer_sim = AerSimulator(noise_model=noise_model)\n",
"\n",
+ "shots = 10000\n",
+ "survival_probs = {}\n",
"\n",
- "# Generate random patterns and circuits\n",
- "patterns = [\n",
- " np.random.permutation(width).tolist() for _ in range(num_circuits)\n",
- "]\n",
- "circuits = {\n",
- " f\"qc_{i}\": generate_permutation_circuit(width, pattern)\n",
- " for i, pattern in enumerate(patterns, start=1)\n",
- "}\n",
+ "for method, tqc in tqc_methods.items():\n",
+ " mirror = build_mirror_circuit(tqc, simulate=True)\n",
+ "\n",
+ " sampler = SamplerV2(mode=aer_sim)\n",
+ " job = sampler.run([mirror], shots=shots)\n",
+ " counts = job.result()[0].data.meas.get_counts()\n",
"\n",
- "# Display one of the circuits\n",
- "circuits[\"qc_1\"].decompose(reps=3).draw(output=\"mpl\", fold=-1)"
+ " all_zeros = \"0\" * mirror.num_qubits\n",
+ " survival = counts.get(all_zeros, 0) / shots\n",
+ " survival_probs[method] = survival\n",
+ " print(\n",
+ " f\"{method:8s} P(|00...0>) = {survival:.4f} ({counts.get(all_zeros, 0)}/{shots})\"\n",
+ " )"
]
},
{
"cell_type": "markdown",
- "id": "a8b79798-fa80-44d8-8a52-2d2a50e0c280",
+ "id": "sim-step3-analysis",
"metadata": {},
"source": [
- "## Step 2: Optimize problem for quantum hardware execution\n",
- "In this step, we proceed with optimization using the AI synthesis passes.\n",
- "\n",
- "For the AI synthesis passes, the `PassManager` requires only the coupling map of the backend. However, it is important to note that not all coupling maps are compatible; only those that the `AIPermutationSynthesis` pass has been trained on will work. Currently, the `AIPermutationSynthesis` pass supports blocks of sizes 65, 33, and 27 qubits. For this example we use a 27-qubit QPU.\n",
- "\n",
- "For comparison, we will evaluate the performance of AI synthesis against generic permutation synthesis methods in Qiskit, including:\n",
- "\n",
- "- `synth_permutation_depth_lnn_kms`: This method synthesizes a permutation circuit for a linear nearest-neighbor (LNN) architecture using the Kutin, Moulton, and Smithline (KMS) algorithm. It guarantees a circuit with a depth of at most $ n $ and a size of at most $ n(n-1)/2 $, where both depth and size are measured in terms of SWAP gates.\n",
- "\n",
- "- `synth_permutation_basic`: This is a straightforward implementation that synthesizes permutation circuits without imposing constraints on connectivity or optimization for specific architectures. It serves as a baseline for comparing performance with more advanced methods.\n",
- "\n",
- "Each of these methods represents a distinct approach to synthesizing permutation networks, providing a comprehensive benchmark against the AI-powered methods.\n",
- "\n",
- "For more details about synthesis methods in Qiskit, refer to the [Qiskit API documentation](/docs/api/qiskit/synthesis)."
+ "We ran both mirror circuits through the Aer simulator with a simple depolarizing noise model. The survival probability, defined as the fraction of shots that return the all-zeros bitstring, quantifies how much error each transpiled circuit accumulates under this noise model."
]
},
{
"cell_type": "markdown",
- "id": "b1733a10-c285-444e-af47-4a32329c5f7a",
+ "id": "sim-step4-header",
"metadata": {},
"source": [
- "Define the coupling map representing the 27-qubit QPU."
+ "### Step 4: Post-process and return result in desired classical format\n",
+ "\n",
+ "We extract the probability of measuring the all-zeros bitstring from both runs. A higher probability indicates better fidelity, meaning the transpiled circuit accumulated less error under noise."
]
},
{
"cell_type": "code",
- "execution_count": 11,
- "id": "84dff2c2-a496-4828-bb8e-08d373816a36",
+ "execution_count": 8,
+ "id": "sim-step4-code",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- ""
+ ""
]
},
- "execution_count": 11,
"metadata": {},
- "output_type": "execute_result"
+ "output_type": "display_data"
}
],
"source": [
- "coupling_map = [\n",
- " [1, 0],\n",
- " [2, 1],\n",
- " [3, 2],\n",
- " [3, 5],\n",
- " [4, 1],\n",
- " [6, 7],\n",
- " [7, 4],\n",
- " [7, 10],\n",
- " [8, 5],\n",
- " [8, 9],\n",
- " [8, 11],\n",
- " [11, 14],\n",
- " [12, 10],\n",
- " [12, 13],\n",
- " [12, 15],\n",
- " [13, 14],\n",
- " [16, 14],\n",
- " [17, 18],\n",
- " [18, 15],\n",
- " [18, 21],\n",
- " [19, 16],\n",
- " [19, 22],\n",
- " [20, 19],\n",
- " [21, 23],\n",
- " [23, 24],\n",
- " [25, 22],\n",
- " [25, 24],\n",
- " [26, 25],\n",
- "]\n",
- "CouplingMap(coupling_map).draw()"
+ "fig, ax = plt.subplots(figsize=(6, 4))\n",
+ "ax.bar(\n",
+ " survival_probs.keys(),\n",
+ " survival_probs.values(),\n",
+ " color=[\"steelblue\", \"coral\"],\n",
+ ")\n",
+ "ax.set_ylabel(\"P(|0...0>)\")\n",
+ "ax.set_title(\"Mirror Circuit Fidelity (10-qubit, Aer Simulator)\")\n",
+ "ax.set_ylim(0, 1)\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
]
},
{
"cell_type": "markdown",
- "id": "47bdb1f5-1fc6-46c4-8fc9-98d16a4d2529",
+ "id": "sim-step4-analysis",
"metadata": {},
"source": [
- "Transpile each of the permutation circuits using the AI synthesis passes and generic synthesis methods."
+ "In this case, the default transpiler achieves a higher survival probability despite having a deeper circuit. This is because it produced a circuit with fewer total gates, and under a uniform depolarizing noise model, total gate count has a more direct impact on accumulated error than depth alone. This trade-off is not universal. The relative importance of depth versus gate count depends on the magnitude of the difference in each metric, the noise characteristics of the hardware, and the structure of the circuit."
]
},
{
- "cell_type": "code",
- "execution_count": 12,
- "id": "128cc285-094a-4b07-a37d-8424a4003b2c",
+ "cell_type": "markdown",
+ "id": "hw-header",
"metadata": {},
- "outputs": [],
"source": [
- "results = []\n",
- "pm_no_ai_synth = generate_preset_pass_manager(\n",
- " coupling_map=cm,\n",
- " optimization_level=1, # set to 1 since we are using the synthesis methods\n",
- ")\n",
- "\n",
- "# Transpile and analyze all circuits\n",
- "for i, (qc_name, qc) in enumerate(circuits.items(), start=1):\n",
- " pattern = patterns[i - 1] # Get the corresponding pattern\n",
- "\n",
- " qc_depth_lnn_kms = synth_permutation_depth_lnn_kms(pattern)\n",
- " qc_basic = synth_permutation_basic(pattern)\n",
- "\n",
- " # AI synthesis\n",
- " results.append(\n",
- " synth_transpile_with_metrics(\n",
- " qc.decompose(reps=3),\n",
- " pm_ai,\n",
- " qc_name,\n",
- " \"AI\",\n",
- " )\n",
- " )\n",
- "\n",
- " # Depth-LNN-KMS Method\n",
- " results.append(\n",
- " synth_transpile_with_metrics(\n",
- " qc_depth_lnn_kms.decompose(reps=3),\n",
- " pm_no_ai_synth,\n",
- " qc_name,\n",
- " \"Depth-LNN-KMS\",\n",
- " )\n",
- " )\n",
- "\n",
- " # Basic Method\n",
- " results.append(\n",
- " synth_transpile_with_metrics(\n",
- " qc_basic.decompose(reps=3),\n",
- " pm_no_ai_synth,\n",
- " qc_name,\n",
- " \"Basic\",\n",
- " )\n",
- " )\n",
- "\n",
- "\n",
- "results_df = pd.DataFrame(results)"
+ "## Large-scale hardware example"
]
},
{
"cell_type": "markdown",
- "id": "42f80e32-60fd-46a8-a6b5-4bcadb15810a",
+ "id": "hw-steps-header",
"metadata": {},
"source": [
- "Record the metrics (depth, gate count, time) for each circuit after transpilation."
+ "### Steps 1-4\n",
+ "Here we combine the previous steps into a single workflow at a larger scale and run it on real quantum hardware. We keep each step in its own code cell so that its intermediate results are visible, as they would be in a real workflow, though the same steps could also be combined into a single block.\n",
+ "\n",
+ "We generate 25 random circuits with depth 8, where the number of qubits ranges from 26 to 50. We transpile all circuits with both strategies and collect the same metrics. Then we build mirror circuits from the 26-qubit case and submit them to the real backend. We use the smallest circuit size for the hardware run because mirror circuits double the gate count, and this basic fidelity test does not scale well on real hardware at larger qubit counts."
]
},
{
"cell_type": "code",
- "execution_count": 13,
- "id": "72ee8474-eea6-421a-9d7d-070587eaff71",
+ "execution_count": 9,
+ "id": "hw-step1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
- "\n",
- "=== Average Metrics ===\n",
- " Depth (2Q) Gates Time (s)\n",
- "Method \n",
- "AI 23.9 82.8 0.248\n",
- "Basic 29.8 91.0 0.012\n",
- "Depth-LNN-KMS 70.8 531.6 0.017\n",
- "\n",
- "Best Non-AI Method (based on least average depth): Basic\n",
- "\n",
- "=== Comparison of AI vs Best Non-AI Method ===\n"
+ "Created 25 circuits with qubit counts: [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# -------------------------Step 1-------------------------\n",
+ "num_circuits_hw = 25\n",
+ "depth_hw = 8\n",
+ "qubit_range_hw = list(range(26, 51))\n",
+ "\n",
+ "circuits_hw = [\n",
+ " # Use only two-qubit gates, since those exercise the transpiler's routing and optimization.\n",
+ " random_circuit(\n",
+ " num_qubits=n,\n",
+ " depth=depth_hw,\n",
+ " max_operands=2,\n",
+ " num_operand_distribution={2: 1},\n",
+ " seed=seed + i,\n",
+ " )\n",
+ " for i, n in enumerate(qubit_range_hw)\n",
+ "]\n",
+ "\n",
+ "print(\n",
+ " f\"Created {len(circuits_hw)} circuits with qubit counts: {qubit_range_hw}\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "hw-step2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[26q] Default: depth=155, gates=2420, time=0.072s | AI: depth=119, gates=2694, time=1.925s\n",
+ "[27q] Default: depth=146, gates=2421, time=0.070s | AI: depth=118, gates=2736, time=1.872s\n",
+ "[28q] Default: depth=147, gates=2706, time=0.082s | AI: depth=151, gates=3297, time=2.398s\n",
+ "[29q] Default: depth=190, gates=3038, time=0.088s | AI: depth=149, gates=3297, time=2.421s\n",
+ "[30q] Default: depth=174, gates=3027, time=0.079s | AI: depth=138, gates=3220, time=2.471s\n",
+ "[31q] Default: depth=170, gates=3247, time=0.101s | AI: depth=150, gates=3559, time=2.659s\n",
+ "[32q] Default: depth=152, gates=3366, time=0.095s | AI: depth=176, gates=4150, time=3.185s\n",
+ "[33q] Default: depth=197, gates=3487, time=0.101s | AI: depth=163, gates=3675, time=3.082s\n",
+ "[34q] Default: depth=224, gates=3861, time=0.110s | AI: depth=167, gates=4417, time=3.186s\n",
+ "[35q] Default: depth=212, gates=4051, time=0.119s | AI: depth=190, gates=4381, time=3.297s\n",
+ "[36q] Default: depth=208, gates=4189, time=0.102s | AI: depth=194, gates=5017, time=3.579s\n",
+ "[37q] Default: depth=225, gates=4222, time=0.120s | AI: depth=200, gates=4840, time=3.642s\n",
+ "[38q] Default: depth=194, gates=4112, time=0.117s | AI: depth=184, gates=4622, time=3.769s\n",
+ "[39q] Default: depth=201, gates=4597, time=0.112s | AI: depth=188, gates=5074, time=4.071s\n",
+ "[40q] Default: depth=215, gates=4951, time=0.129s | AI: depth=201, gates=5498, time=4.236s\n",
+ "[41q] Default: depth=253, gates=5322, time=0.138s | AI: depth=215, gates=5617, time=4.596s\n",
+ "[42q] Default: depth=188, gates=4639, time=0.119s | AI: depth=202, gates=5756, time=4.580s\n",
+ "[43q] Default: depth=227, gates=5093, time=0.140s | AI: depth=212, gates=6039, time=5.674s\n",
+ "[44q] Default: depth=299, gates=5835, time=0.176s | AI: depth=228, gates=6398, time=5.719s\n",
+ "[45q] Default: depth=223, gates=5666, time=0.154s | AI: depth=221, gates=6627, time=5.783s\n",
+ "[46q] Default: depth=260, gates=6124, time=0.157s | AI: depth=238, gates=6944, time=6.005s\n",
+ "[47q] Default: depth=301, gates=6691, time=0.169s | AI: depth=236, gates=7425, time=5.937s\n",
+ "[48q] Default: depth=280, gates=6664, time=0.161s | AI: depth=233, gates=7455, time=6.598s\n",
+ "[49q] Default: depth=263, gates=6228, time=0.192s | AI: depth=227, gates=7326, time=6.279s\n",
+ "[50q] Default: depth=330, gates=6969, time=0.195s | AI: depth=260, gates=7842, time=7.754s\n"
]
},
{
@@ -931,130 +792,201 @@
" \n",
" \n",
" | \n",
+ " Default (mean +/- std) | \n",
+ " AI (mean +/- std) | \n",
+ " AI % improvement | \n",
+ "
\n",
+ " \n",
" | Metric | \n",
- " AI | \n",
- " Basic | \n",
- " Improvement (AI vs Best Non-AI) | \n",
+ " | \n",
+ " | \n",
+ " | \n",
"
\n",
" \n",
" \n",
" \n",
- " | 0 | \n",
- " Depth (2Q) | \n",
- " 23.900 | \n",
- " 29.800 | \n",
- " -5.900 | \n",
+ " Depth 2Q | \n",
+ " 217.4 +/- 50.4 | \n",
+ " 190.4 +/- 38.5 | \n",
+ " +11.5% +/- 10.3% | \n",
"
\n",
" \n",
- " | 1 | \n",
- " Gates | \n",
- " 82.800 | \n",
- " 91.000 | \n",
- " -8.200 | \n",
+ " Gate Count | \n",
+ " 4517.0 +/- 1393.3 | \n",
+ " 5116.2 +/- 1588.8 | \n",
+ " -13.3% +/- 5.3% | \n",
"
\n",
" \n",
- " | 2 | \n",
- " Time (s) | \n",
- " 0.248 | \n",
- " 0.012 | \n",
- " 0.236 | \n",
+ " Time (s) | \n",
+ " 0.1 +/- 0.0 | \n",
+ " 4.2 +/- 1.6 | \n",
+ " -3198.3% +/- 453.5% | \n",
"
\n",
" \n",
"\n",
""
],
"text/plain": [
- " Metric AI Basic Improvement (AI vs Best Non-AI)\n",
- "0 Depth (2Q) 23.900 29.800 -5.900\n",
- "1 Gates 82.800 91.000 -8.200\n",
- "2 Time (s) 0.248 0.012 0.236"
+ " Default (mean +/- std) AI (mean +/- std) AI % improvement\n",
+ "Metric \n",
+ "Depth 2Q 217.4 +/- 50.4 190.4 +/- 38.5 +11.5% +/- 10.3%\n",
+ "Gate Count 4517.0 +/- 1393.3 5116.2 +/- 1588.8 -13.3% +/- 5.3%\n",
+ "Time (s) 0.1 +/- 0.0 4.2 +/- 1.6 -3198.3% +/- 453.5%"
]
},
- "execution_count": 13,
+ "execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "# Calculate averages for each metric\n",
- "average_metrics = results_df.groupby(\"Method\")[\n",
- " [\"Depth (2Q)\", \"Gates\", \"Time (s)\"]\n",
- "].mean()\n",
- "average_metrics = average_metrics.round(3) # Round to two decimal places\n",
- "print(\"\\n=== Average Metrics ===\")\n",
- "print(average_metrics)\n",
- "\n",
- "# Identify the best non-AI method based on least average depth\n",
- "non_ai_methods = [\n",
- " method for method in results_df[\"Method\"].unique() if method != \"AI\"\n",
- "]\n",
- "best_non_ai_method = average_metrics.loc[non_ai_methods][\n",
- " \"Depth (2Q)\"\n",
- "].idxmin()\n",
- "print(\n",
- " f\"\\nBest Non-AI Method (based on least average depth): {best_non_ai_method}\"\n",
+ "# -------------------------Step 2-------------------------\n",
+ "pm_default = generate_preset_pass_manager(\n",
+ " optimization_level=3,\n",
+ " backend=backend,\n",
+ " seed_transpiler=seed,\n",
")\n",
"\n",
- "# Compare AI to the best non-AI method\n",
- "ai_metrics = average_metrics.loc[\"AI\"]\n",
- "best_non_ai_metrics = average_metrics.loc[best_non_ai_method]\n",
- "\n",
- "comparison = {\n",
- " \"Metric\": [\"Depth (2Q)\", \"Gates\", \"Time (s)\"],\n",
- " \"AI\": [\n",
- " ai_metrics[\"Depth (2Q)\"],\n",
- " ai_metrics[\"Gates\"],\n",
- " ai_metrics[\"Time (s)\"],\n",
- " ],\n",
- " best_non_ai_method: [\n",
- " best_non_ai_metrics[\"Depth (2Q)\"],\n",
- " best_non_ai_metrics[\"Gates\"],\n",
- " best_non_ai_metrics[\"Time (s)\"],\n",
- " ],\n",
- " \"Improvement (AI vs Best Non-AI)\": [\n",
- " ai_metrics[\"Depth (2Q)\"] - best_non_ai_metrics[\"Depth (2Q)\"],\n",
- " ai_metrics[\"Gates\"] - best_non_ai_metrics[\"Gates\"],\n",
- " ai_metrics[\"Time (s)\"] - best_non_ai_metrics[\"Time (s)\"],\n",
- " ],\n",
- "}\n",
+ "results_hw = []\n",
+ "\n",
+ "for i, qc in enumerate(circuits_hw):\n",
+ " n = qubit_range_hw[i]\n",
+ "\n",
+ " qc_default, m_default = transpile_with_metrics(pm_default, qc)\n",
+ "\n",
+ " # Create a fresh AI pass manager each iteration to avoid stale layout state\n",
+ " pm_ai = generate_ai_pass_manager(\n",
+ " optimization_level=1,\n",
+ " ai_optimization_level=3,\n",
+ " backend=backend,\n",
+ " )\n",
+ " qc_ai, m_ai = transpile_with_metrics(pm_ai, qc)\n",
+ "\n",
+ " results_hw.append(\n",
+ " {\n",
+ " \"Qubits\": n,\n",
+ " \"Depth 2Q (Default)\": m_default[\"depth_2q\"],\n",
+ " \"Depth 2Q (AI)\": m_ai[\"depth_2q\"],\n",
+ " \"Gate Count (Default)\": m_default[\"gate_count\"],\n",
+ " \"Gate Count (AI)\": m_ai[\"gate_count\"],\n",
+ " \"Time (Default)\": m_default[\"time_s\"],\n",
+ " \"Time (AI)\": m_ai[\"time_s\"],\n",
+ " }\n",
+ " )\n",
+ "\n",
+ " print(\n",
+ " f\"[{n:2d}q] Default: depth={m_default['depth_2q']:3d}, gates={m_default['gate_count']:4d}, time={m_default['time_s']:.3f}s | AI: depth={m_ai['depth_2q']:3d}, gates={m_ai['gate_count']:4d}, time={m_ai['time_s']:.3f}s\"\n",
+ " )\n",
"\n",
- "comparison_df = pd.DataFrame(comparison)\n",
- "print(\"\\n=== Comparison of AI vs Best Non-AI Method ===\")\n",
- "comparison_df"
+ "df_hw = pd.DataFrame(results_hw)\n",
+ "summary_table(df_hw)"
]
},
{
- "cell_type": "markdown",
- "id": "e1ba3767-5ce1-4663-803b-73ccfc22f03b",
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "hw-step2-plot",
"metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
"source": [
- "The results demonstrate that the AI transpiler outperforms all other Qiskit synthesis methods for this set of random permutation circuits. Key findings include:\n",
- "\n",
- "1. Depth: The AI transpiler achieves the lowest average depth, indicating superior optimization of circuit layouts.\n",
- "2. Gate count: It significantly reduces the number of gates compared to other methods, improving execution fidelity and efficiency.\n",
- "3. Transpilation time: All methods run very quickly at this scale, making them practical for use. However, the AI transpiler does has a notable runtime increase compared to traditional methods due to the complexity of the AI models used.\n",
- "\n",
- "These results establish the AI transpiler as the most effective approach for this benchmark, particularly for depth and gate count optimization."
+ "plot_metrics_and_pct(df_hw, \"Large-Scale Random Circuits\")"
]
},
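The `plot_metrics_and_pct` helper is defined earlier in the notebook. As a sketch of the percentage-improvement metric such a helper might display, here is the computation on a tiny hypothetical frame (column names taken from the loop above; the data values are illustrative, not measured results):

```python
import pandas as pd

# Hypothetical mini-frame mirroring two of the columns built in the loop above.
df = pd.DataFrame(
    {
        "Depth 2Q (Default)": [120, 240],
        "Depth 2Q (AI)": [90, 168],
    }
)

# Percentage improvement of AI over the default, per row:
# positive values mean the AI-transpiled circuit is shallower.
pct = (
    (df["Depth 2Q (Default)"] - df["Depth 2Q (AI)"])
    / df["Depth 2Q (Default)"]
    * 100
)
print(pct.tolist())  # -> [25.0, 30.0]
```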
{
- "cell_type": "markdown",
- "id": "dbaab943-5fd7-4720-98bf-8602b2ab4473",
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "hw-step3",
"metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Mirror circuit (Default): depth 1564, gates 9708\n",
+ "Mirror circuit (AI): depth 1145, gates 11488\n",
+ "Job submitted: d7cagn15a5qc73dofeq0\n"
+ ]
+ }
+ ],
"source": [
- "Plot the results to compare the performance of the AI synthesis passes against the generic synthesis methods."
+ "# -------------------------Step 3-------------------------\n",
+ "# Build mirror circuits from the 26-qubit case\n",
+ "idx_26q = qubit_range_hw.index(26)\n",
+ "\n",
+ "qc_26q = circuits_hw[idx_26q]\n",
+ "qc_default_26q, _ = transpile_with_metrics(pm_default, qc_26q)\n",
+ "\n",
+ "pm_ai = generate_ai_pass_manager(\n",
+ " optimization_level=1,\n",
+ " ai_optimization_level=3,\n",
+ " backend=backend,\n",
+ ")\n",
+ "qc_ai_26q, _ = transpile_with_metrics(pm_ai, qc_26q)\n",
+ "\n",
+ "mirror_default_hw = build_mirror_circuit(qc_default_26q, simulate=False)\n",
+ "mirror_ai_hw = build_mirror_circuit(qc_ai_26q, simulate=False)\n",
+ "\n",
+ "# Re-transpile to basis gates (the inverse can introduce gates like sxdg)\n",
+ "pm_basis = generate_preset_pass_manager(\n",
+ " optimization_level=0,\n",
+ " backend=backend,\n",
+ ")\n",
+ "mirror_default_hw = pm_basis.run(mirror_default_hw)\n",
+ "mirror_ai_hw = pm_basis.run(mirror_ai_hw)\n",
+ "\n",
+ "print(\n",
+ " f\"Mirror circuit (Default): depth {mirror_default_hw.depth()}, gates {mirror_default_hw.size()}\"\n",
+ ")\n",
+ "print(\n",
+ " f\"Mirror circuit (AI): depth {mirror_ai_hw.depth()}, gates {mirror_ai_hw.size()}\"\n",
+ ")\n",
+ "\n",
+ "# Submit to real hardware\n",
+ "sampler_hw = SamplerV2(mode=backend)\n",
+ "sampler_hw.options.environment.job_tags = [\"TUT-AITI\"]\n",
+ "\n",
+ "shots_hw = 500000\n",
+ "job_hw = sampler_hw.run([mirror_default_hw, mirror_ai_hw], shots=shots_hw)\n",
+ "print(f\"Job submitted: {job_hw.job_id()}\")"
]
},
{
"cell_type": "code",
- "execution_count": 14,
- "id": "a326f268-0115-442c-8563-968676b66670",
+ "execution_count": 15,
+ "id": "hw-step4",
"metadata": {},
"outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Default P(|00...0>) = 0.0077 (3870/500000)\n",
+ "AI P(|00...0>) = 0.0001 (47/500000)\n"
+ ]
+ },
{
"data": {
"text/plain": [
- ""
+ ""
]
},
"metadata": {},
@@ -1062,81 +994,70 @@
}
],
"source": [
- "methods = results_df[\"Method\"].unique()\n",
- "\n",
- "fig, axs = plt.subplots(1, 3, figsize=(18, 5))\n",
- "\n",
- "# Pivot the DataFrame and reorder columns to ensure AI is first\n",
- "pivot_depth = results_df.pivot(\n",
- " index=\"Pattern\", columns=\"Method\", values=\"Depth (2Q)\"\n",
- ")[[\"AI\", \"Depth-LNN-KMS\", \"Basic\"]]\n",
- "pivot_gates = results_df.pivot(\n",
- " index=\"Pattern\", columns=\"Method\", values=\"Gates\"\n",
- ")[[\"AI\", \"Depth-LNN-KMS\", \"Basic\"]]\n",
- "pivot_time = results_df.pivot(\n",
- " index=\"Pattern\", columns=\"Method\", values=\"Time (s)\"\n",
- ")[[\"AI\", \"Depth-LNN-KMS\", \"Basic\"]]\n",
- "\n",
- "pivot_depth.plot(kind=\"bar\", ax=axs[0], legend=False)\n",
- "axs[0].set_title(\"Circuit Depth Comparison\")\n",
- "axs[0].set_ylabel(\"Depth\")\n",
- "axs[0].set_xlabel(\"Pattern\")\n",
- "axs[0].tick_params(axis=\"x\", rotation=45)\n",
- "pivot_gates.plot(kind=\"bar\", ax=axs[1], legend=False)\n",
- "axs[1].set_title(\"2Q Gate Count Comparison\")\n",
- "axs[1].set_ylabel(\"Number of 2Q Gates\")\n",
- "axs[1].set_xlabel(\"Pattern\")\n",
- "axs[1].tick_params(axis=\"x\", rotation=45)\n",
- "pivot_time.plot(\n",
- " kind=\"bar\", ax=axs[2], legend=True, title=\"Legend\"\n",
- ") # Show legend on the last plot\n",
- "axs[2].set_title(\"Time Comparison\")\n",
- "axs[2].set_ylabel(\"Time (seconds)\")\n",
- "axs[2].set_xlabel(\"Pattern\")\n",
- "axs[2].tick_params(axis=\"x\", rotation=45)\n",
- "fig.suptitle(\n",
- " \"Benchmarking AI Synthesis Methods vs Non-AI Synthesis Methods For Random Permutations Circuits\",\n",
- " fontsize=16,\n",
- " y=1,\n",
- ")\n",
+ "# -------------------------Step 4-------------------------\n",
+ "result_hw = job_hw.result()\n",
+ "\n",
+ "survival_probs_hw = {}\n",
+ "for i, method in enumerate([\"Default\", \"AI\"]):\n",
+ " counts = result_hw[i].data.meas.get_counts()\n",
+ " mirror = [mirror_default_hw, mirror_ai_hw][i]\n",
+ " all_zeros = \"0\" * mirror.num_qubits\n",
+ " survival = counts.get(all_zeros, 0) / shots_hw\n",
+ " survival_probs_hw[method] = survival\n",
+ " print(\n",
+ " f\"{method:8s} P(|00...0>) = {survival:.4f} ({counts.get(all_zeros, 0)}/{shots_hw})\"\n",
+ " )\n",
"\n",
+ "fig, ax = plt.subplots(figsize=(6, 4))\n",
+ "ax.bar(\n",
+ " survival_probs_hw.keys(),\n",
+ " survival_probs_hw.values(),\n",
+ " color=[\"steelblue\", \"coral\"],\n",
+ ")\n",
+ "ax.set_ylabel(\"P(|0...0>)\")\n",
+ "ax.set_title(f\"Mirror Circuit Fidelity (26-qubit, {backend.name})\")\n",
+ "ax.set_ylim(0, 1)\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
- "id": "03a9af42-42a7-4344-b834-0d2b506d4d78",
+ "id": "hw-analysis",
"metadata": {},
"source": [
- "This graph highlights the individual results for each circuit (`qc_1` to `qc_10`) across different synthesis methods:\n",
+ "### Analysis\n",
"\n",
- "While these results underscore the AI transpiler’s effectiveness for permutation circuits, it is important to note its limitations. The AI synthesis method is currently only available for certain coupling maps, which may restrict its broader applicability. This constraint should be considered when evaluating its usage in different scenarios.\n",
+ "The large-scale results reinforce the trends observed in the small-scale example, now under more demanding conditions.\n",
"\n",
- "Overall, the AI transpiler demonstrates promising improvements in depth and gate count optimization for these specific circuits while maintaining comparable transpilation times."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "41b1405d-fa90-48b6-9ce2-933f05358778",
- "metadata": {},
- "source": [
- "## Step 3: Execute using Qiskit primitives\n",
- "As this tutorial focuses on transpilation, no experiments will be executed on the quantum device. The goal is to leverage the optimizations from Step 2 to obtain a transpiled circuit with reduced depth or gate count."
+ "**Two-qubit depth:** The AI transpiler continues to deliver noticeably lower two-qubit depth across the full range of circuit sizes. Depth optimization is one of the primary objectives the AI routing model is trained on, and the advantage is more pronounced at larger qubit counts where the routing problem becomes harder for heuristic methods.\n",
+ "\n",
+ "**Gate count:** The default transpiler (SABRE) consistently produces circuits with fewer gates across all circuit sizes in this range. SABRE's heuristic is specifically designed to minimize gate count, and at this scale the advantage is clear and uniform.\n",
+ "\n",
+ "**Transpilation time:** The gap widens at larger scales: SABRE's runtime remains nearly constant, while the AI transpiler's grows more steeply. Even so, it remains practical for most workflows.\n",
+ "\n",
+ "**Mirror circuit fidelity:** Both methods produce survival probabilities very close to zero at this scale. With total gate counts around 10,000 and two-qubit depths exceeding 1,000, the depolarizing noise accumulated across the mirror circuit overwhelms any signal. This highlights a key limitation of the mirror circuit approach: while it is simple and requires no classical simulation, it does not scale well to large or deep circuits where the noise floor dominates regardless of transpilation quality."
]
},
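The noise-floor claim can be sanity-checked with a back-of-envelope calculation. Under a uniform depolarizing model (a simplification; real errors are gate- and qubit-dependent), the survival probability decays as $(1-p)^n$ with gate count $n$, so the observed survival implies an effective per-gate error rate:

```python
# Observed values from the hardware run above (Default-transpiled mirror).
survival = 3870 / 500_000   # P(|0...0>) ~ 0.0077
n_gates = 9708              # total gate count of the mirror circuit

# Under a uniform depolarizing model, P ~ (1 - p)^n, so the effective
# per-gate error rate implied by the observed survival is:
p_eff = 1 - survival ** (1 / n_gates)
print(f"Effective per-gate error ~ {p_eff:.2e}")
```

With roughly 10,000 gates, even an effective per-gate error of a few parts in ten thousand drives the survival probability toward zero, which is why both transpilation methods hit the noise floor at this scale.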
{
"cell_type": "markdown",
- "id": "3d942ee4-e4d7-4e87-8c8a-17c662d5379f",
+ "id": "next-steps",
"metadata": {},
"source": [
- "## Step 4: Post-process and return result in desired classical format\n",
- "Since there is no execution for this notebook, there are no results to post-process."
+ "## Next steps\n",
+ "If you found this work interesting, you might be interested in the following material:\n",
+ "\n",
+ "\n",
+ "- [Qiskit transpiler service](https://quantum.cloud.ibm.com/docs/en/guides/qiskit-transpiler-service)\n",
+ "- [Transpilation optimizations with SABRE](https://quantum.cloud.ibm.com/docs/en/tutorials/transpilation-optimizations-with-sabre)\n",
+ "- [Compilation methods for Hamiltonian simulation circuits](https://quantum.cloud.ibm.com/docs/en/tutorials/compilation-methods-for-hamiltonian-simulation-circuits)\n",
+ "\n",
+ ""
]
},
{
"cell_type": "markdown",
- "id": "3b21bb06-7a2b-4181-af59-734c89435d45",
+ "id": "survey",
"metadata": {},
"source": [
"## Tutorial survey\n",
@@ -1167,5 +1088,5 @@
}
},
"nbformat": 4,
- "nbformat_minor": 4
+ "nbformat_minor": 5
}
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/76a3e847-0808-4413-bd0c-c760cd2df3f4-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/76a3e847-0808-4413-bd0c-c760cd2df3f4-0.avif
deleted file mode 100644
index 96f953742d5..00000000000
Binary files a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/76a3e847-0808-4413-bd0c-c760cd2df3f4-0.avif and /dev/null differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/79b8d5d9-0f9d-42ca-9583-8bec17430014-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/79b8d5d9-0f9d-42ca-9583-8bec17430014-0.avif
deleted file mode 100644
index 567e72db183..00000000000
Binary files a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/79b8d5d9-0f9d-42ca-9583-8bec17430014-0.avif and /dev/null differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/84dff2c2-a496-4828-bb8e-08d373816a36-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/84dff2c2-a496-4828-bb8e-08d373816a36-0.avif
deleted file mode 100644
index bb27f5272b3..00000000000
Binary files a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/84dff2c2-a496-4828-bb8e-08d373816a36-0.avif and /dev/null differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/a326f268-0115-442c-8563-968676b66670-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/a326f268-0115-442c-8563-968676b66670-0.avif
deleted file mode 100644
index cc6ab659490..00000000000
Binary files a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/a326f268-0115-442c-8563-968676b66670-0.avif and /dev/null differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/c6e9c2c0-e02c-4276-bae8-d5692e60b6b8-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/c6e9c2c0-e02c-4276-bae8-d5692e60b6b8-0.avif
deleted file mode 100644
index 786dbc4b8de..00000000000
Binary files a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/c6e9c2c0-e02c-4276-bae8-d5692e60b6b8-0.avif and /dev/null differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step2-plot-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step2-plot-0.avif
new file mode 100644
index 00000000000..e6fa5d67b3e
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step2-plot-0.avif differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step2-plot-1.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step2-plot-1.avif
new file mode 100644
index 00000000000..131a32150a8
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step2-plot-1.avif differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step4-1.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step4-1.avif
new file mode 100644
index 00000000000..adcf295a16c
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/hw-step4-1.avif differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step1-code-1.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step1-code-1.avif
new file mode 100644
index 00000000000..8eeb373d131
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step1-code-1.avif differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step2-plot-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step2-plot-0.avif
new file mode 100644
index 00000000000..a98fc5b1bac
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step2-plot-0.avif differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step2-plot-1.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step2-plot-1.avif
new file mode 100644
index 00000000000..9c4f5f0eba8
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step2-plot-1.avif differ
diff --git a/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step4-code-0.avif b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step4-code-0.avif
new file mode 100644
index 00000000000..977847d203f
Binary files /dev/null and b/public/docs/images/tutorials/ai-transpiler-introduction/extracted-outputs/sim-step4-code-0.avif differ