How Quantum Computing Could Supercharge Artificial Intelligence

May 8, 2025

Unless you’ve been living under a rock, you’ve probably heard of ChatGPT, Gemini, and other AI chatbots. They can draft emails, debug code, even write song lyrics. These tools are powered by large language models (LLMs), trained on massive datasets and run on the good old classical computers we use today. Yes, unfortunately, I have to call them “classical” now—because there’s a new kid on the block that could change computing as we know it. 

Whenever I mention quantum computing, the room suddenly goes quiet. Cue puzzled faces—and in more extreme cases, heated debates about String Theory. 

Over the past year, I’ve been on a personal mission to understand why. I’ve devoured research papers, experimented with quantum programming tools like Qiskit and Q#, explored demos, and had deep conversations with some of the folks building these machines. What follows is a plain-English guide to what I’ve learned: no equations, just the essentials.

Here’s your fair warning: quantum computing isn’t just about going faster. It’s a fundamentally different way of thinking. Concepts like superposition and entanglement seem almost magical because they defy our everyday logic of black-or-white, 0-or-1 reasoning. In the quantum world, a bit (called a qubit) can be both black and white at the same time — boom, mind blown! 

If that mental leap doesn’t scare you off, then buckle up—we’re about to explore one of the most fascinating frontiers in tech today. 

Why pair AI and quantum? 

A decade ago, a school-age child asking a phone, “What’s the capital of Iceland?” felt like science fiction. Today that exchange is mundane, and it happens millions of times a day. Behind the scenes, the artificial intelligence (AI) models that power voice assistants, route delivery vans, and spot tumors in X-rays consume warehouses of classical chips and enough electricity to power a small town. Yet they still choke on problems like inventing a truly climate-friendly battery or mapping every protein’s shape.

Enter quantum computing. Instead of pushing more current through ever-smaller transistors, quantum machines tap into the uncanny rules that govern atoms. In principle they can weigh many answers at once, giving AI the fresh horsepower it needs to crack humanity’s hardest puzzles.

Artificial Intelligence in plain English 

One-sentence definition: AI is software that learns from experience so it can make smarter decisions next time.

A home voice assistant improves at recognizing your accent after a few mornings of practice. A spam filter picks up new scams and blocks them automatically. An art generator studies thousands of corgi photos and then paints its own cartoon corgi in a top hat. All these feats run on classical computers: chips whose tiny switches flip between on (1) and off (0) billions of times per second.

How does learning happen? In a loop: 

  1. See: the model is shown data—photos, conversations, traffic videos.
  2. Guess: it labels the input (“dog”/“not dog”).
  3. Compare: it checks that guess against the right answer and measures the error.
  4. Tweak: it nudges internal “weights” to shrink that error next time.

After millions of loops, the model gets good; the toy sketch below runs this loop in miniature. Stanford’s AI100 project wraps this up neatly: AI is the quest to make machines “function appropriately and with foresight in their environment.” [1]
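
To make the loop concrete, here is a minimal sketch in plain Python. The four data points and the single adjustable weight are invented for illustration; real models juggle billions of weights, but the see-guess-compare-tweak rhythm is the same.

```python
# Toy version of the see / guess / compare / tweak loop.
# The data and the single "weight" are made up; real models have billions of weights.

# See: each item is (input value, correct label)
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

weight = 0.0          # the model's one adjustable knob
learning_rate = 0.5   # how hard to nudge after a mistake

for _ in range(1000):                            # "millions of loops" in real life
    for x, truth in data:
        guess = 1 if x * weight >= 0.5 else 0    # Guess
        error = truth - guess                    # Compare
        weight += learning_rate * error * x      # Tweak

print(f"learned weight: {weight:.2f}")  # the rule x * weight >= 0.5 now separates the labels
```

Swap the toy data for trillions of words and the one knob for billions of weights, and you have the recipe behind today’s chatbots.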

Quantum Computing in plain English 

Classical bits are either 0 or 1. Quantum bits, or qubits, can be both 0 and 1 at the same time thanks to superposition. Stranger still, two qubits can become entangled so that nudging one instantly nudges the other—no matter how far apart they are. Those two tricks let quantum processors explore many routes through a problem simultaneously. 

| Quantum rule | Everyday analogy | What it means for computing |
| --- | --- | --- |
| Superposition – a qubit can be 0 and 1 at once | A dimmer switch that is both fully on and fully off, until you peek | The machine can weigh lots of possible answers in parallel |
| Entanglement – qubits share a linked fate | Two magic coins that always land on opposite sides, even across oceans | Fixing one piece of the puzzle instantly updates the others |
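
To see both rules in a few lines of code, here is a minimal sketch using Qiskit, one of the toolkits mentioned earlier. It assumes the qiskit and qiskit-aer packages are installed; the extra flip on the second qubit just makes the pair behave like the opposite-sided coins in the table.

```python
# Minimal sketch: superposition and entanglement in Qiskit.
# Assumes the qiskit and qiskit-aer packages are installed.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)        # superposition: qubit 0 is now 0 and 1 at once
qc.cx(0, 1)    # entanglement: qubit 1's fate is now linked to qubit 0's
qc.x(1)        # flip qubit 1 so the pair mimics the "opposite sides" coins
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)  # only '01' and '10' appear: the two qubits always disagree
```

Run it a thousand times and you never see ‘00’ or ‘11’; measuring one qubit instantly fixes what the other must read.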

Example 1 — The coin-toss phone book
Picture rifling through 10,000 messy pages to find one name. A classical computer flips them one by one. A quantum trick called Grover’s search needs only about √10,000 ≈ 100 lookups, like 100 magic coin tosses resolving the whole book in a blink.
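
If you want to see the arithmetic behind that claim, here is a purely classical simulation of Grover’s amplitude trick on a toy 16-page book; the page count and the secret target index are made up, and a quantum chip would perform these reflections natively rather than in a Python loop.

```python
# Pencil-and-paper simulation of Grover's search on a toy "phone book".
# Not a quantum program: it just mimics the arithmetic a quantum chip does natively.
import math

N = 16        # pages in the toy book (the example above used 10,000)
target = 11   # the page we are secretly looking for

amps = [1 / math.sqrt(N)] * N                 # start: every page equally likely
rounds = round(math.pi / 4 * math.sqrt(N))    # about sqrt(N) rounds; here, 3

for _ in range(rounds):
    amps[target] = -amps[target]              # "oracle": flag the right page
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]       # reflect every amplitude about the mean

print(f"chance of landing on the right page: {amps[target] ** 2:.2f}")
# ~0.96 after just 3 rounds; flipping pages one by one could take all 16 looks.
```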

Example 2 — Cracking tough locks
Modern encryption banks on the fact that factoring a 2048-bit number would take classical supercomputers longer than the age of the universe. A quantum algorithm known as Shor’s could, in theory, do it in hours once large, error-free machines exist.
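
For a taste of how Shor’s algorithm does it, here is its classical skeleton on a toy number (15, with base 7, both picked purely for illustration). The only step a quantum computer accelerates is the period-finding loop, which is brute-forced below.

```python
# Classical skeleton of Shor's algorithm on a toy number.
# A quantum computer would replace only the period-finding loop,
# doing it exponentially faster for numbers thousands of bits long.
from math import gcd

N = 15   # toy number; real encryption keys are 2048 bits
a = 7    # a base that shares no factor with N

# Find the period r: the smallest r with a**r % N == 1
r = 1
while pow(a, r, N) != 1:
    r += 1

# Turn the period into factors of N
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"period {r}, so {N} = {p} x {q}")   # period 4, so 15 = 3 x 5
```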

Example 3 — Simulating the weather inside a battery
Chemists dream of watching electrons swirl through a next-gen battery in real time. Classical models bog down after a handful of atoms. Quantum simulators speak the electrons’ native language, making such movies possible (data gold for AI). The U.S. National Institute of Standards and Technology (NIST) calls current hardware “rudimentary,” but the upside for specialised tasks is enormous. [2]
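
A quick back-of-the-envelope calculation shows where classical simulation hits the wall: an exact description of n interacting quantum particles needs 2^n complex numbers. The particle counts below are arbitrary illustration points.

```python
# Why exact classical simulation "bogs down after a handful of atoms":
# the state of n two-level quantum particles needs 2**n complex numbers.
for n in (10, 30, 50, 80):
    amplitudes = 2 ** n
    memory_gb = amplitudes * 16 / 1e9   # roughly 16 bytes per complex number
    print(f"{n:3d} particles -> {amplitudes:.1e} numbers, ~{memory_gb:.1e} GB of RAM")
# At 50 particles the table already demands tens of millions of gigabytes.
```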

How quantum could supercharge AI: the longer view

AI’s biggest headaches today—and how quantum might ease them 

| Current bottleneck | Quantum-era boost |
| --- | --- |
| Training drag – Tweaking billions of neural-network weights can tie up fleets of GPUs for weeks. | Quantum linear-algebra routines (e.g., fast matrix inversion) could slash some training jobs from days to minutes for image or language tasks. [3] |
| Inference overload – Hunting for needles in enormous haystacks (for example, “Which candidate drug molecule best binds this protein?”) still stumps AI. | Quantum sampling treats probability as a native operation, promising sharper answers for drug design, climate risk, and finance. [2] |
| Combinatorial explosions – Routing 1,000 delivery vans has more possible schedules than atoms in the universe. | Hybrid algorithms like the Quantum Approximate Optimisation Algorithm (QAOA) already test small logistics puzzles on cloud-based quantum chips (see the sketch after this table). [3] |
| Limited data – Classical AI can only learn from things humans have already measured. | Quantum simulators generate new data (exotic molecules, novel magnetic materials), feeding AI fresh knowledge unattainable in ordinary labs. [2] |
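
To give a feel for how a hybrid algorithm like QAOA looks in practice, here is a hedged sketch of a single QAOA layer for a toy three-node Max-Cut problem, written with Qiskit. The graph, the angles gamma and beta, and the shot count are all illustrative placeholders; in a real hybrid loop a classical optimiser would tune those angles between runs.

```python
# Hedged sketch: one QAOA layer for a toy 3-node Max-Cut problem.
# Graph, angles, and shot count are placeholders, not tuned values.
# Assumes the qiskit and qiskit-aer packages are installed.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle: try to "cut" as many edges as possible
gamma, beta = 0.8, 0.4             # placeholder angles

qc = QuantumCircuit(3)
for q in range(3):
    qc.h(q)                        # start in superposition over every possible cut
for i, j in edges:
    qc.rzz(2 * gamma, i, j)        # cost layer: phase each edge by whether it is cut
for q in range(3):
    qc.rx(2 * beta, q)             # mixer layer: shuffle amplitude between cuts
qc.measure_all()

counts = AerSimulator().run(qc, shots=2000).result().get_counts()
print(sorted(counts.items(), key=lambda kv: -kv[1])[:4])  # most frequent cuts
```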

Early theoretical work in journals such as Nature points to exponential savings in memory or compute for specific pattern-recognition tasks once machines reach a few thousand stable qubits. [3]

Where are we now? Early signs it’s happening (2024 – 2025 snapshot) 

IBM’s 1,121-qubit “Condor” processor (Dec 2023) is the first superconducting chip to cross the 1,000-qubit mark. Designed as a research platform, Condor is not yet broadly available, but it represents a major step in scaling quantum hardware. It’s already enabling early-stage experiments in quantum workflows, including quantum chemistry simulations. [4]

Google’s 105-qubit “Willow” chip (Dec 2024) achieved a landmark benchmark: completing a computation in five minutes that would take classical supercomputers an estimated 10 septillion years. The breakthrough is credited to advanced quantum error correction techniques, pointing to real progress in making quantum advantage practical. [5]

Microsoft’s “Majorana 1” prototype (Feb 2025) entered cryogenic testing with its first-ever eight topological qubits. Based on Majorana zero modes, this exotic architecture aims to make future quantum processors significantly more stable than today’s superconducting designs—potentially revolutionizing error correction and fault tolerance. 

Quantinuum’s H2-1 trapped-ion system (2024) expanded from 32 to 56 fully connected qubits, setting new records in circuit fidelity. This makes it one of the most accurate quantum machines available today, particularly well-suited for running complex quantum algorithms thanks to its all-to-all qubit connectivity. 

The roadblocks (and a rough timeline) 

| Stage | What must be solved? | Plausible arrival |
| --- | --- | --- |
| Noisy intermediate-scale quantum (NISQ) | Calibrate and stabilise 100-to-1,000-qubit chips; refine hybrid algorithms; prove a clear economic win. | Happening now through 2028 |
| Fault-tolerant era | Build millions of physical qubits plus error-correction layers to create thousands of logical qubits. | Late 2020s to early 2030s |
| Scaled, application-specific era | Special-purpose quantum-AI accelerators baked into cloud and edge devices. | 2030s and beyond |

Ethical and societal questions 

  1. Security shock – Quantum-ready codebreakers will make today’s banking passwords obsolete; governments are already racing to deploy post-quantum encryption.
  2. Skill gaps – Marrying AI and quantum demands new expertise: “quantum-ML engineer” was unheard of five years ago.
  3. Greener compute? – Quantum gates draw little power, but cooling qubits to near absolute zero and running classical control hardware is energy-intensive. Lifecycle studies will be key to real sustainability gains.

Bottom line 

AI gives machines intuition; quantum gives them a microscope into nature. The handshake has begun. If engineers can tame the noise and scale up qubits, tomorrow’s AIs may not just learn from the world—they could discover worlds we’ve never seen. 

References 

[1] Stanford University, AI100 Project Report (2016). 
[2] U.S. National Institute of Standards and Technology, “Quantum Computing Explained” factsheet (2025). 
[3] Nature, “The AI–Quantum Computing Mash-Up” (2023).
[4] IBM Research Blog, “The hardware and software for the era of quantum utility is here” (Dec 2023). 
[5] Reuters, “Google says it has cracked a quantum computing challenge with new chip” (Dec 9, 2024).
