Quantum phenomena have puzzled and delighted scientists for over a century, revealing unique, counter-intuitive characteristics of matter like superposition and entanglement. For four decades, the U.S. National Science Foundation has worked to enable breakthroughs in quantum information science and engineering that harness what researchers have learned about quantum phenomena to develop technologies like quantum computers, sensors, and communications. These quantum technologies will have enormous consequences for the national and global economy. To unleash that potential, researchers must overcome several major, fundamental challenges in quantum information science and engineering.

With these unresolved questions in mind, NSF launched the Quantum Leap Challenge Institutes program. Today, NSF, in partnership with the White House Office of Science and Technology Policy, is announcing $75 million for three new institutes designed to make tangible progress on these problems over the next five years.

These institutes are a central piece of NSF’s response to key federal initiatives to advance quantum information science, including the National Quantum Initiative Act of 2018 and the White House’s ongoing focus on American leadership in emerging technologies. Quantum Leap Challenge Institutes also form the centerpiece of NSF’s Quantum Leap, an ongoing, agency-wide effort to enable quantum systems research and development.

“Quantum information science has the potential to change the world. But to realize that potential, we must first answer some fundamental research questions,” said NSF Director Sethuraman Panchanathan. “Through the Quantum Leap Challenge Institutes, NSF is making targeted investments. Within five years, we are confident these institutes can make tangible advances to help carry us into a true quantum revolution.”

“America’s future depends on our continued leadership in the most cutting-edge industries of tomorrow. With the announcement of three new quantum institutes, the Trump Administration is making a bold statement that the United States will remain the global home for QIS research. Our new Quantum Leap Challenge Institutes will advance America’s long history of breakthrough discoveries and generate critical advancements for years to come,” said Michael Kratsios, U.S. Chief Technology Officer.

NSF is establishing three institutes:

NSF Quantum Leap Challenge Institute for Enhanced Sensing and Distribution Using Correlated Quantum States. Quantum sensors that can measure everything from radiation levels to the effects of gravity will be more sensitive and accurate than classical sensors. This institute, led by the University of Colorado, will design, build, and employ quantum sensing technology for a wide variety of applications in precision measurement.

NSF Quantum Leap Challenge Institute for Hybrid Quantum Architectures and Networks. Developing more robust quantum processors is a significant challenge in quantum information science and engineering. This institute, led by the University of Illinois at Urbana-Champaign, will build interconnected networks of small-scale quantum processors and test their functionality for practical applications.

NSF Quantum Leap Challenge Institute for Present and Future Quantum Computing. Today’s quantum computing prototypes are rudimentary, error-prone, and small-scale. This institute, led by the University of California, Berkeley, plans to learn from these to design advanced, large-scale quantum computers, develop efficient algorithms for current and future quantum computing platforms, and ultimately demonstrate that quantum computers outperform even the best conceivable classical computers.

The institutes comprise an interconnected community of 16 core academic institutions, 8 national laboratories, and 22 industry partners. By integrating the perspectives and resources of multiple disciplines and sectors, they promote a sustainable ecosystem for innovation. In addition to their research, these centers will also make strides in training and educating a diverse, quantum-ready U.S. workforce. They will develop new in-person and online curricula for students and teachers at every level, from primary school through professional training.

Let’s say you’re interested in buying a car, so you visit a new car dealership. Imagine your disappointment if you learned that the only information the salesperson can give you is the number of seats in each vehicle.

How crazy would that be? Of course, to make sure a car fits your requirements, you need a lot more information.

What color is the car? Does it have the right accessories? And, what kind of gas mileage does it get? Ultimately, you would also like to know how it performs under different conditions, like driving in town and on the highway. Only when armed with more information could you make an informed decision about buying a car or making a comparison between different vehicles.

Until now, that’s pretty much how we have evaluated quantum computers. The focus has mainly been on the number of qubits in a quantum computer while ignoring many other important factors affecting its computational ability.

Before Google announced it had achieved quantum supremacy, the media speculated on how many qubits would be needed to outperform a classical computer. Setting aside the controversy over the technical aspects of Google’s accomplishment, the company did reach that benchmark. Of course, most articles focused on the fact that Google used a 54-qubit quantum processor.

What’s next after quantum supremacy?

Beyond quantum supremacy, the next major benchmark, called quantum advantage, is on the distant horizon. Quantum advantage will exist when programmable, gate-based (circuit-based) NISQ computers reach a degree of technical maturity that allows them to solve many, though not necessarily all, significant real-world problems that classical computers cannot solve, or that classical machines would require an exponential amount of time to solve.
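
To make "exponential time" concrete, here is a toy scaling comparison (illustrative arithmetic only, not a benchmark of any real machine or algorithm): a brute-force approach whose cost doubles with every added problem bit quickly becomes intractable, while a polynomial-cost approach stays feasible.

```python
# Toy comparison of polynomial vs. exponential scaling, illustrating why
# an exponential-time classical task becomes out of reach as problems grow.

def steps_exponential(n: int) -> int:
    return 2 ** n  # e.g., brute force over n binary variables

def steps_polynomial(n: int) -> int:
    return n ** 3  # e.g., a cubic-time algorithm

for n in (10, 50, 100):
    print(n, steps_polynomial(n), steps_exponential(n))

# At n = 100, 2**100 is about 1.3e30 steps -- far beyond any classical
# machine -- while 100**3 = 1,000,000 steps remains trivial.
```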

World-changing applications will be possible once we reach quantum advantage. The major applications will likely include optimization, chemistry, machine learning, and organic simulations.

From Quantum Supremacy to Quantum Advantage

Although the number of qubits is essential, so is the number of operations that can be completed before the qubits lose their quantum states. Qubits decohere either because of noise or because of their inherent properties. For those reasons, building quantum computers capable of solving deeper, more complex problems is not simply a matter of increasing the number of qubits.
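
A rough first-order model makes the point (the error rates below are made-up illustrative numbers, not measurements from any real device): if each gate fails independently with some small probability, circuit fidelity decays geometrically with gate count, so deeper circuits demand lower per-gate error rates, not just more qubits.

```python
# Toy model: estimated circuit fidelity as a function of gate count,
# assuming independent, uniform per-gate error -- a common first-order
# approximation, not a model of any specific hardware.

def circuit_fidelity(gate_error: float, num_gates: int) -> float:
    """Probability that every gate in the circuit succeeds."""
    return (1.0 - gate_error) ** num_gates

# With a 0.5% per-gate error rate, a 100-gate circuit already loses
# roughly 40% of its fidelity.
print(circuit_fidelity(0.005, 100))   # ~0.61
# A 1,000-gate circuit at the same error rate is essentially all noise.
print(circuit_fidelity(0.005, 1000))  # ~0.0067
```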

Before we can move from machines with a hundred qubits to one with thousands of qubits, and eventually to one with millions of qubits, many significant technical issues remain to be solved. None of these problems have overnight solutions; it will likely take another five or ten years of incremental research, experimentation, and steady technical improvement to find answers.

Configuration changes – subtle and significant – can dramatically affect the performance of a quantum computer. Determining the optimum quantum computer configuration is much like solving a Rubik’s Cube. In the process of trying to align all the blue tiles on one surface, the previously solved red surface becomes disrupted.

The power of Quantum Volume

It would be helpful if quantum researchers had a tool that allowed them to systematically measure and understand how incremental technology, configuration, and design changes affected a quantum computer’s overall power and performance. Corporate users also need a way to compare the relative power of one quantum computer to another.

IBM foresaw the need for such a metric in 2017 when its researchers developed a full-system performance measurement called Quantum Volume.

Quantum Volume’s numerical value indicates the relative complexity of the problems a quantum computer can solve. The number of qubits and the number of operations that can be performed are called the width and depth of a quantum circuit. The deeper the circuit, the more complex an algorithm the computer can run. Circuit depth is influenced by factors such as the number of qubits, how the qubits are interconnected, gate and measurement errors, device crosstalk, circuit compiler efficiency, and more.

That’s where Quantum Volume comes in. It analyzes the collective performance and efficiency of these factors, then produces a single, easy-to-understand Quantum Volume number. The larger the number, the more powerful the quantum computer.
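
A minimal sketch shows how the metric combines width and depth into one number. In IBM's formulation, log2 of the Quantum Volume is the largest n such that the machine can successfully run model circuits that are both n qubits wide and n layers deep; the depth figures below are hypothetical numbers for illustration, and the real protocol determines achievable depth with randomized model circuits and a statistical heavy-output test.

```python
# Sketch of the Quantum Volume formula: log2(QV) is the largest n for
# which the machine can run model circuits of width n AND depth n.
# achievable_depth maps circuit width -> deepest circuit of that width
# the machine runs successfully (hypothetical values, illustration only).

def quantum_volume(achievable_depth: dict) -> int:
    best = max(min(width, depth) for width, depth in achievable_depth.items())
    return 2 ** best

# A hypothetical 20-qubit machine whose noise limits usable depth:
depths = {4: 12, 8: 6, 16: 3, 20: 2}
print(quantum_volume(depths))  # 2**6 = 64: limited by depth, not qubit count
```

Note how the hypothetical machine's Quantum Volume is set at width 8, not at its full 20 qubits: adding qubits without improving depth does not raise the number, which is exactly the point of the metric.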

Demonstrated incremental improvements

Quantum Volume can be used to compare the power of one quantum computer to another. It can also play a significant role in ongoing development and research necessary to create bigger and better quantum computers to achieve a quantum advantage.

IBM tested Quantum Volume on three of its quantum computers: the five-qubit Tenerife, released through the IBM Q Experience quantum cloud service in 2017; the 20-qubit Tokyo, released in 2018; and the 20-qubit IBM Q System One, released in 2019.

The Quantum Volume growth chart shows how Quantum Volume has doubled each successive year.

Quantum Volume as a research tool

Jay Gambetta, IBM Vice President, Quantum Computing, posted graphs online showing progressive improvement in CNOT error rates as a result of various changes. In the post, he said: “With four revisions of our 20-qubit quantum system (chip name “Penguin”) plotted together, you can really see the progress in stability, scalability, and reduction of errors over the last two years.”

When asked for more detail on how Quantum Volume assisted in these improvements, the IBM research team said: “These plots are a great example of why qubit counts alone do not tell the whole story. Over several revisions of the architecture, the CNOT error rates improved significantly – which is shown in the plots; and that has a significant impact on improving quantum volume from revision to revision. There are many other performance metrics and architectural choices that impact the quantum volume of a system as well – for instance how the qubits are laid out and connected to one another. Maximizing quantum volume is about making trade-offs between these different parameters – and identifying new ways to push those boundaries. This is why the quantum volume metric is so important; otherwise it would be quite possible to drive a couple of parameters to report on publicly yet to the detriment of overall quantum volume as it can be measured in a single system.”

Optimizing with Quantum Volume

For the foreseeable future, quantum computers will use noisy qubits with relatively high error rates and short coherence times. We are still in the experimental stages of error correction. We know functional error correction will likely require a thousand or more error-correction qubits for every computational qubit. Decoherence of quantum states is a significant obstacle we will have to overcome to build scalable and reliable quantum computers.
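
The overhead arithmetic implied by that ratio is sobering. The sketch below just multiplies out the article's rough 1,000:1 estimate; real overheads depend on the error-correcting code and the physical error rate.

```python
# Rough overhead estimate: physical qubits needed for a fault-tolerant
# machine, using the article's figure of ~1,000 error-correction qubits
# per computational (logical) qubit. Illustrative arithmetic only.

OVERHEAD = 1_000  # error-correction qubits per computational qubit

def physical_qubits(logical_qubits: int, overhead: int = OVERHEAD) -> int:
    # Each computational qubit plus its error-correction helpers.
    return logical_qubits * (overhead + 1)

# Even a modest 100-logical-qubit machine needs ~100,000 physical qubits,
# and 1,000 logical qubits pushes the total toward a million.
print(physical_qubits(100))    # 100100
print(physical_qubits(1_000))  # 1001000
```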

Simple logic and the Quantum Volume growth chart tell us that to reach quantum advantage by 2025, we need quantum computers with much higher Quantum Volumes, perhaps with a numerical value of 1,000 or more.

Quantum Volume and the quantum computing ecosystem

Most of today’s benchmarking is done at the component level, for qubits and quantum logic gates. Other benchmarking methods have also been investigated, as described in this excerpt from IBM’s paper on Quantum Volume:

“Methods such as randomized benchmarking, state and process tomography, and gate set tomography are valued for measuring the performance of operations on a few qubits, yet they fail to account for errors arising from interactions with spectator qubits. Given a system such as this, whose individual gate operations have been independently calibrated and verified, how do we measure the degree to which the system performs as a general-purpose quantum computer? We address this question by introducing a single-number metric, the quantum volume, together with a concrete protocol for measuring it on near-term systems. Similar to how LINPACK and improved benchmarks are used for comparing diverse classical computers, this metric is not tailored to any particular system, requiring only the ability to implement a universal set of quantum gates.” – IBM Research

With quantum computing now in the zone between quantum supremacy and quantum advantage, interim performance benchmarks are needed more than ever. The use of Quantum Volume offers many readily available operational and business benefits to gate-based and circuit-based quantum computer companies.

IBM’s published results demonstrate Quantum Volume’s benefits. It can facilitate research, help develop system roadmaps, and help to systematically optimize architectures needed to advance technology to quantum advantage and beyond.

Every researcher understands that funding for quantum computing research requires a favorable public opinion of the technology. Unfortunately, because of quantum computing’s complexity and the long intervals between significant announcements, the general public understands very little about it. Wider use of Quantum Volume would give the public a better feel for progress and generate more public interest.

Quantum Volume would also benefit the CEO or investor who lacks the in-depth technical knowledge necessary to make confident investment decisions in the technology. Additionally, reported variations in Quantum Volume from company to company would likely stimulate more articles by the media, which would serve to educate the general public further.

Summary

There should be no question that the development and ongoing support of Quantum Volume demonstrates IBM’s long-term commitment to the overall quantum computing ecosystem.

Organizations like the IEEE need to assist in the long-term evolution and development of quantum computing benchmarks. Unfortunately, it will take years to complete the investigation, documentation, and negotiation of a final standard.

The bottom line – there are many benefits to using Quantum Volume. Moor Insights & Strategy believes it is a powerful tool that should be adopted as an interim benchmarking tool by other gate-based quantum computer companies. When the time comes, it should also be considered by IEEE as a permanent standard.

In a surprising new study, University of California, Berkeley, researchers show that heat energy can travel through a complete vacuum thanks to invisible quantum fluctuations. To conduct the challenging experiment, the team engineered extremely thin silicon nitride membranes, which they fabricated in a dust-free clean room, and then used optical and electronic components to precisely control and monitor the temperature of the membranes when they were locked inside a vacuum chamber. (UC Berkeley photo by Violet Carter)

If you use a vacuum-insulated thermos to help keep your coffee hot, you may know it’s a good insulator because heat energy has a hard time moving through empty space. Vibrations of atoms or molecules, which carry thermal energy, simply can’t travel if there are no atoms or molecules around.

But a new study by researchers at the University of California, Berkeley, shows how the weirdness of quantum mechanics can turn even this basic tenet of classical physics on its head.

The study, appearing this week in the journal Nature, shows that heat energy can leap across a few hundred nanometers of a vacuum, thanks to a quantum mechanical phenomenon called the Casimir interaction.

Though this interaction is only significant on very short length scales, it could have profound implications for the design of computer chips and other nanoscale electronic components where heat dissipation is key. It also upends what many of us learned about heat transfer in high school physics.

In the experiment, the team showed that heat energy, in the form of molecular vibrations, can flow from a hot membrane to a cold membrane even in a complete vacuum. This is possible because everything in the universe is connected by invisible quantum fluctuations. (UC Berkeley image courtesy of the Zhang lab)

“Heat is usually conducted in a solid through the vibrations of atoms or molecules, or so-called phonons — but in a vacuum, there is no physical medium. So, for many years, textbooks told us that phonons cannot travel through a vacuum,” said Xiang Zhang, the professor of mechanical engineering at UC Berkeley who guided the study. “What we discovered, surprisingly, is that phonons can indeed be transferred across a vacuum by invisible quantum fluctuations.”

In the experiment, Zhang’s team placed two gold-coated silicon nitride membranes a few hundred nanometers apart inside a vacuum chamber. When they heated up one of the membranes, the other warmed up, too — even though there was nothing connecting the two membranes and negligible light energy passing between them.

“This discovery of a new mechanism of heat transfer opens up unprecedented opportunities for thermal management at the nanoscale, which is important for high-speed computation and data storage,” said Hao-Kun Li, a former Ph.D. student in Zhang’s group and co-first author of the study. “Now, we can engineer the quantum vacuum to extract heat in integrated circuits.”

No such thing as empty space

The seemingly impossible feat of moving molecular vibrations across a vacuum can be accomplished because, according to quantum mechanics, there is no such thing as truly empty space, said King Yan Fong, a former postdoctoral scholar at UC Berkeley and the study’s other first author.

“Even if you have empty space — no matter, no light — quantum mechanics says it cannot be truly empty. There are still some quantum field fluctuations in a vacuum,” Fong said. “These fluctuations give rise to a force that connects two objects, which is called the Casimir interaction. So, when one object heats up and starts shaking and oscillating, that motion can actually be transmitted to the other object across the vacuum because of these quantum fluctuations.”

The team used highly sensitive optics to monitor the temperature of the silicon nitride membranes during the experiment. (UC Berkeley photo by Violet Carter)

Though theorists have long speculated that the Casimir interaction could help molecular vibrations travel through empty space, proving it experimentally has been a major challenge. To do so, the team engineered extremely thin silicon nitride membranes, which they fabricated in a dust-free clean room, and then devised a way to precisely control and monitor their temperature.

They found that, by carefully selecting the size and design of the membranes, they could transfer the heat energy over a few hundred nanometers of a vacuum. This distance was far enough that other possible modes of heat transfer were negligible — such as energy carried by electromagnetic radiation, which is how energy from the sun heats up Earth.

Because molecular vibrations are also the basis of the sounds that we hear, this discovery hints that sound can also travel through a vacuum, Zhang said.

“Twenty-five years ago, during my Ph.D. qualifying exam at Berkeley, one professor asked me ‘Why can you hear my voice across this table?’ I answered that, ‘It is because your sound travels by vibrating molecules in the air.’ He further asked, ‘What if we suck all air molecules out of this room? Can you still hear me?’ I said, ‘No, because there is no medium to vibrate,’” Zhang said. “Today, what we discovered is a surprising new mode of heat conduction across a vacuum without a medium, which is achieved by the intriguing quantum vacuum fluctuations. So, I was wrong in my 1994 exam. Now, you can shout through a vacuum.”

Co-authors of the paper include Rongkuo Zhao, Sui Yang and Yuan Wang of UC Berkeley.

This research was funded in part by the National Science Foundation (NSF) under grant 1725335, the King Abdullah University of Science and Technology Office of Sponsored Research (OSR) (award OSR-2016-CRG5-2950-03; OSR-2016-CRG5-2996) and the Ernest S. Kuh Endowed Chair in Engineering.

Quantum computing pioneer John Martinis may have moved on from Google’s quantum computing project, but he hasn’t moved on from his belief that quantum computers will take on real world challenges.

The University of California Santa Barbara physicist told the participants of the BCI Summit, a virtual conference on the latest advances in quantum computing held yesterday, that, while challenges remain, he was optimistic.

“This is an exciting time for many groups who are working on quantum computing,” said Martinis. “We are now building machines and building algorithms. Of course, having real applications is difficult because classical computers are so good. It’s a very healthy time right now.”

What’s After Quantum Supremacy?

Martinis was one of the researchers who spearheaded the work that led to Google’s quantum supremacy announcement, noting that the achievement helped win belief and buy-in from Google executives for further development. He added that the team has studies currently awaiting publication that may demonstrate that quantum computers can solve real-world problems.

“Since then we have submitted two new papers in review using the Sycamore chip where they are looking to solve real world problems,” said Martinis. “These are significant improvements of what anyone has been doing. One of the papers has to deal with optimization, second one has to do with quantum chemistry. The important result here is that even though we used a NISQ-era computer, we used techniques that improved the fidelity of the operations by more than a factor of 100. The paper will be coming out shortly.”

Error correction remains an area of considerable interest for the researcher.

“I am trying to work on a timeline of building a big error corrected machine,” said Martinis. “There are a lot of unknowns there. The machines have low enough errors that you will be able to do some significantly good science. When it becomes useful may take 10 years or more. If we’re lucky it could be 3-5 years. The quantum supremacy announcement represented a good milestone to get to the error corrected machine. We just need to get in the lab and get all that to work.”

Martinis has been a professor at UC Santa Barbara since 2004 and joined Google as a research scientist in 2014. He recently announced his resignation from Google’s quantum computing project.

Honeywell in The Emergent Era

Tony Uttley, president of Honeywell Quantum Solutions, who also spoke at the event, agreed with Martinis’ optimism and said the quantum era has begun.

“Do Quantum Computers work? Yes,” said Uttley. “We have gone past this stage where they did not exist. We are in the emergent era where we have crossed the divide from quantum computers not existing to existing.”

Honeywell, a dark horse in the quantum computing race, recently announced its advances in creating a trapped-ion quantum computer, one rumored to rival any current quantum device. The company remains confident it will officially launch in the coming months. Watch this space.

The summit also included Thierry L. Kahane, AI & Analytics practice leader for North America at Fujitsu America, and Bill Reichert, co-founder of MDGarage Tech Ventures and partner at Pegasus Tech Ventures.

The BCI network comprises top tier banks, active venture investors, Fortune 100s, leading corporations and government entities.

Today, in collaboration with the University of Waterloo, X, and Volkswagen, we announce the release of TensorFlow Quantum (TFQ), an open-source library for the rapid prototyping of quantum ML models. TFQ provides the tools necessary for bringing the quantum computing and machine learning research communities together to control and model natural or artificial quantum systems, e.g., Noisy Intermediate Scale Quantum (NISQ) processors with ~50–100 qubits.

Under the hood, TFQ integrates Cirq with TensorFlow, and offers high-level abstractions for the design and implementation of both discriminative and generative quantum-classical models by providing quantum computing primitives compatible with existing TensorFlow APIs, along with high-performance quantum circuit simulators.

What is a Quantum ML Model?

A quantum model has the ability to represent and generalize data with a quantum mechanical origin. However, to understand quantum models, two concepts must be introduced – quantum data and hybrid quantum-classical models.

A technical, but key, insight is that quantum data generated by NISQ processors are noisy and are typically entangled just before the measurement occurs. However, applying quantum machine learning to noisy entangled quantum data can maximize extraction of useful classical information. Inspired by these techniques, the TFQ library provides primitives for the development of models that disentangle and generalize correlations in quantum data, opening up opportunities to improve existing quantum algorithms or discover new quantum algorithms.

The second concept to introduce is hybrid quantum-classical models. Because near-term quantum processors are still fairly small and noisy, quantum models cannot use quantum processors alone — NISQ processors will need to work in concert with classical processors to become effective. As TensorFlow already supports heterogeneous computing across CPUs, GPUs, and TPUs, it is a natural platform for experimenting with hybrid quantum-classical algorithms.

TFQ contains the basic structures, such as qubits, gates, circuits, and measurement operators that are required for specifying quantum computations. User-specified quantum computations can then be executed in simulation or on real hardware. Cirq also contains substantial machinery that helps users design efficient algorithms for NISQ machines, such as compilers and schedulers, and enables the implementation of hybrid quantum-classical algorithms to run on quantum circuit simulators, and eventually on quantum processors.

We’ve used TensorFlow Quantum for hybrid quantum-classical convolutional neural networks, machine learning for quantum control, layer-wise learning for quantum neural networks, quantum dynamics learning, generative modeling of mixed quantum states, and learning to learn with quantum neural networks via classical recurrent neural networks. We provide a review of these quantum applications in the TFQ white paper; each example can be run in-browser via Colab from our research repository.

How TFQ works

TFQ allows researchers to construct quantum datasets, quantum models, and classical control parameters as tensors in a single computational graph. The outcome of quantum measurements, leading to classical probabilistic events, is obtained by TensorFlow Ops. Training can be done using standard Keras functions.

To provide some intuition on how to use quantum data, consider the supervised classification of quantum states using a quantum neural network. Just like classical ML, a key challenge of quantum ML is to classify “noisy data”. To build and train such a model, the researcher can do the following:

Prepare a quantum dataset – Quantum data is loaded as tensors (a multi-dimensional array of numbers). Each quantum data tensor is specified as a quantum circuit written in Cirq that generates quantum data on the fly. The tensor is executed by TensorFlow on the quantum computer to generate a quantum dataset.

Evaluate a quantum neural network model – The researcher can prototype a quantum neural network using Cirq that they will later embed inside of a TensorFlow compute graph. Parameterized quantum models can be selected from several broad categories based on knowledge of the quantum data’s structure. The goal of the model is to perform quantum processing in order to extract information hidden in a typically entangled state. In other words, the quantum model essentially disentangles the input quantum data, leaving the hidden information encoded in classical correlations, thus making it accessible to local measurements and classical post-processing.

Sample or Average – Measurement of quantum states extracts classical information in the form of samples from a classical random variable. The distribution of values from this random variable generally depends on the quantum state itself and on the measured observable. As many variational algorithms depend on mean values of measurements, also known as expectation values, TFQ provides methods for averaging over several runs involving steps (1) and (2).

Evaluate a classical neural network model – Once classical information has been extracted, it is in a format amenable to further classical post-processing. As the extracted information may still be encoded in classical correlations between measured expectations, classical deep neural networks can be applied to distill such correlations.

Evaluate Cost Function – Given the results of classical post-processing, a cost function is evaluated. This could be based on how accurately the model performs the classification task if the quantum data was labeled, or other criteria if the task is unsupervised.

Evaluate Gradients & Update Parameters – After evaluating the cost function, the free parameters in the pipeline should be updated in a direction expected to decrease the cost. This is most commonly performed via gradient descent.
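
The steps above can be sketched end to end without the real TFQ/Cirq APIs. The toy below (a deliberate simplification, not TFQ code) simulates a single qubit whose state is RY(theta)|0>, so the expectation of the Z observable is cos(theta); it then runs the measure-cost-gradient-update loop of steps (3)–(6) to steer that expectation toward a target label.

```python
import numpy as np

# Conceptual sketch of the hybrid quantum-classical loop using one
# simulated qubit instead of the real TFQ/Cirq APIs. The "circuit" is
# RY(theta)|0>, whose Z expectation is cos(theta).

def expectation_z(theta: float, shots: int = 0) -> float:
    exact = np.cos(theta)
    if shots == 0:                      # step 3: exact average (infinite shots)
        return exact
    p_up = (1.0 + exact) / 2.0          # probability of measuring +1
    samples = np.where(np.random.rand(shots) < p_up, 1.0, -1.0)
    return samples.mean()               # step 3: sample-based estimate

target = -1.0                           # label for this "quantum datum"
theta, lr = 0.1, 0.2
for step in range(200):
    e = expectation_z(theta)            # steps 1-3: run circuit, average
    cost = (e - target) ** 2            # step 5: cost function
    grad = 2 * (e - target) * -np.sin(theta)  # d<Z>/dtheta = -sin(theta)
    theta -= lr * grad                  # step 6: gradient-descent update

print(round(cost, 4), round(theta % (2 * np.pi), 3))  # cost -> ~0, theta -> ~pi
```

In real TFQ the circuit, observable, and update rule are expressed as tensors and Keras training steps, and the gradients come from methods such as parameter shifts rather than the closed-form derivative used here.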

A key feature of TensorFlow Quantum is the ability to simultaneously train and execute many quantum circuits. This is achieved by TensorFlow’s ability to parallelize computation across a cluster of computers, and the ability to simulate relatively large quantum circuits on multi-core computers. To achieve the latter, we are also announcing the release of qsim (github link), a new high-performance, open-source quantum circuit simulator, which has demonstrated the ability to simulate a 32-qubit quantum circuit with a gate depth of 14 in 111 seconds on a single Google Cloud node (n1-ultramem-160) (see this paper for details). The simulator is particularly optimized for multi-core Intel processors. Combined with TFQ, we have demonstrated 1 million circuit simulations for a 20-qubit quantum circuit at a gate depth of 20 in 60 minutes on a Google Cloud node (n2-highcpu-80). See the TFQ white paper, Section II E on Quantum Circuit Simulation with qsim, for more information.

Looking Forward

Today, TensorFlow Quantum is primarily geared towards executing quantum circuits on classical quantum circuit simulators. In the future, TFQ will be able to execute quantum circuits on actual quantum processors that are supported by Cirq, including Google’s own processor Sycamore.

To learn more about TFQ, please read our white paper and visit the TensorFlow Quantum website. We believe that bridging the ML and Quantum communities will lead to exciting new discoveries across the board and accelerate the discovery of new quantum algorithms to solve the world’s most challenging problems.

Acknowledgements

This open source project is led by the Google AI Quantum team, and was co-developed by the University of Waterloo, Alphabet’s X, and Volkswagen. A special thanks to the University of Waterloo, whose students made major contributions to this open source software through multiple internship projects at the Google AI Quantum lab.