Apply now for FinQ Tech’s Quantum Computing Hackathon Team

Unleash Your Quantum Potential

We are excited to announce that FinQ Tech, a leading non-profit organization dedicated to fostering the growth and development of the quantum computing community, is inviting applications for its Quantum Computing Hackathon Team. As a member of this team, you will have the opportunity to participate in various quantum computing hackathons, benefit from the guidance of industry professionals, and collaborate with like-minded peers.

If you are passionate about quantum computing and eager to showcase your skills and ideas in a competitive environment, we encourage you to apply!

Why Join the FinQ Tech Quantum Computing Hackathon Team?

  1. Participate in multiple high-profile quantum computing hackathons, such as QHack (Xanadu), iQuHACK (MIT), NISQ Quantum Hackathon (DoraHacks), and unitaryHACK (Unitary Fund).
  2. Receive mentorship from experienced professionals working at renowned quantum startups and companies, such as IonQ, AWS, and JPMC.
  3. Gain access to exclusive resources, including cutting-edge quantum computing hardware and software.
  4. Enhance your skills and expand your knowledge in the rapidly evolving field of quantum computing.
  5. Network with fellow quantum computing enthusiasts and professionals.

Eligibility Criteria

  1. Demonstrated interest and experience in quantum computing, including relevant coursework, projects, or research.
  2. Strong programming skills, preferably in Python or other languages commonly used in quantum software development.
  3. Excellent problem-solving abilities and a collaborative mindset.
  4. Availability to commit to the team and actively participate in hackathons and team meetings.

How to Apply:

Please submit your application to info@finq.tech by Apr 30, 2023. Your application should include:

  1. A current CV/Resume, highlighting your relevant skills, experiences, and achievements.
  2. A cover letter (max. 300 words) describing your interest in quantum computing, what you hope to gain from being part of the team, and any previous hackathon experience (if applicable).
  3. If you would like to serve as a hackathon team leader, please say so in your cover letter. The leader will be responsible for making development decisions, assigning tasks, and convening meetings.

We look forward to receiving your application and to potentially welcoming you to the FinQ Tech Quantum Computing Hackathon Team. For any questions or inquiries, please also contact us at info@finq.tech.


Researchers Claim They Developed a Room-Temperature Superconductor

Insider Brief

A team of South Korean researchers reports a room-temperature superconductor in a paper posted to the preprint server arXiv.

The superconductor is based on a modified lead-apatite structure.

Previous claims of room-temperature superconductors have not held up to scientific scrutiny, so this work has a long research journey ahead.

While the work has yet to be peer-reviewed and will likely face a great deal of scrutiny, a team of scientists is reporting on the preprint server arXiv that they have achieved room-temperature superconductivity in a modified lead-apatite structure dubbed LK-99.

According to the paper, LK-99 exhibits superconductivity at ambient pressure with a critical temperature greater than or equal to 400 K (about 127°C).

The researchers demonstrated LK-99’s superconducting properties through various key parameters, including zero-resistivity, critical current (Ic), critical magnetic field (Hc) and the Meissner effect. Unlike previous attempts, the scientists said that LK-99’s superconductivity arises from a minute structural distortion caused by a slight volume shrinkage of 0.48%. This distortion is induced by the substitution of Cu2+ ions for Pb2+(2) ions in the insulating network of Pb(2)-phosphate, generating internal stress.

The stress then transfers to Pb(1) of the cylindrical column, resulting in the distortion of the cylindrical column interface. The team said that this unique phenomenon creates superconducting quantum wells (SQWs) within the interface, contributing to LK-99’s superconducting capabilities.

Heat capacity measurements provided supporting evidence for the proposed model, reinforcing LK-99’s ability to maintain its superconducting state at room temperatures and ambient pressure, the researchers report.

The researchers include Sukbae Lee, CEO of Quantum Energy Research Centre; Ji-Hoon Kim, also of Quantum Energy Research Centre; and Young-Wan Kwon of the KU-KIST Graduate School of Converging Science and Technology.

In 2011, Lee withdrew a patent for a phase-transitional material.

A Holy Grail?

Room-temperature superconductivity is important because it has the potential to revolutionize multiple aspects of science and technology. One of the most significant advantages of room-temperature superconductors is the unprecedented energy efficiency they offer. Traditional superconductors require extremely low temperatures to function, making their practical applications limited and energy-intensive. With room-temperature superconductors, however, power transmission and distribution systems would experience minimal energy losses thanks to virtually zero electrical resistance.

Additionally, the advent of room-temperature superconductivity could pave the way for groundbreaking advancements in transportation, such as high-speed trains that consume far less energy. Moreover, superconducting materials could be utilized in energy storage devices, enabling highly efficient and compact solutions for grid-scale storage and portable electronics.

Quantum computing would be a direct beneficiary of this work. With room-temperature superconductivity, quantum computing could become more practical and accessible. Most quantum computers now operate at ultra-low temperatures, approaching absolute zero, to minimize noise. This requirement for extreme cooling is not only technically challenging and costly but also limits the scalability of quantum computing systems. Room-temperature superconductors, with their ability to conduct electricity without resistance at ambient temperatures, could provide a stable and controlled environment for qubits without the need for elaborate cooling systems.

Next Steps

While this discovery sounds promising, it is important to approach the research with caution. Before it can be widely accepted scientifically, further rigorous and independent verification is needed. The scientific community must replicate the experiments and results to confirm the reproducibility and reliability of the findings.

Additionally, researchers need to conduct extensive studies to understand the fundamental mechanisms behind room-temperature superconductivity in LK-99. Exploring potential limitations and challenges, such as the stability and longevity of the superconducting state, is critical to assess the material’s practical applicability.

Peer review and scrutiny from experts in the field are also helpful in validating the claims made in the research.

It is also crucial to investigate LK-99’s scalability and manufacturability for potential real-world applications. Assessing the cost, availability, and environmental impact of the materials used in its synthesis will be needed to determine whether this approach can scale.


Google Achieves Quantum Advantage: Completing a Computational Task 1.2 Million Times Faster with 70-Qubit Quantum Computer

Google scientists have achieved quantum advantage by completing a computational task in 200 seconds that would take a classical supercomputer 47 years. The task was carried out on the latest version of Google’s quantum computer, which features 70 qubits. This milestone highlights the potential power of quantum computing, although doubts about practicality remain.  


FinQ Tech’s Successful Contribution to Unitary Hack 2023

We are delighted to share our journey as the sponsor of a team at Unitary Hack 2023, a fantastic global event that encourages people to contribute to the open source quantum ecosystem. The hackathon ran from May 26 to June 13, and our team made a significant contribution by adding a Variational Quantum Time Evolution tutorial to the IBM Qiskit platform.

The Challenge

The challenge was to create a tutorial for the qiskit.algorithms package, specifically demonstrating how to use the new time_evolvers.variational package. The tutorial needed to introduce variational quantum imaginary- and real-time evolution based on McLachlan’s variational principle. It also had to show how to leverage these principles using Qiskit classes, and to benchmark the default gradient/QGT (quantum geometric tensor) methods against the new classically efficient gradients introduced in the qiskit.algorithms.gradients.reverse_gradient package.
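The imaginary-time branch of McLachlan's principle can be illustrated without any quantum SDK. Below is a toy sketch (not the Qiskit tutorial itself) for a single-parameter ansatz |ψ(θ)⟩ = cos(θ/2)|0⟩ + sin(θ/2)|1⟩ under the Hamiltonian H = Z, where McLachlan's equations reduce to the scalar ODE A·dθ/dτ = C with A = 1/4 and C = (1/2)·sin θ; integrating it drives the state toward the ground state |1⟩.

```python
import math

# Toy McLachlan variational imaginary-time evolution (VarQITE) for the
# one-parameter ansatz |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>
# and Hamiltonian H = Z. The ground state is |1>, with energy -1.
#
# For this ansatz, McLachlan's principle reduces to A * dtheta/dtau = C:
#   A = Re<d_theta psi | d_theta psi> = 1/4
#   C = -Re<d_theta psi | H | psi>   = (1/2) * sin(theta)

def energy(theta):
    # <psi|Z|psi> = cos^2(theta/2) - sin^2(theta/2) = cos(theta)
    return math.cos(theta)

def varqite(theta0=0.1, dtau=0.05, steps=400):
    theta = theta0
    A = 0.25
    for _ in range(steps):
        C = 0.5 * math.sin(theta)
        theta += dtau * C / A  # Euler step of A * dtheta = C * dtau
    return theta

theta = varqite()
print(energy(theta))  # approaches the ground-state energy -1
```

In Qiskit, the tutorial's VarQITE class solves the same kind of ODE for multi-parameter circuit ansätze, with the A matrix and C vector estimated from circuit measurements rather than written down analytically.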

Our Contribution

We are proud to say that our team, composed of I-Chi (@ichen17), Kaixin (@huckstar), and Wenxiang (@wenxh0718), rose to the challenge and delivered an excellent tutorial that not only met the requirements but also provided great value to the community. The team’s pull request, titled “Add a Variational Quantum Time Evolution Tutorial (fixes #1391)” (#1470), was accepted and merged.

In the tutorial, we introduce variational quantum imaginary- and real-time evolution based on McLachlan’s variational principle, demonstrate how to leverage it using Qiskit classes, and benchmark the default gradient/QGT methods against the new classically efficient gradients introduced in the qiskit.algorithms.gradients.reverse_gradient package.

The Outcome

The outcome has been extremely positive. Our contribution was accepted, and it is now part of the IBM Qiskit tutorials. You can check out the tutorial here.

As an organization, FinQ Tech is committed to supporting quantum computing research and development. We believe events like Unitary Hack are instrumental in fostering innovation and collaboration in the quantum computing community. We are proud of our team and their contribution to the open source quantum ecosystem.

Stay tuned for more exciting updates from FinQ Tech!

NQI Advisory Committee: National Quantum Initiative Should Be ‘Reauthorized And Expanded’

Insider Brief

  • Findings from the first independent assessment of the National Quantum Initiative Act support reauthorizing and expanding the program.
  • NQI has “increased the United States’ capacity in quantum information science and technology (QIST) R&D.”
  • The report recommends that all authorized QIST programs be funded at the authorized levels.

PRESS RELEASE — The National Quantum Initiative Advisory Committee (NQIAC) published its first independent assessment of the National Quantum Initiative (NQI) program, including recommendations for enhancing the program, in a report titled Renewing the National Quantum Initiative: Recommendations for Sustaining American Leadership in Quantum Information Science.

The report identifies three findings, four overarching recommendations, and nine detailed recommendations.

Findings

  1. In its first five years, the NQI has increased the United States’ capacity in quantum information science and technology (QIST) R&D.
  2. The development of QIST is critical to U.S. economic and national security.
  3. Key scientific, engineering, and systems integration challenges remain and must be solved for the United States to realize the full economic impacts and benefits of QIST.

Overarching Recommendations

  1. To ensure U.S. leadership in QIST, the NQI Act should be reauthorized and expanded. All authorized QIST programs in the NQI Act, the CHIPS and Science Act, and other relevant legislation should be funded at the authorized levels.
  2. To ensure that the United States leads in QIST discovery, innovation, and impact, efforts should be increased to attract, educate, and develop U.S. scientists and engineers in QIST-related fields, improve and accelerate pathways for foreign QIST talent to live and work in the United States, and increase support for research collaboration with partner nations.
  3. To safeguard the security and competitiveness of U.S. advances in QIST, the United States should develop policies that thoughtfully promote and protect U.S. leadership in QIST; expand domestic center-scale and single principal investigator QIST research activities and infrastructure; and evaluate and improve the reliability of global supply chains for QIST.
  4. To realize the potential of QIST for society, the NQI must accelerate the development of valuable technologies. This goal will require new programs in engineering research and systems integration that will enable a virtuous cycle of maturing and scaling of quantum systems to useful applications through multisector partnerships and engagement with end-users.


For detailed recommendations, go here.

Review of Google’s Quantum Computing Technology State in 2023


The story begins in the spring of 2013 when Google Research announced the Quantum AI Lab. Back then, the Lab’s launch was powered by the most advanced quantum computer commercially available at the time, D-Wave Two from D-Wave Systems.

Nearly a decade has passed since — the Quantum AI Lab has achieved a lot, which we will describe in more detail below.

Google Quantum Computing Background

A pioneer in quantum computing research and development, Google is one of the world’s leading companies in this area. It was announced in 2019 that Google had achieved “quantum supremacy,” which means it had demonstrated the ability of quantum computers to perform calculations (however contrived) beyond the capabilities of classical computers. Sycamore, Google’s quantum computer, performed a calculation in 200 seconds that would have taken the world’s fastest supercomputer 10,000 years.

Google has continued to push the boundaries of quantum computing since then.

As well as developing quantum algorithms and applications for fields such as chemistry, materials science and machine learning (ML), the company is striving to develop even more powerful quantum processors. Quantum algorithms and applications can also be built on Google’s cloud-based quantum computing platform, the Google Quantum AI (QAI) platform.

Google’s Core Quantum Technology

As of 2023, there are certain milestones that Google wants to reach in order to build a commercially viable quantum computer. One was quantum advantage: performing a calculation on a quantum machine faster than any supercomputer, a goal the company reached in 2019.

This year, the company’s researchers said they had shown that a system using quantum error-correcting codes could detect and fix errors without destroying the encoded information. This was the first demonstration of a logical qubit prototype, showing that increasing the number of qubits can reduce errors.

In a company blog post from February 2023, Sundar Pichai, CEO of Google and Alphabet, reported that for the first time ever, Quantum AI researchers had demonstrated that increasing the number of qubits can reduce errors. In the result, published in Nature, a logical qubit built from more physical qubits performed better than one made from 17 physical qubits, which means the company can now operate quantum computers in a much more efficient way.

Google Quantum Offerings

Since the beginning of Google Quantum AI, the company has been making the products and tools it develops for its own research free to the public.

Here are some examples:


In 2022, Google Quantum AI released the Quantum Virtual Machine (QVM), developed to simulate the experience of programming one of the quantum computers in its lab, including circuit validation and processor fidelity.

The machine works by combining the measurements (qubit decay, dephasing, gate and readout errors) from its Sycamore processors with the models from the physics research team, which can then simulate quantum processor-like output using the company’s models.
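As a hedged illustration of how measured error rates feed such a simulation, consider the simplest channel in that mix, an asymmetric readout error; the flip rates below are invented for illustration and are not Sycamore's actual numbers.

```python
# Toy illustration of one ingredient of a noisy-device simulation:
# asymmetric readout error. Given the ideal probability p1 of reading a
# qubit as 1, and measured flip rates e01 (a 0 read as 1) and e10 (a 1
# read as 0), the simulated "device-like" probability is a simple mix.
# These rates are made up for illustration, not Sycamore measurements.

def noisy_p1(p1_ideal, e01=0.02, e10=0.05):
    return p1_ideal * (1.0 - e10) + (1.0 - p1_ideal) * e01

# An ideal |1> state no longer reads as 1 with certainty:
print(noisy_p1(1.0))  # 0.95
print(noisy_p1(0.0))  # 0.02
```

A full virtual machine layers models like this for decay, dephasing, and gate errors on top of an ideal circuit simulation, which is what makes its output resemble a real processor's.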


Next up is Python-based Cirq, a software library for writing, manipulating and optimizing quantum circuits, which are then run on quantum computers and quantum simulators. It provides useful abstractions for dealing with today’s noisy intermediate-scale quantum computers, in which the hardware details are crucial to achieving state-of-the-art results.
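For a flavor of what such circuits compute, here is a minimal sketch in plain Python (rather than Cirq itself, so it runs without the library): the classic two-gate Bell circuit, a Hadamard on qubit 0 followed by a CNOT, written as explicit state-vector updates.

```python
import math

# Minimal state-vector simulation of the two-qubit Bell circuit
# (H on qubit 0, then CNOT), the kind of circuit Cirq expresses in a few
# lines. Amplitudes are indexed |q0 q1> = |00>, |01>, |10>, |11>.

def apply_h_q0(state):
    # Hadamard on qubit 0 mixes the |0 b> and |1 b> amplitudes.
    s = 1 / math.sqrt(2)
    a, b, c, d = state
    return [s * (a + c), s * (b + d), s * (a - c), s * (b - d)]

def apply_cnot(state):
    # CNOT with control q0, target q1: swaps |10> and |11>.
    a, b, c, d = state
    return [a, b, d, c]

state = [1.0, 0.0, 0.0, 0.0]          # start in |00>
state = apply_cnot(apply_h_q0(state))  # H then CNOT
print(state)  # [0.707..., 0.0, 0.0, 0.707...], the Bell state
```

In Cirq the same circuit takes a few lines: create two qubits, append cirq.H and cirq.CNOT to a cirq.Circuit, and run it with cirq.Simulator().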


Another area of interest to the company is in testing post-quantum cryptography (PQC) algorithms. Having worked with the security community for over a decade, Google has been exploring options beyond theoretical implementations for PQC algorithms.

A post-quantum experiment with Cloudflare was announced in 2019. Using Cloudflare’s TLS stack, Google implemented two post-quantum key exchanges and deployed them on edge servers and Chrome Canary clients. The experiment gave the company insight into the performance and feasibility of the two post-quantum key agreements in TLS, and Google folded the results into its technology roadmap. Two years later, Google researchers tested a range of network products and found that some were incompatible with post-quantum TLS, and therefore with post-quantum confidentiality. Experimenting early helped prevent this issue from arising in the future.
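The key exchanges in these experiments were "hybrid": a classical and a post-quantum key agreement run side by side, and the session key is derived from both shared secrets, so the connection stays secure as long as either scheme holds up. Below is a minimal sketch of the combining step only; the secret values are placeholders, the real deployments paired X25519 with a post-quantum KEM, and TLS uses an HKDF-based key schedule rather than a bare hash.

```python
import hashlib

# Sketch of the "hybrid" idea behind post-quantum TLS experiments:
# derive the session key from BOTH shared secrets, so breaking only one
# of the two key-exchange schemes does not reveal the key. The byte
# strings below are placeholders, not real key-exchange outputs.

def combine_secrets(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenate-then-hash is the simplest combiner; real TLS runs an
    # HKDF-based key schedule over the concatenated secrets instead.
    return hashlib.sha256(classical_secret + pq_secret).digest()

key = combine_secrets(b"classical-ecdh-secret", b"post-quantum-kem-secret")
print(key.hex())
```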

Currently, the company is implementing PQC that addresses both immediate and long-term risks of protecting sensitive information and ensuring that Google is PQC-ready.

Who Are Google’s Main Competitors in Quantum Computing?

In the field of quantum computing, Google’s main competitors are the likes of IBM, Microsoft, Amazon and Intel insofar as they are tech giants who have an interest in quantum. In reality, however, we see a significant amount of collaboration across the ecosystem.

In quantum computing, IBM has been a pioneer for some years. Now, IBM Quantum offers access to its own quantum processors through its cloud-based platform.

Microsoft offers Azure Quantum, the company’s cloud-based platform that provides access to its quantum hardware.

Another is AWS (Amazon) with Amazon Braket, a fully managed quantum computing service that lets customers access hardware from a variety of vendors, including D-Wave, IonQ and Rigetti. In a blog post from April 2021, AWS released a paper, Building a fault-tolerant quantum computer using concatenated cat codes, written by a team at the AWS Center for Quantum Computing. The paper describes an architecture that combines elements of active quantum error correction (QEC) and passive, or autonomous, QEC. In it, the researchers argue that cat qubits based on superconducting circuits are feasible and that their highly biased error rates can be exploited in designing the additional quantum error correction circuits.

Finally, there is D-Wave, a Canadian company that specializes in quantum annealing, a type of quantum computing. The company has developed its own quantum annealing processors, which are available through its cloud-based platform Leap.

How is Google’s Quantum Funded?

Google’s quantum effort is primarily funded from the company’s own balance sheet. In addition to its own resources, Google receives grants from government agencies like the National Science Foundation and partners with academic institutions in order to conduct quantum research.

Who Are Google’s Most Significant Quantum Customers?

To advance quantum research, Google has also collaborated with government agencies and academic institutions. CERN, NASA and the University of California, Santa Barbara are some of Google’s prominent partners in quantum computing.

Google’s Quantum Partnerships

In recent years, and as part of its research and development efforts, Google has sought out partnerships in quantum computing with several institutions/companies, some of which we will highlight below:

Google is exploring quantum computing’s potential in financial services with J.P. Morgan. The partnership aims to develop new algorithms for portfolio optimization, risk analysis and fraud detection.

The European Organization for Nuclear Research (CERN) and Google announced a partnership in 2021 to collaborate on quantum computing. In order to solve some of the most challenging problems in particle physics, the partnership aims to develop new algorithms and tools.

With the goal of exploring quantum computing’s potential for aerospace applications, Google has partnered with Airbus, too. As part of the collaboration, algorithms are being developed for flight optimization, scheduling and routing, as well as for improving maintenance and safety on aircraft.

Another to note is Google’s collaboration with Volkswagen, designed to explore ways to use quantum computing in the automotive industry to enhance battery performance, optimize traffic flow and develop new materials for electric vehicles.

Where is Google Heading in the Years To Come?

As one of the leaders in quantum computing, Google has been heavily investing in the technology. As Google continues to explore quantum computing applications in areas such as optimization, cryptography and ML, its quantum computing efforts are likely to focus on developing more powerful quantum processors and algorithms. The Google quantum computing team has already achieved several significant milestones, such as demonstrating quantum supremacy, or the ability for a quantum computer to perform computations that are practically impossible for classical computers. A 53-qubit Sycamore processor developed by Google enabled this achievement.

Google is likely to make further advances in quantum computing in the future by developing even more powerful quantum processors and solving even more complex problems. As well as exploring new applications for quantum computing, the company is likely to explore drug discovery, materials science and financial modelling as these seem to be areas where quantum computing can help the most. For more detail on this, check out the company’s “Our quantum computing journey”, which explains Google’s quantum roadmap in simple terms.

Before quantum computing is widely used in practical applications, many challenges must still be overcome, and companies like Google and others are making significant advances on them.


As we have already mentioned, Google has already achieved a key milestone by using more qubits to reduce the error rate of quantum calculations for the first time, according to its researchers, so this is obviously a step in the right direction.


In a story reported by The Quantum Insider last year, research conducted by the Institute of Theoretical Physics at the Chinese Academy of Sciences, led by statistical physicist Pan Zhang, showed that the team beat Google’s quantum computer Sycamore at the computational task examined in Google’s 2019 study.

Google claimed back then that a supercomputer would take about 10,000 years to complete the task; the Chinese researchers completed it on a supercomputer in about 15 hours.

To solve the problem, Zhang and his team approached it from a slightly different angle, using a 3D tensor network with twenty layers, each layer holding 53 dots, one for each qubit. The team connected the dots to represent the gates, with a tensor encoding each gate. According to Nature, multiplying the tensors together was all the team needed to run the simulation.

Although this takes nothing away from Google’s achievement four years ago, Zhang and colleagues expected classical algorithms to improve, since the original experiment was designed to exploit quantum computers’ strengths and classical computers’ weaknesses, and they expect algorithms specifically tailored to this calculation to be developed eventually.

The researchers also said that, as quantum computers improve, a slight gain in performance might soon put them back ahead of classical supercomputers, and even of the improved classical algorithms.

So there is hope. Yet, the research showed that no technology is foolproof.

Google Quantum Computing Key Takeaways

Humanity is on the verge of creating a useful, error-corrected quantum computer, whether today, tomorrow or fifty years in the future. There will be obstacles on the way to error correction; that’s a given.

From Richard Feynman’s idea, more than forty years ago, of building universal quantum computers to simulate other quantum systems, to the state of play in our industry today, the field has come a long way. What Google achieved in 2019 showed that quantum computing is a practical reality rather than a theoretical possibility, and it has led us into a new era of computing: the Noisy Intermediate-Scale Quantum (NISQ) era.

Google — with its own hard work and investment and collaborations with universities and research institutions — is developing theories and algorithms that we can apply to error-corrected computers in the future.

When we get to that place, Google could be one of the first.

How quantum computing could transform everything everywhere, but not all at once

Quantum computing could change our perspective on the cosmos. (Illustration: Harmonia Macrocosmica, 1660 / Microsoft, 2022 / Alan Boyle)

What does quantum computing have in common with the Oscar-winning movie “Everything Everywhere All at Once”? One is a mind-blowing work of fiction, while the other is an emerging frontier in computer science — but both of them deal with rearrangements of particles in superposition that don’t match our usual view of reality.

Fortunately, theoretical physicist Michio Kaku has provided a guidebook to the real-life frontier, titled “Quantum Supremacy: How the Quantum Computer Revolution Will Change Everything.”

“We’re talking about the next generation of computers that are going to replace digital computers,” Kaku says in the latest episode of the Fiction Science podcast. “Today, for example, we don’t use the abacus anymore in Asia. … In the future, we’ll view digital computers like we view the abacus: old-fashioned, obsolete. This is for the garbage can. That’s how the future is going to evolve.”

Computer scientists might take issue with Kaku’s digital doomsaying — but there’s little doubt that quantum computers will transform the field as much as artificial intelligence is transforming it today.

“Quantum computing could very well revolutionize what an Amazon Web Services or Microsoft Azure will want to do for the world in terms of computing,” says Louis Terminello, associate laboratory director for physical and computational sciences at the U.S. Department of Energy’s Pacific Northwest National Laboratory.

Kaku’s assessment of the potential impact goes a lot further: In his view, any problem that involves sifting through a multiverse worth of possibilities will become more solvable once the quantum revolution takes hold. Energy generation and storage, food production, climate modeling, disease treatment and genetic repair are all potential targets for quantum supremacy.

Why is that? In contrast to the rigid one-or-zero approach that serves as the foundation of classical computing, quantum computers would take advantage of the fact that quantum bits — better known as qubits — can represent multiple states when information is processed.

“Quantum computers, in principle, are infinitely more powerful than a digital computer that computes on zeros and ones, zeros and ones, because quantum computers are quantum mechanical,” he says. “The atom can spin in any direction. How many directions are there? An infinite number of directions.”

Tech titans haven’t yet settled on the best basis for quantum computing: Amazon, Google and IBM use superconducting circuits in their hardware. IonQ — which is creating a research and manufacturing facility in the Seattle area — favors a technology based on trapped ions. Other companies are taking advantage of the quantum properties of photons, or defects in silicon lattices. And Microsoft is placing its bets on topological superconducting nanowires.

Which technology will win out? Kaku says it’s too early to tell.

“How many quantum computer architectures are possible? An infinite number of them,” he says. “Now, of course, only a handful of them are practical and economical. But the point I’m raising is that Mother Nature has already devised millions of quantum mechanical systems, and we’re playing catch-up to Mother Nature. And so I think that one day, one or a handful of these technologies will dominate the whole field, but we’re not sure yet.”

“Quantum Supremacy: How the Quantum Computer Revolution Will Change Everything,” by Michio Kaku (Doubleday / Penguin Random House)

Even though full-fledged quantum computers aren’t yet ready for prime time, researchers are already trying to figure out how to simulate the quantum mechanisms behind important biological processes such as photosynthesis and nitrogen fixation. Coming up with new molecular methods to perform those tasks could be worth billions of dollars.

“About 1% or so of the world’s energy goes to the process to refine nitrogen in the air to create fertilizer,” Kaku says. “But it’s very wasteful. … We need a quantum mechanical Green Revolution.”

On the energy frontier, quantum computers could help engineers design better reactors for generating fusion power — and help chemists design new types of materials for solar cells and batteries.

Kaku says chemistry is a prime target for the quantum revolution.

“Chemists who do not use quantum computers to model chemical reactions will go bankrupt,” he says. “They’ll be out of a job. They’ll be replaced by chemists who do use quantum computers. This means all medicine. All medicine can eventually be reduced to a quantum computer.”

Once quantum computers take hold, researchers could design synthetic molecules for medicines that address specific maladies.

“How do we find new drugs today? Trial and error,” Kaku says. “We have thousands of Petri dishes with chemicals in them. We tediously see whether or not they have any antibiotic properties. Why not do that in the memory of a quantum computer?”

Quantum calculations could also direct the course of gene-editing therapies with the potential of heading off diseases before they arise — an application that raises hopes as well as ethical concerns.

“Any discipline that requires the use of molecules and atoms can be helped by the quantum revolution, including cancer research, aging. Why do we die? Think about it for a moment: There are zero laws of physics that say that we have to die,” Kaku says.

Doesn’t immortality run counter to the Second Law of Thermodynamics? “If I have an open system and I use quantum computers to add extra energy from outside, I can begin the process of stopping the aging process,” Kaku says. “Think about that: the possibility of extending the human lifespan by reducing the buildup of errors in our DNA. … The applications are endless.”

He’s even hoping that next-generation computing will help him solve the mysteries of string theory and reveal the so-called Theory of Everything, which Kaku calls the God Equation. That hope is what led him to write “Quantum Supremacy” in the first place.

Kaku has been working on string theory for decades, and he’s the author of one of the leading textbooks about it. But he says the theory is “so complicated, with so many resonances, that the human mind has not been able to solve string theory.”

“What a frustrating thing,” he says. “So I said to myself, wait a minute. String theory is a quantum theory, like the atom. Why not use quantum computers to solve a quantum problem?”

By now, you’ve probably gotten the message that Kaku is bullish on the quantum revolution. Is he willing to admit there’s something that quantum computers can’t do? Yes, as a matter of fact.

If a movie like “Everything Everywhere All at Once” makes it look as if you can slip back and forth between quantum universes, Kaku says you should know that’s pure fiction. “It doesn’t work that way,” he says. “It turns out that it takes an enormous amount of energy and time to go between universes. So, believe it or not, it may be possible to go between universes, but it’s not for us.”

In other words, not even the quantum computer revolution can change everything everywhere all at once.


Five Worlds of AI (a joint post with Boaz Barak)

Artificial intelligence has made incredible progress in the last decade, but in one crucial aspect, it still lags behind the theoretical computer science of the 1990s: namely, there is no essay describing five potential worlds that we could live in and giving each one of them whimsical names.  In other words, no one has done for AI what Russell Impagliazzo did for complexity theory in 1995, when he defined the five worlds Algorithmica, Heuristica, Pessiland, Minicrypt, and Cryptomania, corresponding to five possible resolutions of the P vs. NP problem along with the central unsolved problems of cryptography.

In this blog post, we—Scott and Boaz—aim to remedy this gap. Specifically, we consider 5 possible scenarios for how AI will evolve in the future.  (Incidentally, it was at a 2009 workshop devoted to Impagliazzo’s five worlds co-organized by Boaz that Scott met his now wife, complexity theorist Dana Moshkovitz.  We hope civilization will continue for long enough that someone in the future could meet their soulmate, or neuron-mate, at a future workshop about our five worlds.)

Like Impagliazzo’s 1995 paper on the five potential worlds of the difficulty of NP problems, we will not try to be exhaustive but rather concentrate on extreme cases.  It’s possible that we’ll end up in a mixture of worlds or a situation not described by any of the worlds.  Indeed, one crucial difference between our setting and Impagliazzo’s is that in the complexity case, the worlds corresponded to concrete (and mutually exclusive) mathematical conjectures.  So in some sense, the question wasn’t “which world will we live in?” but “which world have we Platonically always lived in, without knowing it?”  In contrast, the impact of AI will be a complex mix of mathematical bounds, computational capabilities, human discoveries, and social and legal issues. Hence, the worlds we describe depend on more than just the fundamental capabilities and limitations of artificial intelligence, and humanity could also shift from one of these worlds to another over time.

Without further ado, we name our five worlds “AI-Fizzle,” “Futurama,” “AI-Dystopia,” “Singularia,” and “Paperclipalypse.”  In this essay, we don’t try to assign probabilities to these scenarios; we merely sketch their assumptions and technical and social consequences. We hope that by making assumptions explicit, we can help ground the debate on the various risks around AI.

AI-Fizzle. In this scenario, AI “runs out of steam” fairly soon. AI still has a significant impact on the world (so it’s not the same as a “cryptocurrency fizzle”), but relative to current expectations, this would be considered a disappointment.  Rather than the industrial or computer revolutions, AI might be compared in this case to nuclear power: people were initially thrilled about the seemingly limitless potential, but decades later, that potential remains mostly unrealized.  With nuclear power, though, many would argue that the potential went unrealized mostly for sociopolitical rather than technical reasons.  Could AI also fizzle by political fiat?

Regardless of the answer, another possibility is that costs (in data and computation) scale up so rapidly as a function of performance and reliability that AI is not cost-effective to apply in many domains. That is, it could be that for most jobs, humans will still be more reliable and energy-efficient (we don’t normally think of low wattage as being key to human specialness, but it might turn out that way!).  So, like nuclear fusion, an AI which yields dramatically more value than the resources needed to build and deploy it might always remain a couple of decades in the future.  In this scenario, AI would replace and enhance some fraction of human jobs and improve productivity, but the 21st century would not be the “century of AI,” and AI’s impact on society would be limited for both good and bad.

Futurama. In this scenario, AI unleashes a revolution that’s entirely comparable to the scientific, industrial, or information revolutions (but “merely” those).  AI systems grow significantly in capabilities and perform many of the tasks currently performed by human experts at a small fraction of the cost, in some domains superhumanly.  However, AI systems are still used as tools by humans, and except for a few fringe thinkers, no one treats them as sentient.  AI easily passes the Turing test, can prove hard theorems, and can generate entertaining content (as well as deepfakes). But humanity gets used to that, just like we got used to computers creaming us in chess, translating text, and generating special effects in movies.  Most people no more feel inferior to their AI than they feel inferior to their car because it runs faster.  In this scenario, people will likely anthropomorphize AI less over time (as happened with digital computers themselves).  In “Futurama,” AI will, like any revolutionary technology, be used for both good and bad.  But as with prior major technological revolutions, on the whole, AI will have a large positive impact on humanity. AI will be used to reduce poverty and ensure that more of humanity has access to food, healthcare, education, and economic opportunities. In “Futurama,” AI systems will sometimes cause harm, but the vast majority of these failures will be due to human negligence or maliciousness.  Some AI systems might be so complex that it would be best to model them as potentially behaving  “adversarially,” and part of the practice of deploying AIs responsibly would be to ensure an “operating envelope” that limits their potential damage even under adversarial failures. 

AI-Dystopia. The technical assumptions of “AI-Dystopia” are similar to those of “Futurama,” but the upshot could hardly be more different.  Here, again, AI unleashes a revolution on the scale of the industrial or computer revolutions, but the change is markedly for the worse.  AI greatly increases the scale of surveillance by government and private corporations.  It causes massive job losses while enriching a tiny elite.  It entrenches society’s existing inequalities and biases.  And it takes away a central tool against oppression: namely, the ability of humans to refuse or subvert orders.

Interestingly, it’s even possible that the same future could be characterized as Futurama by some people and as AI-Dystopia by others–just like how some people emphasize how our current technological civilization has lifted billions out of poverty into a standard of living unprecedented in human history, while others focus on the still existing (and in some cases rising) inequalities and suffering, and consider it a neoliberal capitalist dystopia.

Singularia.  Here AI breaks out of the current paradigm, where increasing capabilities require ever-growing resources of data and computation, and no longer needs human data or human-provided hardware and energy to become stronger at an ever-increasing pace.  AIs improve their own intellectual capabilities, including by developing new science, and (whether by deliberate design or happenstance) they act as goal-oriented agents in the physical world.  They can effectively be thought of as an alien civilization–or perhaps as a new species, which is to us as we were to Homo erectus.

Fortunately, though (and again, whether by careful design or just as a byproduct of their human origins), the AIs act to us like benevolent gods and lead us to an “AI utopia.”  They solve our material problems for us, giving us unlimited abundance and presumably virtual-reality adventures of our choosing.  (Though maybe, as in The Matrix, the AIs will discover that humans need some conflict, and we will all live in a simulation of 2020’s Twitter, constantly dunking on one another…) 

Paperclipalypse.  In “Paperclipalypse” or “AI Doom,” we again think of future AIs as a superintelligent “alien race” that doesn’t need humanity for its own development.  Here, though, the AIs are either actively opposed to human existence or else indifferent to it in a way that causes our extinction as a byproduct.  In this scenario, AIs do not develop a notion of morality comparable to ours or even a notion that keeping a diversity of species and ensuring humans don’t go extinct might be useful to them in the long run.  Rather, the interaction between AI and Homo sapiens ends about the same way that the interaction between Homo sapiens and Neanderthals ended. 

In fact, the canonical depictions of such a scenario imagine an interaction that is much more abrupt than our brush with the Neanderthals. The idea is that, perhaps because they originated through some optimization procedure, AI systems will have some strong but weirdly-specific goal (a la “maximizing paperclips”), for which the continued existence of humans is, at best, a hindrance.  So the AIs quickly play out the scenarios and, in a matter of milliseconds, decide that the optimal solution is to kill all humans, taking a few extra milliseconds to make a plan for that and execute it.  If conditions are not yet ripe for executing their plan, the AIs pretend to be docile tools, as in the “Futurama” scenario, waiting for the right time to strike.  In this scenario, self-improvement happens so quickly that humans might not even notice it.  There need be no intermediate stage in which an AI “merely” kills a few thousand humans, raising 9/11-type alarm bells.

Regulations. The practical impact of AI regulations depends, in large part, on which scenarios we consider most likely.  Regulation is not terribly important in the “AI Fizzle” scenario where AI, well, fizzles.  In “Futurama,” regulations would be aimed at ensuring that on balance, AI is used more for good than for bad, and that the world doesn’t devolve into “AI Dystopia.”  The latter goal requires anti-trust and open-science regulations to ensure that power is not concentrated in a few corporations or governments.  Thus, regulations are needed to democratize AI development more than to restrict it.  This doesn’t mean that AI would be completely unregulated.  It might be treated somewhat similarly to drugs—something that can have complex effects and needs to undergo trials before mass deployment.  There would also be regulations aimed at reducing the chance of “bad actors” (whether other nations or individuals) getting access to cutting-edge AIs, but probably the bulk of the effort would be at increasing the chance of thwarting them (e.g., using AI to detect AI-generated misinformation, or using AI to harden systems against AI-aided hackers).  This is similar to how most academic experts believe cryptography should be regulated (and how it is largely regulated these days in most democratic countries): it’s a technology that can be used for both good and bad, but the cost of restricting its access to regular citizens outweighs the benefits.  However, as we do with security exploits today, we might restrict or delay public releases of AI systems to some extent.

To whatever extent we foresee “Singularia” or “Paperclipalypse,” however, regulations play a completely different role.  If we knew we were headed for “Singularia,” then presumably regulations would be superfluous, except perhaps to try to accelerate the development of AIs!  Meanwhile, if one accepts the assumptions of “Paperclipalypse,” any regulations other than the most draconian might be futile.  If, in the near future, almost anyone will be able to spend a few billion dollars to build a recursively self-improving AI that might turn into a superintelligent world-destroying agent, and moreover (unlike with nuclear weapons) they won’t need exotic materials to do so, then it’s hard to see how to forestall the apocalypse, except perhaps via a worldwide, militarily enforced agreement to “shut it all down,” as Eliezer Yudkowsky indeed now explicitly advocates.  “Ordinary” regulations could, at best, delay the end by a short amount–given the current pace of AI advances, perhaps not more than a few years.  Thus, regardless of how likely one considers this scenario, one might want to focus more on the other scenarios for methodological reasons alone!


Quantum entanglement of photons doubles microscope resolution

Using a “spooky” phenomenon of quantum physics, Caltech researchers have discovered a way to double the resolution of light microscopes.

In a paper appearing in the journal Nature Communications, a team led by Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering, shows the achievement of a leap forward in microscopy through what is known as quantum entanglement. Quantum entanglement is a phenomenon in which two particles are linked such that the state of one particle is tied to the state of the other particle regardless of whether the particles are anywhere near each other. Albert Einstein famously referred to quantum entanglement as “spooky action at a distance” because the linkage appears to act instantaneously across any distance, in apparent tension with his theory of relativity.

According to quantum theory, any type of particle can be entangled. In the case of Wang’s new microscopy technique, dubbed quantum microscopy by coincidence (QMC), the entangled particles are photons. Collectively, two entangled photons are known as a biphoton, and, importantly for Wang’s microscopy, they behave in some ways as a single particle that has double the momentum of a single photon.

Since quantum mechanics says that all particles are also waves, and that the wavelength of a wave is inversely related to the momentum of the particle, particles with larger momenta have smaller wavelengths. So, because a biphoton has double the momentum of a photon, its wavelength is half that of the individual photons.

This is key to how QMC works. A microscope can only resolve features larger than roughly half the wavelength of the light it uses. Reducing that wavelength therefore lets the microscope see smaller features, which means increased resolution.
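The arithmetic above can be sketched in a few lines. This is a toy calculation using the simplified half-wavelength resolution rule stated in the article; the function names are ours, introduced only for illustration.

```python
# Toy sketch of the biphoton wavelength/resolution arithmetic described
# above. Assumes the article's simplified rule: a microscope resolves
# features no smaller than half the wavelength of the light it uses.

def biphoton_wavelength(photon_wavelength_nm: float) -> float:
    """A biphoton carries twice the momentum of a single photon, so its
    effective wavelength is half the single-photon wavelength."""
    return photon_wavelength_nm / 2

def resolution_limit(wavelength_nm: float) -> float:
    """Smallest resolvable feature ~ half the (effective) wavelength."""
    return wavelength_nm / 2

single = 400.0  # nm: visible light, gentle on living cells
print(resolution_limit(single))                       # 200.0 nm limit
print(resolution_limit(biphoton_wavelength(single)))  # 100.0 nm limit
```

With ordinary 400-nm light the limit is 200 nm; with 400-nm biphotons, which behave like 200-nm light, the limit halves to 100 nm, matching the article's claim that resolution doubles while the cells only ever see gentle 400-nm photons.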

Quantum entanglement is not the only way to reduce the wavelength of light being used in a microscope. Green light has a shorter wavelength than red light, for example, and purple light has a shorter wavelength than green light. But due to another quirk of quantum physics, light with shorter wavelengths carries more energy. So, once you get down to light with a wavelength small enough to image tiny things, the light carries so much energy that it will damage the items being imaged, especially living things such as cells. This is why ultraviolet (UV) light, which has a very short wavelength, gives you a sunburn.

QMC gets around this limit by using biphotons that carry the lower energy of longer-wavelength photons while having the shorter wavelength of higher-energy photons.

“Cells don’t like UV light,” Wang says. “But if we can use 400-nanometer light to image the cell and achieve the effect of 200-nm light, which is UV, the cells will be happy, and we’re getting the resolution of UV.”

To achieve that, Wang’s team built an optical apparatus that shines laser light into a special kind of crystal that converts some of the photons passing through it into biphotons. Even using this special crystal, the conversion is very rare and occurs in about one in a million photons. Using a series of mirrors, lenses, and prisms, each biphoton — which actually consists of two discrete photons — is split up and shuttled along two paths, so that one of the paired photons passes through the object being imaged and the other does not. The photon passing through the object is called the signal photon, and the one that does not is called the idler photon. These photons then continue along through more optics until they reach a detector connected to a computer that builds an image of the cell based on the information carried by the signal photon. Amazingly, the paired photons remain entangled as a biphoton behaving at half the wavelength despite the presence of the object and their separate pathways.

Wang’s lab was not the first to work on this kind of biphoton imaging, but it was the first to create a viable system using the concept. “We developed what we believe is a rigorous theory as well as a faster and more accurate entanglement-measurement method. We reached microscopic resolution and imaged cells.”

While there is no theoretical limit to the number of photons that can be entangled with each other, each additional photon would further increase the momentum of the resulting multiphoton while further decreasing its wavelength.

Wang says future research could enable entanglement of even more photons, although he notes that each extra photon further reduces the probability of a successful entanglement, which, as mentioned above, is already as low as a one-in-a-million chance.

The paper describing the work, “Quantum Microscopy of Cells at the Heisenberg Limit,” appears in the April 28 issue of Nature Communications. Co-authors are Zhe He and Yide Zhang, both postdoctoral scholar research associates in medical engineering; medical engineering graduate student Xin Tong (MS ’21); and Lei Li (PhD ’19), formerly a medical engineering postdoctoral scholar and now an assistant professor of electrical and computer engineering at Rice University.

Funding for the research was provided by the Chan Zuckerberg Initiative and the National Institutes of Health.


QC WORKSHOP 24: Introduction to Variational Quantum Circuits and Quantum Neural Networks


Quantum computing has developed rapidly in recent years: from its first conceptualization in the 1980s and early hardware proofs of principle in the 2000s, quantum computers can now be built with hundreds of qubits. While the technology remains in its infancy, the fast progress of quantum hardware has led many to assert that so-called Noisy Intermediate-Scale Quantum (NISQ) devices could outperform conventional computers in the near future. In particular, the Variational Quantum Eigensolver (VQE) has been put forward as one of the most promising algorithms for NISQ devices, because it requires only a small number of qubits and shows some degree of noise resilience. VQE is typically cast as a hybrid algorithm that combines a Variational Quantum Circuit (VQC) with classical machine learning models. In this talk, we will first characterize VQC-based quantum neural networks (QNNs) on NISQ devices through theoretical and empirical study, and then investigate new applications of VQC-based QNNs to automatic speech recognition and natural language processing.
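To make the hybrid VQC loop concrete, here is a minimal sketch (our own toy example, not code from the talk): a two-qubit circuit of RY rotations and a CNOT, simulated classically in NumPy and trained with parameter-shift gradients to minimize the expectation of the toy Hamiltonian Z⊗Z — in other words, a miniature VQE.

```python
# Toy VQE with a two-qubit variational quantum circuit, simulated in NumPy.
# Hypothetical example for illustration: find the ground-state energy of
# H = Z (x) Z (which is -1) using RY rotations and a CNOT, with gradients
# computed by the parameter-shift rule and a simple classical optimizer.
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])
ZZ = np.diag([1.0, -1.0, -1.0, 1.0])  # Hamiltonian Z (x) Z

def energy(params):
    """Apply RY(t0) (x) RY(t1), then CNOT, to |00>; return <ZZ>."""
    state = np.zeros(4); state[0] = 1.0
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    state = CNOT @ state
    return float(state @ ZZ @ state)

def gradient(params):
    """Parameter-shift rule: dE/dt = (E(t + pi/2) - E(t - pi/2)) / 2."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        plus, minus = params.copy(), params.copy()
        plus[i] += np.pi / 2
        minus[i] -= np.pi / 2
        g[i] = (energy(plus) - energy(minus)) / 2
    return g

params = np.array([0.5, 0.2])   # arbitrary starting angles
for _ in range(200):            # the classical half of the hybrid loop
    params -= 0.4 * gradient(params)

print(round(energy(params), 4))  # converges toward the ground energy -1.0
```

The quantum part (here just a state-vector simulation) evaluates the circuit, while a classical optimizer updates the angles — exactly the division of labor between a VQC and its classical training loop described above.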


Dr. Jun Qi is currently a Tenure-Track Assistant Professor in the Department of Electronic Engineering at Fudan University, Shanghai, China. He received his Ph.D. from the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, in May 2022. His research focuses on quantum machine learning theory, quantum optimization algorithms, and quantum speech and NLP applications. Dr. Qi has authored and co-authored many published papers and book chapters in the fields of quantum technologies, speech recognition, and signal processing. He won first prize in the Xanadu AI Quantum Machine Learning Competition in 2019, and his ICASSP paper on quantum speech recognition was nominated as a best-paper candidate in 2022. Dr. Qi has given tutorials on quantum machine learning for speech and language processing at IJCAI’21, ICASSP’22, and ISCSLP’22.


You will learn:

  • VQC-based QNN on NISQ devices
  • Noise resilience of VQC
  • VQC-based QNN for automatic speech recognition and natural language processing

TIME & AGENDA

Oct 22, 2022 (Saturday)
9 PM (U.S. East Coast)
6 PM (U.S. West Coast)

Oct 23, 2022 (Sunday)
9 AM (Beijing Time)

This is a one-hour event: a 45-minute presentation followed by a 15-minute discussion. The presentation will be given in Mandarin.


You can JOIN THE EVENT either by using the Zoom link below or through our WeChat group, where we will post a reminder link on the day of the event.

Join Zoom Meeting

Meeting ID: 832 0443 3684
Passcode: 392721
One tap mobile
+16469313860,,87345750794#,,,,*776888# US
+19292056099,,87345750794#,,,,*776888# US (New York)

Wechat Group

Join our WeChat group; we will make an announcement when the event is about to start!


