News

12 December 2018 Hugues Vincelette
For his PhD

Quantum error correction according to Pavithran Iyer Sridharan

Definition of noise according to Merriam-Webster: irrelevant or meaningless data or output occurring along with desired information.

For Pavithran Iyer Sridharan, noise is the subject of his recently defended doctoral thesis, which earned him a PhD from the Department of Physics at the Université de Sherbrooke: “A critical analysis of quantum error correction methods for realistic noise processes”.

Born in Chennai, a city of more than 7 million people in southern India, Pavithran discovered Sherbrooke seven years ago while attending a summer school. He tells us about this first contact: “It was the first time I had left my native country, and the summer school I attended played a decisive role. I was looking for a place to continue my studies at the graduate level when I came across Professor David Poulin’s profile and discovered that he was looking for students. I was just learning quantum computing at the time and thought it was a great opportunity. I came to the summer school, I met David, and the contact went so well that I wanted to continue my studies in Sherbrooke.”

Not only had Pavithran found a university and a research supervisor, he had also discovered a city he describes as quiet and peaceful.

Simulating noise

Pavithran illustrates error correction, his research subject, with an example from everyday life: “Let us imagine a very simple scenario: you and I talking on our cell phones. As I speak, my words are converted into 1s and 0s that travel through the atmosphere and finally arrive at your device, where they are converted back into words. As the binary information passes through the atmosphere, factors beyond our control, such as thunder or lightning, can change the long chain of zeros and ones. For instance, a 1 can suddenly become a 0. The result of this error could be some undesirable word that you hear at your end. I must therefore design my technology in such a way that, despite the presence of errors, your device is still able to decipher the correct information.”

With a quantum computer, the question of error correction is all the more relevant since quantum information, unlike a chain of ones and zeros, is extremely sensitive to environmental effects and therefore is vulnerable to a phenomenally wide variety of noise.

Several schemes are being explored to cope with noise during a quantum computation. The vast majority of them rely on redundancy in one form or another; the simplest is to repeat the same operation several times and compare the outcomes. But all this redundancy adds to the size of the computer and, especially in a quantum computing context, drives up its implementation cost. Pavithran conducted his research project with the idea of lowering these costs.
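To see how redundancy buys reliability, and what it costs, a classical toy model is enough. The sketch below is a hypothetical illustration (not drawn from the thesis): it protects a single bit against random flips by sending three copies and decoding by majority vote, trading a threefold overhead for roughly an order of magnitude fewer errors.

```python
import random

def transmit(bit, p_flip):
    """Send one bit through a channel that flips it with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def transmit_with_repetition(bit, p_flip, copies=3):
    """Send several redundant copies and decode by majority vote."""
    received = [transmit(bit, p_flip) for _ in range(copies)]
    return int(sum(received) > copies // 2)

def error_rate(send, p_flip=0.05, trials=100_000):
    """Estimate how often the decoded bit differs from the bit that was sent."""
    return sum(send(0, p_flip) != 0 for _ in range(trials)) / trials

print("bare bit      :", error_rate(transmit))                  # roughly 5% errors
print("3-copy voting :", error_rate(transmit_with_repetition))  # well below 1% errors
```

Quantum error-correcting codes are built on the same intuition, except that quantum information cannot simply be copied, so the redundancy is spread across entangled qubits, and the overhead is correspondingly larger.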

Characterizing the noise

Before attempting to correct errors in a quantum computer, it is essential to characterize the noise and determine its effects on the computer. Traditionally, this step is called noise modelling: we work with a mathematical model of the noise process instead of the actual noise process. With a noise model in hand, one can tailor error-correcting protocols to a particular hardware device. These protocols are developed by performing numerical simulations that test their efficacy, which, thanks to a powerful supercomputer like Mammoth at the Université de Sherbrooke, may take only a few days.
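As a concrete (and deliberately simplified) picture of what a noise model is, the sketch below, assuming NumPy, applies the textbook single-qubit depolarizing channel to a quantum state. The models studied in practice are far richer, but the idea of a mathematical map standing in for the real noise is the same.

```python
import numpy as np

# Pauli matrices, the building blocks of many textbook noise models.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_channel(rho, p):
    """A simple noise model: with probability p, a random Pauli error hits the qubit."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rho_in = np.array([[1, 0], [0, 0]], dtype=complex)   # the qubit state |0><0|
rho_out = depolarizing_channel(rho_in, p=0.01)
print(np.round(rho_out.real, 4))                     # a slightly mixed version of |0><0|
```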

But for Pavithran, these traditional approaches, involving noise modelling and numerical simulations, have serious limitations: “Firstly, the modelling typically oversimplifies the noise process to an extent that is nowhere close to the actual scenario. Secondly, the numerical simulation technique that evaluates a proposed error-correcting protocol can itself have intrinsic errors, which can undermine our trust in the evaluation of the underlying scheme. To bypass these difficulties, we suggested calibrating a small but real quantum computer against the actual noise process to see how it would behave if it were built at large scale.”

Let us first look at the problems in noise characterization. The majority of research efforts focus on optimizing a figure of merit for a hardware device; a popular choice is known as the “fidelity”. The closer the fidelity of a device or a qubit is to 100%, the more reliable it is considered to be.

Here again, Pavithran is skeptical: “In our opinion, if noise is reduced to the sole concern of bringing just one of its parameters as close as possible to 100%, you are not reproducing the real scenario. Tell me the best fidelity you can get, and I will present you with two different noise scenarios: in one, the noise can be corrected, while in the other, error-correcting protocols do a very poor job. So, in a real scenario where any type of noise process is possible, these figures of merit give very coarse information on how well the noise can be removed from a quantum computer. In other words, a scheme that you might think is great for a specific piece of hardware might be useless for another with the same fidelity. Giving only the fidelity can be misleading. That is why we think a critical analysis of current methods for designing error-correcting protocols is so important: many people seem to be concerned only about fidelity right now. This is the central goal of the thesis. We basically showed that noise is so complex that if you use just one quantifier of noise, you are going to be misled about the effect this noise can have on a quantum computer.”

Pavithran’s findings came as an unpleasant surprise to the research community working on building quantum computers. In almost every scientific paper describing an experimental implementation of quantum operations, the authors specify how accurately each operation was realized by reporting its fidelity. This is the standard way of comparing the relative quality of different hardware. What Pavithran showed is that this fidelity is largely irrelevant to how well a quantum computer would function. Even worse, he showed that quantum noise is so complex that no matter how you quantify the quality of the hardware, that figure will be largely irrelevant to the accuracy with which a quantum computer would operate.
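The point about two noise scenarios sharing the same fidelity can be made concrete with a small numerical experiment (a generic illustration, not data from the thesis): a stochastic dephasing channel and a coherent over-rotation are tuned to have identical average fidelity per use, yet after ten repeated uses the coherent error has degraded the state far more.

```python
import numpy as np

# Two single-qubit noise processes engineered to have the SAME fidelity per use:
#   - stochastic dephasing: apply Z with probability p
#   - coherent over-rotation: always rotate by a small angle theta about Z
p = 0.01
theta = 2 * np.arccos(np.sqrt(1 - p))        # chosen so the two channels match in fidelity

Z = np.array([[1, 0], [0, -1]], dtype=complex)
U = np.array([[np.exp(-1j * theta / 2), 0],
              [0, np.exp(1j * theta / 2)]])  # Rz(theta)

def dephase(rho):
    return (1 - p) * rho + p * (Z @ rho @ Z)

def rotate(rho):
    return U @ rho @ U.conj().T

def entanglement_fidelity(kraus_ops):
    """F_e = sum_k |Tr(K_k)|^2 / d^2, comparing a channel to the identity."""
    return sum(abs(np.trace(K)) ** 2 for K in kraus_ops) / 4

print("dephasing fidelity:", entanglement_fidelity([np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z]))
print("rotation  fidelity:", entanglement_fidelity([U]))

# Same fidelity per use, very different behaviour after 10 repeated uses on |+>.
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
rho_d, rho_r = plus.copy(), plus.copy()
for _ in range(10):
    rho_d, rho_r = dephase(rho_d), rotate(rho_r)

def overlap(rho):
    return np.real(np.trace(plus @ rho))

print("error after 10 uses, stochastic:", 1 - overlap(rho_d))   # about 0.09
print("error after 10 uses, coherent  :", 1 - overlap(rho_r))   # about 0.7
```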

So, if these current figures of merit are not conclusive, what could work? Pavithran answers: “The underlying question is what figure of merit can give us a good prediction of how well a noise process can be corrected. The figure of merit could be some complex property of the noise process, mathematically speaking some function of the noise model. We performed numerical simulations of the effects of various noise processes on a quantum computer. These cases serve as examples that we ought to learn from. Following this line of thought, we used machine learning techniques that can look at the examples and pick out a property of the noise process which is crucial to determining the success of an error-correcting protocol. This new property ‘learned’ by the machine would serve as a new figure of merit. To our dissatisfaction, we did not find a good enough figure of merit.”
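The machine-learning step can be pictured along the following lines. The sketch is purely illustrative: it generates synthetic data standing in for pairs of (noise-process features, simulated failure rate) and fits a standard regressor from scikit-learn; the actual features, models and data used in the thesis are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row lists a few hypothetical properties of a
# simulated noise process; the target is the failure rate observed when an
# error-correcting protocol was simulated under that noise.
n_samples = 2000
features = rng.uniform(size=(n_samples, 4))           # e.g. infidelity, coherence, bias, ...
failure_rate = (0.7 * features[:, 0] ** 2              # toy relationship, illustration only
                + 0.3 * features[:, 0] * features[:, 2]
                + 0.02 * rng.normal(size=n_samples))

X_train, X_test, y_train, y_test = train_test_split(features, failure_rate, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("predictive score R^2:", model.score(X_test, y_test))
print("feature importances :", model.feature_importances_)
```

The learned model, or the feature it singles out as most predictive, would then play the role of the new figure of merit.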

As far as the ability to characterize noise is concerned, this means there is no single universal number to rate the quality of a hardware device or a qubit, because of the wide variety of noise processes it can be vulnerable to. Put this way, it surprises none of the experts in the field. However, the fact that fidelity is not a good predictor of how well error-correcting protocols work remains a huge surprise for many.

Numerical simulation

Let us now turn to the numerical simulations themselves. Numerically simulating noise is like trying to accurately predict a bird’s flight path: it amounts to reproducing nature, and there are so many parameters to keep track of. An interesting example: at another university, researchers observed strange fluctuations in the experimental readings of a qubit’s frequency. The source turned out to be a train passing at 3 p.m., and it took them weeks to discover it.

Numerical simulations are required to validate an error-correcting protocol. They simply reproduce many trials of the protocol, and typically a very large number of these instances (simulations) is needed. “For example, if you want to claim that a problem occurs once in a thousand times, you must do at least a thousand simulations to make sure of it. If we did only, say, 100 simulations, the problem might never show up and we would incorrectly conclude that it never occurs.” To put this in perspective, error-correcting protocols are required not to fail more than once in about 1,000 trillion trials. Just one simulation of an error-correcting protocol already takes a few milliseconds, so it would be about 30 years before we find a single failure event. If there were some way of confining our attention to only the failure events, we might be able to estimate the failure rate much more quickly. However, there is no clear way to identify when these failure events occur, so we are as good as shooting in the dark.
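The sampling arithmetic in the quote is easy to verify with a few lines of plain Monte Carlo (a generic illustration): with 100 trials, an event of probability one in a thousand is usually never observed, while 100,000 trials pin it down reasonably well.

```python
import random

def naive_failure_estimate(p_fail, trials):
    """Plain Monte Carlo: count how many of `trials` independent runs fail."""
    failures = sum(random.random() < p_fail for _ in range(trials))
    return failures / trials

p_fail = 1e-3   # a failure that occurs once in a thousand runs
print("100 trials     :", naive_failure_estimate(p_fail, 100))      # very likely 0.0
print("100,000 trials :", naive_failure_estimate(p_fail, 100_000))  # close to 1e-3

# At the rates relevant for error correction (one failure in roughly a thousand
# trillion runs, a few milliseconds per run), this brute-force approach is hopeless.
```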

The question persists: how do we identify the instances of a noise process on which an error-correcting protocol fails? “We used some tools from the statistical analysis of rare events. The relevant tool is known as importance sampling; it is often used in economics to study stock-market crashes. We adapted it to our problem of evaluating an error-correcting protocol. This is certainly not a technique that comes with a proof, but we have reason to believe that it could very well work without requiring years of calculation. The scope of importance sampling techniques in evaluating error-correcting protocols is vast and unexplored; we only studied one case in the thesis. Although there is something to gain in efficiency from our importance sampling technique, the gains are not yet substantial. Pushing these techniques further is a promising area for future research.”
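Importance sampling itself is a general statistical trick. The toy example below (assuming NumPy, and using a Gaussian tail probability as a stand-in for a decoder-failure event) shows the core idea: sample from a distribution in which the rare event is common, then reweight each sample by the likelihood ratio so that the estimate still refers to the original distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Goal: estimate p = P(X > 5) for X ~ N(0, 1), a rare event (~2.9e-7),
# with far fewer samples than the ~1/p required by naive Monte Carlo.
threshold, n = 5.0, 100_000

# Naive Monte Carlo: with only 1e5 samples we will almost surely see no event.
naive = np.mean(rng.standard_normal(n) > threshold)

# Importance sampling: draw from N(threshold, 1), where the event is common,
# then reweight each sample by the likelihood ratio of the two densities.
shifted = rng.normal(loc=threshold, scale=1.0, size=n)
weights = np.exp(-0.5 * shifted**2) / np.exp(-0.5 * (shifted - threshold)**2)
importance = np.mean((shifted > threshold) * weights)

print("naive estimate      :", naive)       # usually 0.0
print("importance sampling :", importance)  # close to 2.9e-7
```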

After reviewing the known methods, what is the conclusion? Can we correct the noise in a quantum computer? “The conventional computer is not suitable for testing a quantum computer. Because the quantum computer opens up a whole new universe of possibilities, it may take a small quantum computer to test a more powerful one.”

After the PhD?

Pavithran will pursue his research further through a postdoctoral fellowship at the Institute for Quantum Computing in Waterloo, Ontario. He is also involved in the start-up company Quantum Benchmark.

“The purpose of this undertaking is very similar to what I did as part of my PhD. We want to evaluate quantum hardware by benchmarking it, offering figures of merit that are tailored to the equipment being evaluated. The point is not so much how many qubits a computer can have, but making sure they work properly.”

We would like to congratulate Pavithran on his doctorate and wish him all the best.
