
Attacks In A Post-Quantum World

In the future, quantum computers could crack conventional encryption. That’s why programmers around the world are racing against the clock to protect our data before that day comes.

In cybersecurity circles, it’s called Q-Day: the day when quantum computers will destroy the internet. In almost everything we do online, cryptographic algorithms work constantly and mostly unnoticed in the background. These systems encrypt data to protect our privacy, verify credentials and secure payments. And they work well: Even with the best supercomputers, it would be futile to crack the codes that run the online world.

But machines that harness the intricacies of quantum physics threaten the entire system. In the future, quantum computers could crack current encryption algorithms exponentially faster than the best supercomputers. “A real quantum computer would be extremely dangerous,” warns Eric Rescorla, chief technology officer of the Firefox browser team at Mozilla in San Francisco.


The whole thing reads like a typical time-travel story: machines that don’t yet exist endanger not only our future data, but also current and past data. Some experts fear that hackers are already intercepting traffic and stockpiling encrypted information, in order to decrypt it in a few years’ time once quantum computers become available. Then they would be able to learn everything about a person, from their medical history to old bank records. “Suppose a quantum computer is developed in 2024,” explains Rescorla. “Then everything you did on the internet before 2024 is at risk.”

But even the most optimistic proponents of quantum computers stress that we still have to wait a while before the machines are powerful enough to crack current encryption techniques. Many doubt that will happen in this decade – if at all.


Still, there is a risk. As such, it only makes sense to prepare for a redesign to limit potential damage on a Q-Day. That means moving to stronger cryptographic methods. Fortunately, decades of research in theoretical computer science have produced a fair number of candidates. These post-quantum encryptions seem unassailable: Even with quantum algorithmic approaches, experts have not found a way to overcome them in a reasonable time.

Which of these algorithms wins the race could depend in large part on a decision that the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland, will announce soon. In 2016, it called on computer scientists worldwide to submit post-quantum codes to check their quality with the help of the entire crypto community.

The reason for the competition was an unusual statement by the National Security Agency (NSA) in 2015, which warned of vulnerable encryption systems and advised US companies and the government to replace them. Of the 65 programs submitted, 15 are now left. Over the next few months, NIST will select the winners and publish official versions of the algorithms. However, the US is not alone in its efforts; similar organizations in other countries will soon make their own recommendations.

However, this is just the beginning of a long process to update ciphers around the world. While the change will affect every aspect of our digital lives, it will hopefully be imperceptible to ordinary users. However, initial tests by companies like Google have not always gone smoothly. “We know how to do it; it’s just not clear if we’ll make it in time,” mathematician Peter Shor of the Massachusetts Institute of Technology in Cambridge told Nature in 2020.

Even if Q-Day never happens, the mere theoretical possibility that quantum computers could crack our codes has already shaped computer science—particularly the field of cryptography. “Most people I know are thinking about quantum-resistant programs,” said Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley.


The birth of modern cryptography


Although the digital age is still young, the history of cryptography goes back a long way: the military and spies have always been interested in transmitting messages securely, even when a channel (whether a carrier pigeon or a radio link) is being eavesdropped on. Encryption makes this possible. Until the 1970s, however, both parties had to agree on a shared secret key in advance.

But in 1976, the three US computer scientists Whitfield Diffie, Martin Hellman and Ralph Merkle developed the revolutionary concept of public key cryptography, which allows information to be exchanged securely without prior consultation. The idea is based on a mathematical trick with two numbers: one corresponds to the public key and is used to encode a text, the other is the private key that is needed for decryption.

In order to receive confidential messages, one can reveal one’s public key, for example by printing it in a newspaper. Anyone can use this code to encrypt something and then publish it, but only the recipient has the private key to read the information.

In practice, however, the public-key method is not used to encode the data directly, but to securely exchange a conventional symmetric key. Both parties can then use this key to send confidential content back and forth. From the mid-1990s, the most widely used public-key algorithm was the RSA method, named after its inventors Ron Rivest, Adi Shamir, and Leonard Adleman.

The method is based on prime numbers that are only divisible by one and themselves. The public key is the product of at least two prime numbers (say, 221 = 13 × 17), with only one party knowing the factors (13 and 17) that make up the private key. The protection is provided by the fact that while multiplying two large numbers is easy, it is extremely difficult to factor any given result into its prime factors.
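The trapdoor can be made concrete with the article’s toy numbers. The following sketch uses deliberately tiny primes and an arbitrarily chosen public exponent; real RSA keys use primes hundreds of digits long.

```python
from math import gcd

# Toy RSA with the primes from the text -- never use numbers this small in practice.
p, q = 13, 17
n = p * q                  # 221: the public modulus, the product of the two primes
phi = (p - 1) * (q - 1)    # 192: known only to whoever knows the factors

e = 5                      # public exponent, chosen coprime to phi for this example
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public pair (e, n)
decrypted = pow(ciphertext, d, n)  # only the holder of d can undo it
assert decrypted == message
```

An eavesdropper who could factor 221 back into 13 × 17 could recompute `d`; for the giant moduli used in practice, no classical algorithm is known that does this in reasonable time.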


»A real quantum computer would be extremely dangerous«

Eric Rescorla, Chief Technology Officer of the Firefox browser team


Recently, people have moved away from the RSA method, which is now even susceptible to classic attacks (i.e. without quantum computers). In 2018, the consensus-based virtual organization Internet Engineering Task Force (IETF), which controls the adoption of security standards on a global scale, proposed another public-key system as a replacement: elliptic curve cryptography. It is based on a branch of 19th-century geometry dealing with elliptic curves.

This encryption technique involves repeatedly adding a point on an elliptic curve to itself, n times in all (so-called scalar multiplication). Only one party knows the value of n, which is the private key. Computing the repeated sum is easy, but given the resulting point, figuring out what n was is extremely difficult. As it turned out, this method is faster and more secure than RSA.
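A minimal sketch of that operation, using a tiny textbook curve over the integers modulo 17 (the curve, base point, and private key below are illustrative; real deployments use curves over fields of roughly 256 bits):

```python
# Toy elliptic-curve scalar multiplication over a tiny prime field.
# Curve: y^2 = x^3 + 2x + 2 (mod 17) -- far too small for real use.
P_MOD, A = 17, 2

def ec_add(P, Q):
    """Add two points on the curve (None represents the point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                              # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD          # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(n, P):
    """Compute n*P (P added to itself n times) by double-and-add -- fast even for huge n."""
    result, addend = None, P
    while n:
        if n & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        n >>= 1
    return result

G = (5, 1)                        # public base point on the curve
n = 7                             # private key
public_point = scalar_mult(n, G)  # easy to compute; recovering n from it is the hard part
```

On this toy curve an attacker could simply try every n, but at realistic sizes no efficient classical method is known for recovering n from `public_point`: that is the elliptic-curve discrete logarithm problem.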

All sorts of digital devices, from cell phones to cars, use public-key encryption to connect. The technology has even spread beyond the Internet: elliptic curve algorithms are used in credit card chips and security badges.


Attack of the quantum computers

With that, the online world seemed secure. But just as the number of Internet users worldwide – and thus the use of public-key methods – began to grow exponentially, Shor, then at AT&T Bell Laboratories in Murray Hill, New Jersey, laid the foundation for the demise of these algorithms. In 1994, he proved that future quantum computers could factor large numbers into their prime factors much faster than conventional computers – and a variant of Shor’s algorithm can also efficiently crack an elliptic curve key.

Shor had thus shown that quantum computers can also solve practical problems. At the time, it was largely a theoretical result, as the devices were a long way off. However, in the same decade, IBM researchers were the first to implement quantum calculations by manipulating molecules in strong magnetic fields. In 2001 they even showed that the system can run Shor’s algorithm – but only for small numbers. They calculated the prime factors of 15, i.e. 3 and 5. Since then, enormous progress has been made in the field, but Shor’s algorithm for large numbers is still a long way off.
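The number-theoretic reduction at the heart of Shor’s algorithm can be tried classically on the same number the IBM experiment factored. Only the order-finding step, brute-forced below, is what a quantum computer speeds up exponentially; the rest is ordinary arithmetic:

```python
from math import gcd

def find_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N). This is the step a quantum
    computer performs exponentially faster; here we simply brute-force it."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical post-processing from Shor's algorithm: turn the order of a
    modulo N into a nontrivial factor of N. Works when r is even and
    a^(r/2) is not congruent to -1 mod N; otherwise one retries with another a."""
    assert gcd(a, N) == 1
    r = find_order(a, N)
    if r % 2 != 0:
        return None
    y = pow(a, r // 2, N)
    f = gcd(y - 1, N)
    return f if 1 < f < N else None

# Factor 15 with a = 7: the order of 7 mod 15 is 4, and
# gcd(7^2 - 1, 15) = gcd(48, 15) = 3 reveals a prime factor.
print(shor_classical(15, 7))
```

For 15 the brute-force loop runs four times; for a 2048-bit RSA modulus it would run for longer than the age of the universe, which is exactly the gap the quantum order-finding step closes.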

Nevertheless, after Shor’s breakthrough, the research world of cryptography increasingly dealt with the possibility of a Q-Day. Experts had already looked into alternative public-key methods, and the news attracted a lot of talent to the field, says Goldwasser.


Lattice systems as protection


Most of the algorithms that made the NIST final list draw directly or indirectly on a branch of cryptography that was developed in the 1990s using mathematical lattices. These are sets of points located at the intersections of straight lines. The points can be added together like the vectors taught in school; some can also be decomposed into sums of smaller vectors.

This sounds easy at first, but when the lattice has many dimensions, say 500, such calculations become very time-consuming. It’s like prime numbers: if you know the shortest vectors that span the lattice, you can use them as a private key, while for outsiders it is very difficult to determine them.
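The idea can be made concrete in two dimensions, where the same lattice is described by a short “good” basis and a scrambled “bad” one. The vectors below are illustrative; note that in so few dimensions recovering the short basis is easy, which is why real schemes use hundreds of dimensions.

```python
# Both bases generate the same 2-D lattice (here: all integer points),
# but only the short "good" basis makes the shortest vectors visible.
good = ((1, 0), (0, 1))     # private key: short, nearly orthogonal vectors
bad  = ((5, 4), (4, 3))     # public description: same lattice, short vectors hidden

def combine(basis, i, j):
    """Integer combination i*b1 + j*b2 -- how every lattice point is formed."""
    (x1, y1), (x2, y2) = basis
    return (i * x1 + j * x2, i * y1 + j * y2)

# Each basis expresses the other's vectors with integer coefficients,
# so both describe exactly the same set of points.
assert combine(good, 5, 4) == bad[0] and combine(good, 4, 3) == bad[1]
assert combine(bad, -3, 4) == good[0] and combine(bad, 4, -5) == good[1]
```

In 500 dimensions, turning a “bad” basis back into a short one is the computational problem that, as far as anyone knows, even quantum computers cannot solve efficiently.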

Since the 1990s, researchers have designed numerous public-key schemes that either work directly with lattices or are related to them in some way. The usual way to show that a cryptographic method is trustworthy is to show that it’s at least as hard to crack as a known problem—in this case, the lattice problem. The NTRU program, developed in 1996, is one of the earliest such algorithms. Although the keys consist of polynomials with integer coefficients, they can be reduced to the lattice problem.


»We know how to do it; it’s just not clear if we’ll make it in time«

Peter Shor

Another popular approach to lattice-based cryptography is Learning with Errors, or LWE for short, which forms the basis of several NIST finalists. In its simplest form, the problem, introduced in 2005 by New York University computer scientist Oded Regev, is just simple arithmetic. First, the recipient chooses a large, secret number – the private key.

Then one forms multiples of this number over and over, each time adding a small random value, the “error.” The resulting list is the public key. To send a message (a number), you add it to the sum of a certain selection from the list and send the result. To decode it, the recipient divides the result by the secret key; the remainder reveals the message, up to the small accumulated errors.
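A runnable toy version of the scheme just sketched, with made-up parameters (it is closest in spirit to the “approximate GCD” style of construction: the errors are kept even, so that after reducing by the secret key they vanish modulo 2). It is insecure and for illustration only.

```python
import random

random.seed(0)

# Toy cipher in the spirit of the scheme described above -- not secure.
s = 10_000_019                    # private key: a large secret number

# Public key: multiples of s, each shifted by a small *even* error.
pub = [s * random.randrange(2, 1000) + 2 * random.randrange(0, 6)
       for _ in range(20)]

def encrypt(bit, pub):
    """Hide the bit inside a random subset sum of the public list,
    plus one more small even error of the sender's own."""
    subset = random.sample(pub, k=7)
    return bit + 2 * random.randrange(0, 6) + sum(subset)

def decrypt(c, s):
    """Reduce modulo the secret: the multiples of s disappear, and the
    leftover even errors disappear modulo 2, leaving the bit."""
    return (c % s) % 2

for b in (0, 1, 1, 0):
    assert decrypt(encrypt(b, pub), s) == b
```

Decryption works because the accumulated errors stay far smaller than the secret `s`; without `s`, an attacker sees only a large number polluted by noise.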


Four years after its publication, Regev was able to prove something amazing: if the LWE method can be cracked, then the lattice problem, which appears much more complicated, can also be solved. That means LWE is just as secure as the lattice problem – but it doesn’t need multidimensional vectors, which can cause difficulties, says Goldwasser. “It’s great and makes the job a lot easier.” Ironically, Regev got the idea for the LWE method while trying, without success, to find a quantum algorithm that would efficiently solve the lattice problem. “Sometimes failure is success,” he says.

Since then, researchers have worked to address a weakness in lattice-based systems. “The public keys are simply huge,” explains computer scientist Yu Yu from Shanghai Jiao Tong University in China. While the keys in currently used methods are about the length of a tweet, lattice-based keys are sometimes more than one megabyte in size. With the help of structured lattices, the public keys can be made smaller, but this makes the systems more vulnerable. The best algorithms must strike a balance between security and efficiency.

Alongside the competition for post-quantum encryption algorithms, NIST’s 2016 call also solicited candidate digital signatures. Web servers use these systems to verify identities, for example to prevent fraudsters from stealing passwords. The same mathematical methods that make public key exchange possible are usually suitable for this task. Therefore, the currently used signatures are also vulnerable to quantum attacks.

»Star Trek« and »Lord of the Rings« as inspiration

Teams from universities and companies in four dozen countries on six continents then submitted a total of 82 algorithms (for both encryption and digital signatures), of which NIST accepted 65. As befits nerds, many of the programs had names from »Star Wars«, »Star Trek« or »Lord of the Rings«, such as FrodoKEM, CRYSTALS-Dilithium, or NewHope. Candidates are judged on both their security and efficiency, meaning execution speed and public key length also play a role. In addition, all algorithms standardized by NIST must be license-free.

And the hunt was on: cryptographers tried to hack each other’s codes. Several systems were cracked shortly after publication. “I think people had a lot of fun tackling these algorithms,” says Dustin Moody, the mathematician who leads NIST’s post-quantum cryptography project.

»Sometimes failure is success«

Oded Regev


Although NIST is a US government agency, the broader cryptographic community has also contributed to the work. “It’s a global challenge,” says mathematician Philip Lafrance of computer security firm ISARA Corporation in Waterloo, Canada. “The world will generally endorse the NIST standards.” He is part of a working group overseeing the selection on behalf of the European Telecommunications Standards Institute (ETSI). “We assume that the standard we have created will be widely used internationally,” says Moody.

However, because cryptography touches on sensitive national interests, other countries are watching developments very closely – and some are cautious. “The post-quantum algorithms are not yet fully developed; many aspects are still in the research stage,” warns encryption expert Mélissa Rossi of ANSSI, the French national cybersecurity agency. Still, she adds, post-quantum systems should be pushed forward to support current techniques.

The Chinese Association for Cryptologic Research has already held its own competition for post-quantum algorithms and announced the results in 2020. This has led some experts in other countries to mistakenly believe that the Chinese government has already made an official decision on future encryption.


Persistent system update


Of the 15 remaining NIST candidates, 9 are encryption algorithms and 6 are digital signatures. The finalists include implementations of NTRU and LWE, as well as another proven approach based on error-correction techniques. These “code-based” algorithms store data with a level of redundancy that allows an original file to be reconstructed even if it has been slightly corrupted. In the cryptographic application, the algorithm that scrambles the content corresponds to the public key; the secret key is then required to recover the original message.
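The redundancy idea can be illustrated with the simplest possible error-correcting code, a 3-fold repetition code. This shows only the error-correction ingredient, not the cryptography built on top of it; real code-based schemes use far more sophisticated codes.

```python
def encode(bits):
    """Store each bit three times -- redundancy that survives corruption."""
    return [b for bit in bits for b in (bit, bit, bit)]

def corrupt(codeword, positions):
    """Flip the bits at the given positions (simulated channel noise)."""
    return [b ^ 1 if i in positions else b for i, b in enumerate(codeword)]

def decode(codeword):
    """Majority vote inside each triple recovers the original bits,
    as long as at most one bit per triple was flipped."""
    return [1 if sum(codeword[i:i + 3]) >= 2 else 0
            for i in range(0, len(codeword), 3)]

message = [1, 0, 1, 1]
noisy = corrupt(encode(message), positions={2, 7})  # one flip per affected triple
assert decode(noisy) == message
```

Code-based cryptography turns this around: decoding is easy only with a secret description of the code, so deliberately added errors lock a message away from everyone else.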

In the coming months, the institute will select two algorithms each for public key procedures and digital signatures. Then NIST experts develop standards for one of the two candidates and keep the other as a backup in case an unexpected attack renders the first unusable.

However, selection and standardization are not the end of the story. “It’s certainly important to approve a candidate, but then you have to see how an algorithm can be integrated into existing protocols,” says cryptographer Nick Sullivan of Internet service provider Cloudflare in New York City.

Both Cloudflare and Google are already testing post-quantum algorithms, often in collaboration, by incorporating them into some Chrome browser betas and into server software. Such tests are crucial: for communication on the Internet to run smoothly, it is not enough for server and browser to be perfectly compatible.

The data also passes through network devices that may block traffic if they classify unknown encryption protocols as suspicious. Antivirus software can cause similar problems. The difficulties exist “on a broader, Internet-wide level. Like some countries that track what users do,” Sullivan said.

In 2016, Google implemented NewHope, a structured version of LWE named after the Star Wars movie, in a beta version of the Chrome browser, and it ran without problems. However, a larger experiment Google ran in 2021 with a different algorithm did not go as smoothly: some devices blocked the connection because the protocol seemed unusual to them. The problem may have been that the public keys were larger than those devices expected.

Algorithms that trip up the internet in this way must be put on hold until the problems are solved. “Sometimes a single network element behaves incorrectly when you add something new,” says Rescorla. Getting vendors to fix their products – often possible with a simple software update – takes some effort, he says.

Still, Rescorla is optimistic, at least when it comes to browsers. Since only a few companies control the most popular browsers and servers, only they need to change their encryption systems. “Everyone’s pretty confident that once NIST and the IETF come up with new standards, we’ll be able to roll them out fairly quickly,” Rescorla said.

The transition could be more difficult with the countless networked objects such as cars, security cameras and all kinds of smart home devices, which involve many intertwined protocols – especially those that have security functions built into their chips. “It takes five to seven years to develop a vehicle and it will be on the road for a decade,” says Lafrance. “Will it still be safe in ten years?”

In any case, one thing is clear: the first implementations of post-quantum encryption will be hybrid systems that are used in addition to the existing codes. Computer scientist Vadim Lyubashevsky at IBM in Zurich, whose team counts two lattice-based algorithms among the NIST finalists, thinks post-quantum cryptography should be used with current methods for about a decade before using the new ciphers alone.
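What such a hybrid can look like in code, as a minimal sketch: the session key is derived from both a classical and a post-quantum shared secret, so an attacker must break both key exchanges. The byte strings below are placeholders standing in for real key-exchange outputs, and production systems would use a proper key-derivation function rather than a bare hash.

```python
import hashlib

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from both shared secrets: it stays safe as
    long as at least one of the two key exchanges remains unbroken."""
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Placeholders for, e.g., an elliptic-curve shared secret and a
# lattice-based KEM shared secret.
classical = b"\x01" * 32
post_quantum = b"\x02" * 32

key = hybrid_session_key(classical, post_quantum)
assert len(key) == 32
```

Even if the post-quantum half turns out to have an unexpected flaw, the combined key is no weaker than the classical exchange alone, which is the point of running both during the transition decade.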

If all goes according to plan, the internet will be well into the post-quantum era by the time Q-Day arrives. This post-quantum internet could one day be followed by a quantum internet, a network that exchanges information using quantum mechanical principles. The advantage: eavesdropping on such systems can, in principle, always be detected.


The dream of the quantum internet

But back to our current Internet: According to expert estimates, in order to crack current encryption systems, quantum computers would have to have around 1000 times more computing components (qubits) than today. “There’s a very good chance that quantum computers will be able to do all the positive things first, long before they’re able to break our cryptographic mechanisms,” says Lyubashevsky.

However, that is no reason to sit back and relax. Rescorla estimates that the full transition to post-quantum systems will take at least five years. When Q-Day comes, there’s a high probability that hidden devices exist somewhere that are still vulnerable, he says. “Even if we do our best, a real quantum computer will bring about an incredible upheaval – and not just in a positive way.”
