The Year in Computer Science


Computer scientists this year learned how to transmit perfect secrets, why transformers seem so good at everything, and how to improve on decades-old algorithms (with a little help from AI).


As computer scientists tackle a greater range of problems, their work has grown increasingly interdisciplinary. This year, many of the most significant computer science results also involved other scientists and mathematicians. Perhaps the most practical involved the cryptographic questions underlying the security of the internet, which tend to be complicated mathematical problems. One such problem — the product of two elliptic curves and their relation to an abelian surface — ended up bringing down a promising new cryptography scheme that was thought to be strong enough to withstand an attack from a quantum computer. And a different set of mathematical relationships, in the form of one-way functions, will tell cryptographers if truly secure codes are even possible.

Computer science, and quantum computing in particular, also heavily overlaps with physics. In one of the biggest developments in theoretical computer science this year, researchers posted a proof of the NLTS conjecture, which (among other things) states that a ghostly connection between particles known as quantum entanglement is not as delicate as physicists once imagined. This has implications not just for our understanding of the physical world, but also for the myriad cryptographic possibilities that entanglement makes possible. 

And artificial intelligence has always flirted with biology; indeed, the field takes inspiration from the human brain as perhaps the ultimate computer. While understanding how the brain works and creating brainlike AI have long seemed like pipe dreams to computer scientists and neuroscientists, a new type of neural network known as a transformer seems to process information similarly to brains. As we learn more about how they both work, each tells us something about the other. Perhaps that’s why transformers excel at problems as varied as language processing and image classification. AI has even gotten better at building AI, with new “hypernetworks” helping researchers train neural networks faster and at lower cost. So the field is now helping not only other scientists with their work, but also its own researchers achieve their goals.


Entangled Answers

When it came to quantum entanglement, a property that intimately connects even distant particles, physicists and computer scientists were at an impasse. Everyone agreed that a fully entangled system would be impossible to describe completely. But physicists thought it might be easier to describe systems that were merely close to being fully entangled. Computer scientists disagreed, saying that those would be just as impossible to calculate, a belief formalized into the “no low-energy trivial state” (NLTS) conjecture. In June a team of computer scientists posted a proof of it. Physicists were surprised, since it implied that entanglement is not necessarily as fragile as they thought, and computer scientists were happy to be one step closer to proving the quantum probabilistically checkable proof (PCP) theorem, a seminal open problem that requires NLTS to be true.

This news came on the heels of results from late last year showing that it’s possible to use quantum entanglement to achieve perfect secrecy in encrypted communications. And this October researchers successfully entangled three particles over considerable distances, strengthening the possibilities for quantum encryption. 


Transforming How AI Understands

For the past five years, transformers have been revolutionizing how AI processes information. Developed originally to understand and generate language, the transformer processes every element in its input data simultaneously, giving it a big-picture understanding that lends it improved speed and accuracy compared to other language networks, which take a piecemeal approach. This also makes it unusually versatile, and other AI researchers are putting it to work in their fields. They have discovered that the same principles can also enable them to upgrade tools for image classification and for processing multiple kinds of data at once. However, these benefits come at the cost of more training than non-transformer models need. Researchers studying how transformers work learned in March that part of their power comes from their ability to attach greater meaning to words, rather than simply memorize patterns. Transformers are so adaptable, in fact, that neuroscientists have begun modeling human brain functions with transformer-based networks, suggesting a fundamental similarity between artificial and human intelligence.
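
The mechanism behind that simultaneity is self-attention, which scores every element of the input against every other element in a single matrix operation. Here is a minimal sketch of scaled dot-product self-attention in plain NumPy; the sizes, random weights and function names are illustrative stand-ins, and real transformers add learned multi-head projections, masking, residual connections and many stacked layers.

```python
# Minimal sketch of scaled dot-product self-attention, the core operation
# that lets a transformer look at every input element at once.
# Illustrative only; real transformers use multiple heads, masking,
# residual connections and many stacked layers.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n_tokens, d_model) input embeddings; W*: learned projection matrices."""
    Q = X @ Wq                                 # what each token is looking for
    K = X @ Wk                                 # what each token offers
    V = X @ Wv                                 # the content each token carries
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # every token scored against every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
    return weights @ V                         # each output mixes information from all tokens

rng = np.random.default_rng(0)
n, d = 5, 8                                    # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8): one updated vector per token
```

Because the attention weights couple every token to every other token, a single layer already mixes information across the whole input, which is what gives the transformer its big-picture view.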


Breaking Down Cryptography

The safety of online communications rests on the difficulty of various math problems: the harder the underlying problem, the harder a hacker must work to break the encryption built on it. And because today’s cryptography protocols would be easy work for a quantum computer, researchers have sought new problems hard enough to withstand one. But in July, one of the most promising leads fell after just an hour of computation on a laptop. “It’s a bit of a bummer,” said Christopher Peikert, a cryptographer at the University of Michigan.

The failure highlights the difficulty of finding suitable questions. Researchers have shown that it’s only possible to create a provably secure code, one that could never be broken, if you can prove the existence of “one-way functions,” problems that are easy to compute but hard to reverse. We still don’t know if they exist (a finding that would help tell us what kind of cryptographic universe we live in), but a pair of researchers discovered that the question is equivalent to another problem involving Kolmogorov complexity, which measures how hard it is to describe a string of numbers: One-way functions, and with them real cryptography, are possible only if a certain version of Kolmogorov complexity is hard to compute.
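
To make “easy to compute but hard to reverse” concrete, here is a toy sketch of a conjectured one-way function using only the Python standard library: multiplying two primes takes essentially no time, while recovering them from the product by brute-force factoring already drags at modest sizes and becomes hopeless at the sizes real cryptography uses. The prime sizes and helper functions are arbitrary choices for illustration; no one has proved this (or any) function to be one-way, which is exactly the open question the Kolmogorov complexity result bears on.

```python
# A conjectured one-way function at toy scale: multiplying two primes is
# easy, but recovering them from their product is believed to be hard as
# the numbers grow. Nothing here is proved one-way; that is the open question.
import math, random, time

def is_prime(n):
    """Deterministic Miller-Rabin test, valid for n below roughly 3.3e24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Draw random odd numbers with the top bit set until one is prime."""
    while True:
        n = random.getrandbits(bits) | 1 | (1 << (bits - 1))
        if is_prime(n):
            return n

p, q = random_prime(26), random_prime(26)      # small enough to factor below

t = time.time()
n = p * q                                      # "forward" direction: instant
print(f"multiplied in {time.time() - t:.6f} s")

t = time.time()                                # "reverse" direction: brute force
f = next(d for d in range(3, math.isqrt(n) + 1, 2) if n % d == 0)
print(f"factored in {time.time() - t:.2f} s: {f} x {n // f}")
```

Even at this toy size the reverse direction takes noticeably longer than the forward one, and the gap widens explosively as the primes get bigger.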


Machines Help Train Machines 

In recent years, the pattern recognition skills of artificial neural networks have supercharged the field of AI. But before a network can get to work, researchers must first train it, fine-tuning potentially billions of parameters in a process that can last for months and requires huge amounts of data. Or they could get a machine to do it for them. With a new kind of “hypernetwork” — a network that processes and spits out other networks — they may soon be able to. Named GHN-2, the hypernetwork analyzes any given network and provides a set of parameter values that were shown in a study to be generally at least as effective as those in networks trained the traditional way. Even when it didn’t provide the best possible parameters, GHN-2’s suggestions still offered a starting point that was closer to the ideal, cutting down the time and data required for full training. 
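
A stripped-down sketch may help make the idea concrete: a hypernetwork is simply a network whose outputs are used as the weights of another network. The toy code below is not GHN-2, which is a graph neural network trained across a large set of architectures; the architecture encoding, layer sizes and untrained random weights here are invented solely to show the mechanics.

```python
# Toy illustration of the hypernetwork idea: one network emits the weights
# of another. This is NOT GHN-2; it only shows the basic mechanics, with
# made-up sizes and an untrained hypernetwork.
import numpy as np

rng = np.random.default_rng(0)

# Target architecture we want parameters for: a 4 -> 16 -> 3 classifier.
layer_shapes = [(4, 16), (16, 3)]
n_target_params = sum(a * b for a, b in layer_shapes)    # 64 + 48 = 112 weights

# The hypernetwork: a small MLP that reads a crude description of the target
# architecture (its layer sizes) and outputs one value per target weight.
descriptor = np.array([4, 16, 3], dtype=float) / 16.0    # hypothetical encoding
H1 = rng.normal(scale=0.1, size=(3, 32))                 # the hypernetwork's own weights
H2 = rng.normal(scale=0.1, size=(32, n_target_params))   # (these are what training would tune)

hidden = np.tanh(descriptor @ H1)
flat_params = hidden @ H2                                # predicted target weights

# Slice and reshape the prediction into the target network's layers...
W1 = flat_params[:64].reshape(4, 16)
W2 = flat_params[64:].reshape(16, 3)

# ...and run the target network with the predicted weights, no training loop.
x = rng.normal(size=(5, 4))                              # a batch of 5 inputs
logits = np.maximum(x @ W1, 0) @ W2                      # ReLU hidden layer
print(logits.shape)                                      # (5, 3)
```

In a system like GHN-2, the hypernetwork’s own weights are trained so that the parameters it predicts work well across many different target architectures, which is what makes its suggestions useful starting points.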

This summer, Quanta also examined another new approach to helping machines learn. Known as embodied AI, it allows algorithms to learn from responsive three-dimensional environments, rather than static images or abstract data. Whether they’re agents exploring simulated worlds or robots in the real one, these systems learn fundamentally differently — and, in many cases, better — than ones trained using traditional approaches.


Improved Algorithms

This year, with the rise of more sophisticated neural networks, computers made further strides as a research tool. One such tool seemed particularly well suited to the problem of multiplying two-dimensional tables of numbers called matrices. There’s a standard way to do it, but it becomes cumbersome as matrices grow larger, so researchers are always looking for a faster algorithm that uses fewer steps. In October, researchers at DeepMind announced that their neural network had discovered faster algorithms for multiplying certain matrices. But experts cautioned that the breakthrough represented the arrival of a new tool for solving a problem, not an entirely new era of AI solving these problems on its own. As if on cue, a pair of researchers built on the new algorithms, using traditional tools and methods to improve them. 
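
To see what “fewer steps” means, compare the schoolbook recipe for multiplying two 2-by-2 matrices, which uses eight scalar multiplications, with Strassen’s 1969 recipe, which needs only seven; applied recursively to matrix blocks, that one saved multiplication is what beats the standard method on large matrices. The sketch below shows the classical construction, not the new algorithms DeepMind’s network discovered, though those were found by searching for decompositions of the same kind.

```python
# Schoolbook 2x2 matrix multiplication uses 8 scalar multiplications;
# Strassen's classical recipe uses 7. This is the 1969 construction,
# shown only to make "fewer multiplications" concrete.
import numpy as np

def schoolbook_2x2(A, B):
    return np.array([[A[0,0]*B[0,0] + A[0,1]*B[1,0], A[0,0]*B[0,1] + A[0,1]*B[1,1]],
                     [A[1,0]*B[0,0] + A[1,1]*B[1,0], A[1,0]*B[0,1] + A[1,1]*B[1,1]]])

def strassen_2x2(A, B):
    m1 = (A[0,0] + A[1,1]) * (B[0,0] + B[1,1])   # 7 multiplications in total
    m2 = (A[1,0] + A[1,1]) * B[0,0]
    m3 = A[0,0] * (B[0,1] - B[1,1])
    m4 = A[1,1] * (B[1,0] - B[0,0])
    m5 = (A[0,0] + A[0,1]) * B[1,1]
    m6 = (A[1,0] - A[0,0]) * (B[0,0] + B[0,1])
    m7 = (A[0,1] - A[1,1]) * (B[1,0] + B[1,1])
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(schoolbook_2x2(A, B), strassen_2x2(A, B))
assert np.allclose(strassen_2x2(A, B), A @ B)
```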

Researchers in March also published a faster algorithm to solve the problem of maximum flow, one of the oldest questions in computer science. By combining past approaches in novel ways, the team created an algorithm that can determine the maximum possible flow of material through a given network “absurdly fast,” according to Daniel Spielman of Yale University. “I was actually inclined to believe … algorithms this good for this problem would not exist.”
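
For readers who want the problem itself spelled out, the sketch below solves a small instance with the classical Edmonds-Karp method, which repeatedly pushes flow along shortest augmenting paths. The new algorithm reaches the same answers with very different and far faster machinery; this is only the textbook baseline, and the example network is a standard small test case.

```python
# The maximum-flow problem, illustrated with the classical Edmonds-Karp
# algorithm: repeatedly push flow along shortest augmenting paths until
# none remain. A textbook baseline, not the new near-linear-time method.
from collections import deque

def max_flow(capacity, source, sink):
    """capacity: dict of dicts, capacity[u][v] = capacity of edge u -> v."""
    # Build a residual graph that also contains zero-capacity reverse edges.
    residual = {u: {} for u in capacity}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # Breadth-first search for a shortest path with spare capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                          # no augmenting path left
        # Find the bottleneck along the path, then push that much flow.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# A small pipe network; the maximum flow from 's' to 't' is 23.
network = {'s': {'a': 16, 'b': 13}, 'a': {'b': 10, 'c': 12},
           'b': {'a': 4, 'd': 14}, 'c': {'b': 9, 't': 20},
           'd': {'c': 7, 't': 4}, 't': {}}
print(max_flow(network, 's', 't'))               # 23
```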


New Avenues for Sharing Information

Mark Braverman, a theoretical computer scientist at Princeton University, has spent more than a quarter of his life working on a new theory of interactive communication. His work allows researchers to quantify terms like “information” and “knowledge,” providing not just a greater theoretical understanding of interactions but also new techniques that enable more efficient and accurate communication. For this achievement and others, the International Mathematical Union this July awarded Braverman the IMU Abacus Medal, one of the highest honors in theoretical computer science.
