A theoretical computer scientist and his MIT colleagues are finding ways to reduce the energy used in computation—a change that could lead to laptops and mobile devices that are smaller and lighter, generate less heat, and perform complicated calculations with unprecedented speed. The researchers have proved mathematically that relatively simple hardware modifications could cut in half the energy consumed in running today’s standard software procedures. And they have shown that coordinated changes in software and hardware could increase the energy efficiency of computing by a million times. The researchers have already written new energy-efficient algorithms for everyday tasks such as searching and sorting that—when run on specially adapted computer hardware—should deliver substantial energy savings. And even greater savings will come with new energy-efficient procedures for processing big data, for example, during web searches.
Most developers of computer software and hardware focus on solving problems with maximum speed and minimum storage space. But energy use for computing is an increasing concern, according to Erik D. Demaine, professor of electrical engineering and computer science. Worldwide, 3 billion personal computers use more than 1% of all energy consumed, and 30 million computer servers use an added 1.5% of all electricity at an annual cost of $14 billion to $18 billion. Expanded use of the Internet, smartphones, and computation in general is causing all of those numbers to escalate. Making computation more energy-efficient would save money, reduce energy use, and allow the batteries that power mobile devices to last longer or be made smaller, says Demaine. In addition, computers would generate less heat, so calculations could run faster. Indeed, cutting energy use in half would halve heat output or double the processing speed.
Over the past 70 years, the energy efficiency of computers has increased steadily. In central processing units (CPUs)—the main brain of the computer—computations per kilowatt-hour have doubled every 1.5 years or so. And graphics processing units have improved even more quickly. But that trend will end. “Landauer’s principle”—derived theoretically in 1961 by Rolf Landauer and recently confirmed experimentally—places a fundamental upper limit on how many computations can be performed per kilowatt-hour. “With current approaches to computation, we estimate that we’ll hit a wall in a few decades, and efficiency won’t get any higher,” says Demaine. “So we need to develop a new way to think about computation.” He and his colleagues in the MIT Computer Science and Artificial Intelligence Laboratory have been doing just that.
Finding ways to reduce energy use requires looking at the basic processes that go on inside a computer. The basic unit of information in all computation is a “bit” (short for “binary digit”). Each bit can have only one of two values—a 1 or a 0—and the two are distinguished by voltage: a 1 is represented by a positive voltage—say, 1 volt—and a 0 by the absence of voltage. Running a computation involves turning 1’s into 0’s and vice versa, and some of those transitions waste energy.
The diagram to the left illustrates the problem. The symbol is a typical “logic gate,” an elementary building block of a digital circuit. Assume the question being asked is: Are these two inputs different? In the top example, the inputs are 0 and 1, so the answer is yes, which by convention is represented by a 1. In this case, the same number of 1’s came out as went in. In the lower example, the inputs are two 1’s, so the answer is no, which is represented by a 0. Since 0 is represented by the absence of voltage, the 2 volts that were invested in creating those two 1’s are literally thrown away, and that wasted energy is dissipated as heat. (Conversely, one can focus on supplying voltage to make the 1’s, but chipmakers are generally more concerned about destroying 1’s because waste heat creates problems.)
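The same bookkeeping can be expressed in a few lines of code. The Python sketch below is an illustration only, not part of the researchers’ work: it implements the “are these two inputs different?” gate and counts how many 1’s disappear between input and output, with each lost 1 standing for charge that a conventional chip dissipates as heat.

```python
# Toy model of the gate described above (an illustration, not the researchers' circuit).

def different(a: int, b: int) -> int:
    """Return 1 if the two input bits differ, else 0."""
    return a ^ b

def ones_lost(a: int, b: int) -> int:
    """Number of 1's present at the inputs but missing from the output."""
    return (a + b) - different(a, b)

for a, b in [(0, 1), (1, 1)]:
    print(f"inputs {a},{b} -> output {different(a, b)}; 1's thrown away: {ones_lost(a, b)}")
# inputs 0,1 -> output 1; 1's thrown away: 0
# inputs 1,1 -> output 0; 1's thrown away: 2
```

In the second case, the two 1’s that went in have nowhere to go, which is exactly the wasted energy described above.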
So finding a way to conserve 1’s and 0’s throughout every computation would cut wasted energy, reduce heat loss, speed up processing, and increase energy efficiency, potentially even beyond Landauer’s theoretical limit.
According to Demaine, a step in the right direction is through “conservative computing.” This strategy requires only hardware modifications—no need to change current software procedures—and it seeks to preserve only the 1’s. The number of 0’s is of no concern.
To implement conservative computing, Demaine and graduate student Jayson R. Lynch of electrical engineering and computer science developed an approach based on “dual-rail logic,” an idea for circuit design that has been around for 20 years but has never been put into practice. In dual-rail logic, 1’s and 0’s are not represented by single signals but rather by pairs: a logical 1 is written 1-0 and a logical 0 is written 0-1. So the key is the order in which the numbers appear, not their individual values.
In the top example to the right, the inputs are 1-0 and 0-1, and the output is 1-0. That setup is wasteful: An incoming 1 is lost during the computation. The researchers solve that problem by retaining the extra inputs as “garbage bits” that carry useless information (see the bottom example). The 1-0 order doesn’t matter, but now the number of 1’s is preserved after the computation. Since a 1 can never be thrown out, the 1’s in the garbage bits must be recycled to some other decision point in the processor. “So it’s kind of like a hot potato,” says Demaine. “Eventually that 1—that volt—will be needed somewhere, but it might move around awhile until it finds a place where it is actually useful.”
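A rough Python sketch may help make the idea concrete. The encoding and gate below are a toy illustration, not the researchers’ actual circuit: each logical bit becomes a rail pair carrying exactly one 1, and the gate emits a garbage pair (here, simply one input passed through unchanged) so that the number of 1’s leaving always equals the number entering.

```python
# Dual-rail encoding: a logical 1 is the pair (1, 0); a logical 0 is (0, 1).
# Every encoded bit carries exactly one 1, so conserving 1's means conserving pairs.

def encode(bit: int) -> tuple:
    return (1, 0) if bit else (0, 1)

def decode(pair: tuple) -> int:
    return 1 if pair == (1, 0) else 0

def conservative_xor(a_pair: tuple, b_pair: tuple):
    """Return (result_pair, garbage_pair); total 1's out equals total 1's in."""
    result = encode(decode(a_pair) ^ decode(b_pair))
    garbage = b_pair  # the leftover 1 is carried along as a garbage bit, not discarded
    return result, garbage

a, b = encode(1), encode(0)          # the article's "top example": inputs 1-0 and 0-1
result, garbage = conservative_xor(a, b)
ones_in = sum(a) + sum(b)
ones_out = sum(result) + sum(garbage)
print(result, garbage, ones_in == ones_out)   # (1, 0) (0, 1) True
```

The garbage pair plays the role of the “hot potato”: its 1 is not spent here, but it must eventually be routed somewhere it can be reused.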
“Because we only have to conserve the number of 1’s and not all the information, recycling garbage in this case is relatively easy,” says Demaine. “And what we proved is that it can always be done.” He and Lynch showed mathematically that with relatively simple hardware modifications, it’s possible to make any computation conservative.
This novel approach won’t eliminate all the wasted energy in today’s chips, but their analyses suggest that it should cut energy use and heat loss in half. Demaine hopes to test that estimate by performing simulations or—better still—by building chips with his electrical engineering colleagues at MIT.
One downside of their approach is that the hardware modification would require more wires and transistors. The added area could increase heat production, offsetting some of the gains from their reduction in wasted energy. But Demaine notes another benefit: The change could increase security. One way to “attack” a computer is to take external voltage measurements, figure out how much and where energy is consumed, and then reverse engineer the processes going on inside. “With conservative computing, energy will be more uniformly distributed, so computations will be harder to observe from the outside,” says Demaine. “Companies may want to [implement our approach] for security reasons—and then as a benefit they’d also get the energy improvement.”
Even with conservative computing, energy-efficiency gains will stall in several decades. According to Demaine, the key to breaking Landauer’s limit is to rethink algorithms—the step-by-step procedures at the core of all software programs. Various algorithms may solve a given computational problem, but they may differ substantially in the amount of time, memory, and energy they use to do it. In the past, the “algorithms community” has not focused explicitly on minimizing energy use. As a result, in many standard algorithms, every decision point involves replacing all the inputs with the newly computed outputs. “So the inputs—millions of bits—are simply being thrown away at every step,” says Demaine. “There’s a ton of wastage and lots of room for improvement in today’s software.”
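A toy example illustrates the point. The step below (an illustration of the general idea, not one of the group’s algorithms) orders a pair of values in place: two different inputs produce the same output, so the information about which value came first is irretrievably thrown away at that step.

```python
# A standard, information-destroying step (illustration only).

def order_pair(x, y):
    """Return the pair in sorted order; which value came first is forgotten."""
    return (x, y) if x <= y else (y, x)

print(order_pair(3, 7))   # (3, 7)
print(order_pair(7, 3))   # (3, 7) -- same output, so the original ordering is lost
```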
To tackle that problem, Demaine and his colleagues developed a new field of study called energy-efficient algorithms. By changing both the software and the hardware, they’re trying to ensure that algorithms are optimal (or roughly optimal) with respect to energy as well as time and memory. On the energy front, the goal is to conserve all information—not just the 1’s but the 0’s as well.
Their approach is based on “reversible computing,” an idea first proposed in the 1970s. A reversible algorithm is one that can run either forward or backwards. “It’s a sort of logical notion that you could compute the inputs from the output just as well as you could compute the output from the inputs because you haven’t lost anything,” Demaine explains. “You don’t actually have to run it backwards; it just needs to be possible.”
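A standard textbook example, not one of the group’s algorithms, makes the notion concrete: the mapping (a, b) → (a, a XOR b) is reversible because applying it a second time restores the original inputs, so nothing is ever lost.

```python
# A classic reversible step (textbook example, not the researchers' algorithm).

def reversible_xor(a: int, b: int):
    """Forward step: keep a, and replace b with a XOR b."""
    return a, a ^ b

inputs = (1, 1)
outputs = reversible_xor(*inputs)      # run forward:  (1, 1) -> (1, 0)
recovered = reversible_xor(*outputs)   # run backward: (1, 0) -> (1, 1)
assert recovered == inputs             # the inputs can be recomputed from the outputs
print(outputs, recovered)
```

As Demaine notes, the program never has to be run backwards; what matters is that it could be, because no information has been destroyed along the way.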
He offers a simple physical analogy. Two billiard balls are coming at each other in a frictionless world. The first hits the second, transferring all its energy as it does so. Because no energy has been lost (in this perfect world), the second ball could then return and hit the first, transferring all the energy back again, and the billiard balls would go back to their original positions. That’s a reversible event—even though it may not happen.
Reversible algorithms behave the same way. “If you can play everything backwards, then no energy has escaped during your computation,” says Demaine. “That’s good news. It means we can effectively sidestep Landauer’s principle.” While conservative computing may enable programs to run twice as fast, reversible computing could enable them to run millions of times faster.
Using specially devised theoretical models, Demaine and Lynch have spent the past six months analyzing basic algorithms to see whether they can be made reversible—or more reversible. (There’s a fundamental limit on how reversible some algorithms can be.) Already they’ve found more-efficient replacements for some algorithms used in everyday computational tasks such as sorting, searching, and finding the shortest path between two points in a network. One example is “binary search trees,” which are procedures for organizing data so the data can be retrieved quickly. According to Demaine, binary search trees are used in nearly every computer ever made, and they involve millions of operations and a lot of energy consumption. “But with a couple of tricks, we got energy use down to zero,” he says. “With the new algorithms, we require only the energy needed to store the data, no additional energy to organize it.”
Demaine is pleased with their progress. “It’s like starting over,” he says. “Take all the algorithms you learned in your undergraduate class and throw them out the window, or look at all existing algorithms and say, OK, this is bad, let’s make it better.” And their results so far “just scratch the surface of what’s possible,” he says, noting in particular the huge potential for energy savings in procedures used for processing big data, such as when running network routers or performing web searches.
There is one catch with the new algorithms: They need to run on a reversible computer—a computer in which all chips and circuits perform reversible functions with no transfer of heat to or from their surroundings. In the 1990s, a group at MIT built preliminary hardware proving that such “adiabatic” computing is possible. And recently a company called AMD made a CPU with one reversible component: the “clock,” which tells the computer to take some action 4 billion times each second.
Demaine finds the release of the new reversible clock encouraging. “We think if we can show theoretically that there are really big wins on the software side, then lots of people will begin to build reversible chips,” he says. “Given the opportunities for improvement, our goal is to have computers spend a million times less energy. That’s pretty exciting.”
This research was supported by a seed grant from the MIT Energy Initiative (MITEI). A new MITEI seed grant focuses on developing energy-efficient procedures for processing big data. Publications are forthcoming.
This article appears in the Spring 2013 issue of Energy Futures.