ST. LOUIS — A new method of designing and building computer chips could lead to blisteringly quick processing at least 1,000 times faster than the best existing chips are capable of, researchers say.
The new method, which relies on materials called carbon nanotubes, allows scientists to build the chip in three dimensions.
The 3D design enables scientists to interweave memory, which stores data, and the number-crunching processors in the same tiny space, said Max Shulaker, one of the chip's designers and a doctoral candidate in electrical engineering at Stanford University in California.
Reducing the distance between the two elements can dramatically reduce the time computers take to do their work, Shulaker said Sept. 10 here at the "Wait, What?" technology forum hosted by the Defense Advanced Research Projects Agency, the research wing of the U.S. military.
Progress slowing
The inexorable advance in computing power over the past 50 years is largely thanks to the ability to make increasingly smaller silicon transistors, the three-pronged electrical switches that do the logical operations for computers.
According to Moore's law, a rough rule first articulated by semiconductor researcher Gordon E. Moore in 1965, the number of transistors on a given silicon chip roughly doubles every two years. True to that prediction, transistors have gotten ever tinier, with the smallest experimental devices measuring just 5 nanometers and the smallest functional ones having features just 7 nanometers in size. (For comparison, an average strand of human hair is about 100,000 nanometers wide.)
The decrease in size, however, means that the quantum effects of particles at that scale could disrupt their functioning. Therefore, it's likely that Moore's law will be coming to an end within the next 10 years, experts say. Beyond that, shrinking transistors to the bitter end may not do much to make computers faster.
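The doubling rule is easy to see with a quick calculation. The sketch below is an illustration only; the starting count, time span and function name are hypothetical, not taken from the article:

```python
# Back-of-the-envelope illustration of Moore's law: transistor counts
# roughly double every two years. All figures here are illustrative.
def moore_growth(start_count, years, doubling_period=2):
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# A chip with ~2,000 transistors, projected 50 years (25 doublings) later:
print(f"{moore_growth(2_000, 50):,.0f}")  # prints 67,108,864,000
```

Twenty-five doublings multiply the count by about 33 million, which is roughly how chips went from thousands of transistors in the early 1970s to tens of billions today.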
Long commute time
The main roadblock to faster computers is not flagging processor speed, but a memory problem, Shulaker said.
Big-data analysis requires the computer to draw some tiny piece of data from some previously unknown spot in truly staggering troves of data. Then, the computer must shuttle that information via an electrical signal back and forth across the (relatively) vast inches of wire between the computer's memory (typically a hard drive) and the processors, facing the speed bump of electrical resistance along the entire path.
"If you try to run that in your computer, you would spend over 96 percent of the time just being idle, doing absolutely nothing," Shulaker said. "You're wasting an enormous amount of power." While the Central Processing Unit (CPU) waits for a piece of data to make the return trip from the memory, for instance, the computer is still hogging power, even though it's not calculating a thing.
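That idle fraction follows from simple arithmetic: if the memory round trip takes far longer than the computation it feeds, the processor waits out most of every cycle. Here is a minimal sketch, with latencies chosen purely for illustration (the function name and the numbers are hypothetical, not Shulaker's measurements):

```python
# Rough model of a memory-bound workload: each unit of work needs one
# memory fetch, and the processor is idle for the whole fetch.
def idle_fraction(fetch_ns, compute_ns):
    """Fraction of each fetch-plus-compute cycle spent waiting on memory."""
    return fetch_ns / (fetch_ns + compute_ns)

# e.g. a 100-nanosecond memory round trip vs. ~4 nanoseconds of computing:
print(f"{idle_fraction(100, 4):.0%}")  # prints 96%
```

With a memory round trip about 25 times longer than the computation itself, the processor spends roughly 96 percent of its time idle, matching the figure Shulaker quotes.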
Solving the memory-CPU "commute time," however, is tricky. The two components can't be built on the same wafer because fabricating silicon transistors requires heating the wafer to about 1,800 degrees Fahrenheit (1,000 degrees Celsius), while many of the metal elements in hard drives (or solid-state drives) melt at those temperatures, Shulaker said.
Carbon nanotubes
To get around this issue, Shulaker and his advisers at Stanford University, Subhasish Mitra and H.-S. Philip Wong, looked to a completely different material: carbon nanotubes, or minuscule mesh rods made of carbon atoms, which can be processed at low temperatures. Carbon nanotubes (CNTs) have electrical properties similar to those of conventional silicon transistors.
In a head-to-head competition between a silicon transistor and a CNT transistor, "hands down, the CNT would win," Shulaker told Live Science. "It would be a better transistor; it can go faster; it uses less energy."
However, carbon nanotubes grow in a disorderly manner, "resembling a bowl of spaghetti," which is no good for making circuits, Shulaker said. As such, the researchers developed a method to grow nanotubes in narrow grooves, guiding the nanotubes into alignment.
But there was another hurdle. While 99.5 percent of the nanotubes become aligned, a few stragglers still end up out of position. To solve this problem, the researchers figured out that drilling holes at certain spots within the chip ensures that even a chip with wayward tubes will work as expected.
Another problem is that while most CNTs have the properties of a semiconductor (like silicon), a few act just like an ordinary conducting metal, and there is no way to predict which tubes will misbehave. Those few conducting tubes can ruin an entire chip, and having to toss even a fraction of the chips wouldn't make financial sense, Shulaker added. As a remedy, Shulaker and his colleagues essentially "turn off" all the semiconducting CNTs and then send huge jolts of current through the chip. Because the current can flow only through the remaining conducting nanotubes, those tubes heat up and break down, blowing like nanoscale fuses, Shulaker said.
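The logic of that fuse-blowing step can be sketched in a few lines. This toy model is an illustration of the idea as described, not the team's actual process; the function name and tube labels are hypothetical:

```python
# Toy sketch of the "nanoscale fuse" cleanup: semiconducting tubes are
# gated off, so a high-current pulse flows only through the metallic
# (conducting) tubes, which heat up and burn out.
def burn_out_metallic(tubes):
    """Return the tubes that survive the high-current pulse.

    Each tube is labeled 'metallic' or 'semiconducting'. Semiconducting
    tubes are switched off, carry no current, and survive; metallic tubes
    conduct the full current and are destroyed.
    """
    return [t for t in tubes if t == "semiconducting"]

grown = ["semiconducting", "metallic", "semiconducting", "metallic"]
print(burn_out_metallic(grown))  # prints ['semiconducting', 'semiconducting']
```

After the pulse, only the well-behaved semiconducting tubes remain to serve as transistors.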
In 2013, the team built a CNT computer, which they described in the journal Nature. That computer, however, was slow and bulky, with relatively few transistors.
Now, they have created a system for stacking memory and transistor layers, with tiny wires connecting the two. The new 3D design has slashed the transit time between transistor and memory, and the resulting architecture can produce lightning-fast computing speeds up to 1,000 times faster than would otherwise be possible, Shulaker said. Using the new architecture, the team has built a variety of sensor wafers that can detect everything from infrared light to particular chemicals in the environment.
The next step is to scale the system further, to make even bigger, more complicated chips.
Follow Tia Ghose on Twitter and Google+. Follow Live Science @livescience, Facebook & Google+. Original article on Live Science.