ELECTRICAL ENGINEERING:
New Memory Cell Could Boost Computer Speeds
Robert F. Service

In its relentless pursuit of faster machines with more memory, the computer industry has found ways to squeeze ever more transistors and capacitors onto a silicon chip, like carving up a big building into smaller and smaller apartments. But apartments--and capacitors--can become only so cramped. By 2005, companies expect to have reached the size limit for capacitors, the memory storage cells vital to the "working" memory used to store data temporarily as a computer runs programs. Now, however, scientists present a bold new strategy that may break the size barrier: reinventing the capacitor.


Packing more punch. A new design for a working memory chip (right), featuring transistors posing as capacitors, could potentially shrink memory chips and make computers boot up and run faster.
SOURCE: HITACHI EUROPE LTD

In the 13 May issue of Electronics Letters, researchers from Cambridge University and the Japanese electronics giant Hitachi describe a new chip architecture that does away with traditional capacitors, slashing the real estate of each memory cell by more than half. The capacitor's job is taken over by a novel type of transistor, recast as a data storage bin. The new design should prove easy to integrate with number-crunching processor chips and should retain working memory even when a computer is off--advantages lacking in the current chip architecture, called dynamic random access memory (DRAM). Such chips could allow computer users to begin work instantly after turning on a machine, rather than waiting for it to call up information from the magnetic hard disk.

The new approach is "excellent work," says Stephen Chou, an electrical engineer at Princeton University in New Jersey. Hitachi is so enamored with the early results that it has already begun pushing the experimental design into commercial development.

Although upstart architectures have tried to unseat DRAMs before, drawbacks--such as being bulky or slow--have curtailed their takeover prospects. Starting from scratch, the Cambridge-Hitachi team, a collaboration underwritten by the company, sought to figure out how to duplicate the ability of DRAMs to store data as 1's and 0's, but in less space. In standard DRAM chips, capacitors are coupled with metal oxide semiconductor field-effect transistors, or MOSFETs, which act like doorways that open when writing and reading data. The open-sesame moment happens when a voltage is applied to a gate electrode, which increases electrical conductivity between two other electrodes, the "source" and the "drain." In a DRAM, the capacitor is wired to the drain: When data are written, electrons stream from source to drain, onto the capacitor. When data are read, electrons flow the reverse route, back to the source. A state-of-the-art DRAM has 256 million capacitor-MOSFET pairs that are constantly shuffling electrons during calculations.
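The capacitor-MOSFET pairing described above can be sketched as a toy model. Everything here (the class, its names, the destructive-read note) is an illustrative simplification, not code from the researchers or from any real DRAM controller:

```python
# Toy model of a single DRAM cell: an access transistor (the "doorway")
# gating a capacitor that stores one bit as charge. Illustrative only.

class DramCell:
    def __init__(self):
        self.capacitor_charged = False  # stored bit: True = 1, False = 0
        self.gate_open = False          # access-transistor state

    def write(self, bit: bool) -> None:
        # Applying a voltage to the gate opens the doorway...
        self.gate_open = True
        # ...so electrons stream from source to drain, onto the capacitor.
        self.capacitor_charged = bit
        self.gate_open = False

    def read(self) -> bool:
        # Reading also opens the gate; electrons flow the reverse route,
        # back to the source, where they are sensed. In real DRAM this
        # read is destructive and the bit must be rewritten afterward
        # (omitted here for simplicity).
        self.gate_open = True
        bit = self.capacitor_charged
        self.gate_open = False
        return bit


cell = DramCell()
cell.write(True)
print(cell.read())  # → True
```

A state-of-the-art chip would pair 256 million such cells with sense circuitry; the model above captures only the open-gate/transfer-charge logic of one cell.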

Transistors can shuttle single electrons, so their size presents no obstacle to shrinking a chip. To tackle the real problem--space-hogging capacitors--the researchers had to devise a novel way to store charge. What they came up with would make the International House of Pancakes proud: a stack of four silicon pads. The top and bottom pads, doped with phosphorus to conduct like a metal, are the source and drain. The undoped pads in the middle act as a channel for electrons. To further coax the transistor to act like a capacitor, the channel contains insulating layers of silicon nitride between adjacent pancakes in the stack, to prevent current from slowly leaking to the drain, as happens in conventional transistors. Surrounding the stack is a gate electrode; the entire array is positioned atop a MOSFET that detects charge in the bottom pancake, or storage bin.

In their new setup, the researchers write data by applying a voltage to the gate. The current rearranges electrons in the undoped pads, effectively increasing the channel's positive charge. This, in turn, draws electrons from the source through the stack to the drain. "The drain gets charged up," says Cambridge team leader Haroon Ahmed. "That's the memory node." Charge pooling in the drain tickles the MOSFET, but not enough to trigger the gate to open.

To read the data, a second, smaller voltage is applied to the gate. If the drain is empty (the off, or 0, state), the voltage blip has no effect. But if the drain is charged (the on, or 1, state), the voltage gives a big enough nudge to overcome the MOSFET's gate threshold, triggering a flood of electrons to cascade from the MOSFET's source to drain.
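The two-voltage scheme in the preceding paragraphs can be summarized in a toy sketch. The thresholds and voltage values below are made-up illustrative numbers, not figures from the Electronics Letters paper:

```python
# Toy sketch of the write/read scheme: a large gate pulse writes by
# charging the drain (the memory node); a smaller pulse reads, turning
# on the underlying sense MOSFET only when the drain holds charge.
# All numeric values are arbitrary illustrations.

WRITE_VOLTAGE = 3.0    # large gate pulse: drives electrons into the drain
READ_VOLTAGE = 1.0     # smaller gate pulse: probes the stored state
SENSE_THRESHOLD = 1.5  # gate threshold of the sense MOSFET beneath the stack

class StackedMemoryCell:
    def __init__(self):
        self.drain_charge = 0.0  # charge pooled in the bottom pad

    def write(self, bit: int) -> None:
        # The full write pulse draws electrons through the stack into the
        # drain (bit = 1) or leaves it empty (bit = 0).
        self.drain_charge = 1.0 if bit else 0.0

    def read(self) -> int:
        # The read pulse alone stays below the MOSFET's threshold; only
        # read pulse plus pooled drain charge exceeds it, triggering the
        # cascade of electrons from the MOSFET's source to its drain.
        if READ_VOLTAGE + self.drain_charge > SENSE_THRESHOLD:
            return 1  # MOSFET turns on: the cell holds a 1
        return 0      # no effect: the cell holds a 0


cell = StackedMemoryCell()
cell.write(1)
print(cell.read())  # → 1
cell.write(0)
print(cell.read())  # → 0
```

The key design point the sketch captures is that writing and reading use the same gate, distinguished only by pulse size, so no separate capacitor structure is needed.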

The new setup can read and write data in billionths of a second, as fast as DRAM--one of the traditional architecture's greatest strengths. It also could eliminate some of DRAM's shortcomings. For one, DRAM capacitors are wired to metal contacts that siphon off charge, even when the MOSFET doorway is closed. Thus when a computer is on, DRAM capacitors must be recharged continually, and when it is off, all their stored data are lost. The new memory cell, in theory, can hang onto charge for 10 years or more, allowing it to retain memory with the power off, says Hitachi team member David Williams. Also unlike DRAMs, he says, the new technology contains components of a similar size to those on logic chips, the computer's brains, meaning that it should be possible to better integrate memory and logic chips and boost processing speeds.
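The refresh-versus-retention contrast above can be made concrete with a simple exponential-leakage model. The time constants are arbitrary examples chosen only to show the orders of magnitude involved, not measured values:

```python
# Illustrative contrast between a leaky DRAM capacitor (which must be
# refreshed continually) and a cell with ~10-year retention. The time
# constants below are arbitrary examples, not measured device data.
import math

def remaining_charge(t_seconds: float, leak_time_constant: float) -> float:
    # Simple exponential leakage model: q(t) = q0 * exp(-t / tau).
    return math.exp(-t_seconds / leak_time_constant)

# A DRAM capacitor leaking with a ~50 ms time constant loses essentially
# all its charge within a second, hence constant refresh while powered.
dram_tau = 0.05
print(remaining_charge(1.0, dram_tau))        # essentially zero

# A cell with a ~10-year retention time keeps nearly all of its charge
# across a full day with the power off.
new_cell_tau = 10 * 365 * 24 * 3600           # ~10 years, in seconds
print(remaining_charge(24 * 3600, new_cell_tau))  # very close to 1.0
```

Under this model, the practical difference is not gradual: one device forgets in milliseconds without refresh, while the other retains data over any realistic power-off interval.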

For now, Williams says, there appear to be no showstoppers in scaling up for commercial use. If all goes well, he says, the new chips could be on the market for personal computers within a few years.

Volume 284, Number 5419, Issue of 28 May 1999, p. 1444. 
Copyright © 1999 by The American Association for the Advancement of Science.