Researcher Parthasarathy Ranganathan, a distinguished technologist at HP Labs in Palo Alto, CA, has been asking what future computing systems will look like. One important clue is that the amount of data being created is exploding, growing significantly faster than Moore's Law. For example, the amount of online data indexed by Google is estimated to have increased from 5 exabytes (1 exabyte = 10^18 bytes) in 2002 to 280 exabytes in 2009, a 56-fold increase in seven years. In contrast, Moore's Law growth in computing over the same period would deliver roughly a 16-fold increase. And a recent estimate indicates that 24 hours of video are uploaded to YouTube every minute. At HD bit rates of 2 to 5 Mbits/s, that works out to roughly 30 to 80 Tbytes of new video per day.
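As a sanity check on those figures, here is a minimal back-of-envelope calculation in Python. The upload rate and bit rates are the ones quoted above; the decimal unit conversions are an assumption.

```python
# Back-of-envelope check: 24 hours of video uploaded to YouTube per minute,
# encoded at HD bit rates of 2 to 5 Mbit/s (figures quoted in the article).

SECONDS_PER_DAY = 24 * 3600
UPLOAD_RATIO = 24 * 60  # 24 hours of video arrive every minute

video_seconds_per_day = SECONDS_PER_DAY * UPLOAD_RATIO

for mbps in (2, 5):
    # Mbit -> Mbyte (/8), then Mbyte -> Tbyte (/1e6, decimal units assumed)
    tbytes = video_seconds_per_day * mbps / 8 / 1e6
    print(f"{mbps} Mbit/s -> {tbytes:.0f} Tbytes/day")

# Output:
#   2 Mbit/s -> 31 Tbytes/day
#   5 Mbit/s -> 78 Tbytes/day
```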
Ranganathan sees current trends suggesting that technologies such as phase-change memory (PCM) and memristors, especially when viewed in the context of advances like 3D die stacking, multicore processors, and improved networking, can induce fundamental architectural changes for data-intensive computing. What his team calls "nanostores" offer one way to leverage this confluence of application demand and technology trends. The name reflects a duality: the evolution to nanoscale memory technology and the shift in emphasis from compute to data.
The key property of nanostores is the colocation of processors with nonvolatile (NV) storage, eliminating many intervening levels of the storage hierarchy. All data resides in a single-level NV datastore that replaces the traditional disk and DRAM layers; disk is relegated to archival backup.
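To make the flattening concrete, here is an illustrative sketch. The layer labels are descriptive assumptions, not terminology from the paper.

```python
# Illustrative only: the storage layers a data access may traverse today,
# versus the single-level store a nanostore implies.

conventional = ["CPU caches", "DRAM", "disk or flash", "archival storage"]
nanostore    = ["CPU caches", "single-level NV datastore", "archival disk backup"]

# In the nanostore design, the DRAM and primary-disk layers collapse into
# one nonvolatile datastore; disk survives only for archival backups.
for name, layers in (("conventional", conventional), ("nanostore", nanostore)):
    print(f"{name:12s}: " + " -> ".join(layers))
```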
For example, a single nanostore chip would consist of multiple 3D-stacked layers of dense silicon NV memory, such as PCM or memristors, beneath a top layer of power-efficient compute cores. Through-silicon vias provide wide, low-energy datapaths between the processors and the datastore. Each nanostore can act as a full-fledged system with its own network interface. Individual nanostores are networked through onboard connectors to form a large-scale distributed system, or cluster, akin to today's large-scale clusters for data-centric computing. The system can support different network topologies, including traditional fat trees and more recent proposals such as HyperX. "From our research, we see such nanostore-based future designs being a real disruptor, with potential for an order of magnitude or higher performance gain at better energy efficiency for future data-centric workloads," said Ranganathan.
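A minimal, hypothetical data model of such a cluster follows, assuming the organization described above (stacked NV layers under a compute layer, one network interface per chip). The class and field names, and the capacities in the example, are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of a nanostore-based cluster; all names and numbers
# are illustrative assumptions, not figures from the HP Labs paper.

from dataclasses import dataclass, field

@dataclass
class NanostoreChip:
    """One chip: compute cores stacked on nonvolatile memory layers."""
    cores: int          # power-efficient cores on the top layer
    nv_layers: int      # 3D-stacked PCM/memristor layers
    layer_gbytes: int   # capacity per NV layer
    # Through-silicon vias give the cores wide, low-energy access to the
    # local datastore, so each chip is a self-contained compute+storage node.

    @property
    def capacity_gbytes(self) -> int:
        return self.nv_layers * self.layer_gbytes

@dataclass
class Cluster:
    """Nanostores networked through onboard connectors."""
    nodes: list[NanostoreChip] = field(default_factory=list)
    topology: str = "fat tree"  # could also be, e.g., "HyperX"

    def total_capacity_tbytes(self) -> float:
        return sum(n.capacity_gbytes for n in self.nodes) / 1e3

# Example: 1,024 chips, each with 8 NV layers of 64 Gbytes.
cluster = Cluster(
    nodes=[NanostoreChip(cores=16, nv_layers=8, layer_gbytes=64)
           for _ in range(1024)],
    topology="HyperX",
)
print(f"{cluster.total_capacity_tbytes():.0f} Tbytes "
      f"across {len(cluster.nodes)} nodes ({cluster.topology})")
```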
For more information, see the white paper "From Microprocessors to Nanostores: Rethinking Data-Centric Systems," available at www.hpl.hp.com/news/2011_IEEEComputer_nanostores.pdf.
Jim Harrison