The Next Wave | Vol. 20 | No. 2 | 2013

Energy-efficient superconducting computing coming up to speed

Marc A. Manheimer

Power and energy use by large-scale computing systems is a significant and growing problem. The growth of large, centralized computing facilities is being driven by several factors, including cloud computing, support of mobile devices, growth in Internet traffic, and computation-intensive applications. Classes of large-scale computing systems include supercomputers, data centers, and special-purpose machines. Energy-efficient computers based on superconducting logic may be an answer to this problem.


Supercomputers are also known as high-performance or high-end systems. Information about the supercomputers on the TOP500 list is readily available [1, 2]. The cumulative power demand of the TOP500 supercomputers was about 0.25 gigawatts (GW) in 2012. The Defense Advanced Research Projects Agency and the Department of Energy have both launched efforts to improve the energy efficiency of supercomputers, with the goal of reaching 1 exaflops within 20 megawatts (MW) by 2020. The flops metric (i.e., floating-point operations per second) is based on Linpack, which uses double-precision floating-point operations; 1 exaflops is equivalent to 10¹⁸ flops.
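
As a quick check on what that goal implies (my own arithmetic, not a figure from the article), the short Python sketch below divides the 20 MW power budget by 10¹⁸ operations per second to get the implied energy budget per operation:

    # Back-of-the-envelope: what does "1 exaflops within 20 MW"
    # allow per floating-point operation?
    target_flops = 1e18    # 1 exaflops = 10^18 flops
    power_budget_w = 20e6  # 20 MW

    joules_per_flop = power_budget_w / target_flops
    print(f"Energy budget: {joules_per_flop * 1e12:.0f} pJ per flop")
    # -> 20 pJ per flop, for the entire system: logic, memory,
    #    interconnect, and cooling combined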

Data centers numbered roughly 500,000 worldwide in 2011 and drew an estimated 31 GW of electric power [3–5]. Information about data centers is harder to find than for supercomputers, as there is no comprehensive list and much of the information is not public. Exceptions include colocation data centers [6], which are available for hire and account for about 5% of data centers by number, and the Open Compute Project led by Facebook [7]. As part of the Open Compute Project, Facebook's first European data center, under construction in Luleå, Sweden, will be three times the size of its Prineville, Oregon, data center, which has been using an average of 28 MW of power [8, 9]. Facebook has been a leader in efforts to reduce power consumption in data centers, and Luleå's location just below the Arctic Circle, with an average temperature of 1.3°C, helps with cooling; even so, average power usage is still expected to exceed 50 MW.

A 2010 study by Bronk et al. projected that US data center energy use would rise from 72 to 176 terawatt hours (TWh) between 2009 and 2020, assuming no constraints on energy availability [10]. The potential benefit to the US of a technology that reduces energy requirements by a factor of 10 is on the order of $15 billion annually by the year 2020, assuming an energy cost of $0.10 per kilowatt hour (kWh). Note that this counts only the benefit of energy savings and does not include the potential economic benefits resulting from increased data center operation or savings due to reduced construction costs.
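
That estimate is easy to reproduce; the sketch below uses only the figures quoted in the text (176 TWh projected 2020 use, a factor-of-10 reduction, $0.10/kWh):

    # Reproducing the article's savings estimate from its own inputs
    projected_twh = 176               # projected 2020 US data center use
    reduced_twh = projected_twh / 10  # usage after a 10x efficiency gain
    price_per_kwh = 0.10              # dollars per kWh

    savings_kwh = (projected_twh - reduced_twh) * 1e9  # 1 TWh = 10^9 kWh
    savings_dollars = savings_kwh * price_per_kwh
    print(f"Annual savings: ${savings_dollars / 1e9:.1f} billion")
    # -> $15.8 billion, matching the article's "order of $15 billion"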

Conventional computing technology based on semiconductor switching devices and normal-metal interconnects may not be able to increase energy efficiency fast enough to keep up with the growing demand for computing. Superconducting computing is an alternative that exploits low-temperature phenomena with potential advantages. Superconducting switches based on the Josephson effect switch quickly (i.e., ~1 picosecond), dissipate very little energy per switch (i.e., less than 10⁻¹⁹ joules), and produce small current pulses that travel along superconducting passive transmission lines at about one third the speed of light with very low loss. Superconducting computing circuits typically operate in the 4–10 kelvin temperature range.
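
To see how far those switch-level numbers are from today's megawatt machines, the illustrative sketch below combines them with a cryogenic cooling overhead. The 10¹⁸ switching events per second and the ~400 watts of wall-plug power per watt removed at 4 K are assumptions of mine for illustration, not values from the article:

    # Illustrative only: raw Josephson-junction switching power plus an
    # assumed cryogenic cooling overhead
    switch_energy_j = 1e-19  # upper bound per switch, from the text
    switch_rate = 1e18       # ASSUMED: 10^18 switching events per second

    logic_power_w = switch_energy_j * switch_rate  # dissipated at 4 K
    carnot_w_per_w = (300 - 4) / 4   # ideal 4 K -> 300 K overhead, ~74
    practical_w_per_w = 400          # ASSUMED practical refrigerator figure

    print(f"Raw logic dissipation: {logic_power_w:.2f} W at 4 K")
    print(f"Wall-plug power, ideal cooling: {logic_power_w * carnot_w_per_w:.0f} W")
    print(f"Wall-plug power, assumed practical cooling: {logic_power_w * practical_w_per_w:.0f} W")
    # Even after cooling overhead, the logic itself needs only tens of
    # watts; memory, interconnect, and I/O dominate real system budgets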

Earlier technologies for superconducting computing were not competitive due to the lack of adequate cryogenic memory, of interconnects between the cryogenic and room-temperature environments capable of high data transmission rates, and of fabrication capability for superconducting electronic circuits.

Superconducting computing

Recent developments in superconducting computing circuits include variants with greatly improved energy efficiency [11]. Prospects for cryogenic memories have also improved with the discovery of memory elements that combine some of the features of Josephson junctions and magnetic random access memory (MRAM). The ability to operate both logic and memory within the cold environment, rather than keeping main memory at room temperature, reduces the demands on the interconnects to room temperature to the point that engineering solutions can be found.

Superconducting computers are being evaluated for potential energy efficiency benefits relative to conventional technology. The total benefit of such an energy-saving technology would scale as the number of systems multiplied by the energy savings per system.

My group at NSA's Laboratory for Physical Sciences conducted a feasibility study of a range of superconducting computer systems from petascale to exascale (10¹⁵–10¹⁸ flops), examining performance, computational efficiency, and architecture. Our results indicate that a superconducting processor might be competitive for supercomputing [11]. Figure 1 shows a conventional computer in comparison with a conceptual superconducting computer with the same computing performance but much better energy efficiency. On the left is Jaguar, the supercomputer that held the performance record on the TOP500 list from 2009 to 2010. The conceptual superconducting supercomputer shown on the right is much smaller and uses much less power (i.e., 25 kW versus over 7 MW).

FIGURE 1. The Jaguar XT5 supercomputer at Oak Ridge National Laboratory (left) and the conceptual superconducting supercomputer (right) both perform at 1.76 petaflops, but the Jaguar XT5 consumes over 7 MW, whereas the superconducting one consumes 25 kW. (Jaguar XT5 image credit: Cray Inc.)
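
Using only the numbers in the figure 1 caption, a short calculation makes the efficiency gap explicit:

    # Energy efficiency (flops per watt) for the two systems in figure 1
    perf_flops = 1.76e15   # both systems: 1.76 petaflops

    jaguar_w = 7e6         # Jaguar XT5: over 7 MW
    supercon_w = 25e3      # conceptual superconducting system: 25 kW

    jaguar_eff = perf_flops / jaguar_w
    supercon_eff = perf_flops / supercon_w
    print(f"Jaguar XT5:      {jaguar_eff / 1e9:.2f} gigaflops/W")
    print(f"Superconducting: {supercon_eff / 1e9:.1f} gigaflops/W")
    print(f"Improvement:     {supercon_eff / jaguar_eff:.0f}x")
    # -> about 0.25 vs. 70 gigaflops per watt, a roughly 280x gain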


Superconducting computing shows promise for large-scale applications. The technologies required to build such computers are under development in the areas of memories, circuit density, computer architecture, fabrication, packaging, testing, and system integration. The Intelligence Advanced Research Projects Activity (IARPA) recently initiated the Cryogenic Computing Complexity (C3) Program with the goal of demonstrating a scalable, energy-efficient superconducting computer [12]. The results of this program should tell us if superconducting computing can live up to its promise.

About the author

Marc Manheimer is a physicist at NSA's Laboratory for Physical Sciences. His research interests include magnetic materials and devices, and cryogenic phenomena, devices, and systems. He recently became interested in superconducting computing as a solution to the power-space-cooling problem facing supercomputing. Manheimer is currently serving as the program manager for the new C3 program at IARPA.


References

[1] TOP500. Available at: http://www.top500.org.

[2] The Green500. Available at: http://www.green500.org.

[3] Koomey JG. "Worldwide electricity used in data centers." Environmental Research Letters. 2008;3(3). doi: 10.1088/1748-9326/3/3/034008.

[4] Koomey JG, Belady C, Patterson M, Santos A, Lange K-D. "Assessing trends over time in performance, costs, and energy use for servers." 2009 Aug 17. Final report to Microsoft Corporation and Intel Corporation.

[5] Koomey JG. "Growth in data center electricity use 2005 to 2010." 2011 Aug 1. Oakland, CA: Analytics Press.

[6] Information on colocation data centers is available online.

[7] Open Compute Project. Available at: http://www.opencompute.org.

[8] Ritter K. "Facebook data center to be built in Sweden." The Huffington Post. 2011 Oct 27.

[9] McDougall D. "Facebook keeps your photos in the freezer: Arctic town now world data hub." The Sun. 2013 Jan 24.

[10] Bronk C, Lingamneni A, Palem K. "Innovation for sustainability in information and communication technologies (ICT)." James A. Baker III Institute for Public Policy, Rice University. 2010 Oct 26. Available at: ICT-102510.pdf.

[11] Holmes DS, Ripple AL, Manheimer MA. "Energy-efficient superconducting computing—power budgets and requirements." IEEE Transactions on Applied Superconductivity. 2013;23(3). doi: 10.1109/TASC.2013.2244634.

[12] IARPA. Cryogenic Computing Complexity (C3) Program.


