
Keynote 1

Speaker

Philip Heidelberger, Research Staff Member, IBM

Title

The IBM Blue Gene/Q Interconnection Network and Message Unit

Abstract

This talk gives an overview of the IBM Blue Gene/Q supercomputer and then describes its interconnection network and message unit in more detail. The Blue Gene/Q system is the third generation in the IBM Blue Gene line of massively parallel, energy-efficient supercomputers. The Blue Gene/Q architecture scales to tens of petaflops. The network and the highly parallel message unit, which provides the functionality of a network interface card, are integrated onto the same chip as the processors and cache memory and occupy only 8% of the chip's area, including I/O cells. The chip has 11 ports; each port can transmit data at 2 GB/s and simultaneously receive at 2 GB/s, for a total bandwidth of 44 GB/s. The network is a five-dimensional torus of compute nodes, with a subset of compute nodes using the 11th port to connect to I/O nodes. The five-dimensional torus provides both excellent nearest-neighbor and bisection bandwidth; a Blue Gene/Q machine has approximately 46 times the bisection bandwidth of a first-generation Blue Gene/L machine with the same number of nodes. The machine can be partitioned into non-interfering sub-machines. A single network supports both point-to-point and collective traffic, such as MPI all-reduce, at near link bandwidth. The collectives can span an entire partition or any rectangular subset of a partition. The network provides bit-reproducible, single-pass floating-point collectives. The message unit has multiple injection and multiple reception engines that parallelize the injection and reception of messages, permitting full utilization of all the links on the torus. Measured hardware network performance results will be presented.
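As a rough illustration of the arithmetic in the abstract, the sketch below (Python, not IBM code) tallies the per-chip link bandwidth from the stated port counts and rates, and counts the links crossing the bisection of a torus of a given shape. The torus shapes in the example are hypothetical, chosen only to show why a higher-dimensional torus with the same node count has more bisection links; they are not actual Blue Gene partition geometries.

    from math import prod

    PORTS = 11                # 10 torus ports plus 1 I/O port (per the abstract)
    GBPS_PER_DIRECTION = 2.0  # each port sends 2 GB/s and simultaneously receives 2 GB/s

    # Aggregate per-chip bandwidth: 11 ports x (2 GB/s out + 2 GB/s in) = 44 GB/s.
    total_bw = PORTS * 2 * GBPS_PER_DIRECTION
    print(f"Per-chip network bandwidth: {total_bw:.0f} GB/s")

    def bisection_links(shape):
        """Number of links crossing the bisection of a torus with the given dimensions.

        Cut the longest dimension in half: each of the N/d_max rings along that
        dimension contributes two crossing links (one at the cut, one through the
        wraparound). Assumes the longest dimension has even length.
        """
        d_max = max(shape)
        return 2 * prod(shape) // d_max

    # Hypothetical shapes with the same node count (1024 nodes):
    print("5D torus (4x4x4x4x4) bisection links:", bisection_links((4, 4, 4, 4, 4)))  # 512
    print("3D torus (16x8x8) bisection links:   ", bisection_links((16, 8, 8)))       # 128

With the same number of nodes, the shorter dimensions of the 5D torus put four times as many links across the bisection as the 3D example; combined with faster links, this is the kind of effect behind the roughly 46x bisection-bandwidth figure quoted for Blue Gene/Q over Blue Gene/L.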

Bio

Philip Heidelberger received his B.A. in Mathematics from Oberlin College in 1974 and his Ph.D. in Operations Research from Stanford University in 1978. He has been a Research Staff Member at the IBM T.J. Watson Research Center in Yorktown Heights, New York since 1978. Prior to 2000, his research was primarily focused on developing highly efficient algorithms for rare-event and parallel discrete-event simulations. Since 2000, he has been a member of IBM Research's Blue Gene supercomputer hardware team, where he was initially responsible for developing a parallel, cycle-accurate simulation model of the first-generation Blue Gene/L interconnection network that was used as the basis for network architectural decisions. He has been involved in many aspects of Blue Gene, including logic verification, hardware bring-up, network software interfaces, and efficient communication algorithms. He was a lead architect of the second-generation Blue Gene/P Direct Memory Access device that interfaces to the network and currently leads the group focused on future Blue Gene networks and their memory interfaces.

Dr. Heidelberger has co-authored over 115 papers, seven of which have won outstanding paper awards including the 2006 ACM/IEEE Gordon Bell Prize. He was an Associate Editor of Operations Research (1983-1990), Editor-in-Chief of the ACM Transactions on Modeling and Computer Simulation (1996-1997), Program Chair of the 1988 Winter Simulation Conference, Program Co-Chair of the 1992 ACM Sigmetrics Conference, General Chair of the Sigmetrics/Performance 2001 Conference, and Vice-President of ACM Sigmetrics (2004-2007). He is a co-inventor on 30 U.S. Patents and has received seven IBM Outstanding Technical Achievement Awards. He is a member of the IBM Academy of Technology, and a Fellow of the IEEE and ACM.


Keynote 2

Speaker

Greg Papadopoulos, Venture Partner, New Enterprise Associates.
Formerly CTO, Sun Microsystems.

Title

The Computer that is the Network: A Future History of Big Systems

Abstract

A computer of any real size today is built from computing components (the things we used to call servers), storage components (the things we used to call filers), and a data-center-scale backplane (the thing we used to call network switches). Mostly, the construction of these systems is left as an Exercise for the User, but that's changing rapidly. Patterns around compute-storage-network virtualization are emerging, and are apt to coalesce, finally, into some coherent view of an interconnect-centered system, with fundamental concepts of balance and a "real" O/S.
We'll take an historical view of the evolution of computers whose backplane is in fact a network -- from both an interconnect and software systems view. Then we'll speculate wildly on the future of network-scale systems, and hope to identify What's Important in the design of big systems and their interconnection.

Bio

With more than twenty years' experience in the technology industry, Greg Papadopoulos has held several executive positions, most recently serving as Chief Technology Officer at Sun Microsystems, where he directed the company's $2B R&D portfolio. Along with having been a practicing engineer with HP, Honeywell and Thinking Machines, Greg has also helped found a number of his own companies, from video conferencing (PictureTel) to computational fluid dynamics (Exa Corporation). Greg was also an Associate Professor of Electrical Engineering and Computer Science at MIT, where he conducted research in scalable systems, multithreaded/data flow processor architecture, functional and declarative languages, and fault-tolerant computing. He holds a bachelor's degree in systems science from the University of California at San Diego, as well as master's and doctoral degrees in electrical engineering and computer science from MIT.


Keynote 3

Speaker

Shekhar Borkar, Intel Fellow and director of Extreme-scale Technologies at Intel Labs

Title

Will interconnect help or limit the future of computing?

Abstract

More than ten years ago, it was envisioned that interconnects would be the limiter for continued increases in compute performance. Now we know that it is not the interconnects but power and energy that have been the limiter. As technology scaling continues to provide an abundance of transistors, and new architectures continue to deliver performance within a given power envelope, we need to revisit the role of interconnects. This talk will touch on the technology outlook, future architectures and design directions for continued performance, and the role of interconnects: whether they will help or hinder!

Bio

Shekhar Y. Borkar is an Intel Fellow and director of Extreme-scale Technologies at Intel Labs. He is responsible for directing extreme-scale research in technologies for Intel's future microprocessors.
Borkar joined Intel in 1981. He worked on the design of the 8051 family of microcontrollers, the iWarp multicomputer, and high-speed signaling technology for Intel supercomputers. Borkar is an adjunct member of the faculty of the Oregon Graduate Institute. He has published over 100 articles and holds 50 patents.
Borkar was born in Mumbai, India. He received a master's degree in Electrical Engineering from the University of Notre Dame in 1981, and master's and bachelor's degrees in Physics from the University of Bombay in 1979.