Two panel discussions are listed below.

What are the future trends in high-performance interconnects for parallel computers?
Panelists: José Moreira, IBM T.J. Watson Research Center; Jarek Nieplocha, Pacific Northwest National Laboratory; Mark Seager, Lawrence Livermore National Laboratory; Craig Stunkel, IBM T.J. Watson Research Center; Greg Thorson, SGI; Paul Terry, Cray; Srinidhi Varadarajan, Virginia Tech
  Ever-increasing demand for computing capability is driving the construction of ever-larger computer clusters, comprising compute nodes integrated by a high-performance interconnect. This network is in most cases the heart of the parallel computer: it defines the machine's functionality, influences the design of the system software, and determines its actual performance.

The panelists will discuss the major design trends in high-performance interconnects, addressing questions such as:

- Most interconnection networks provide some form of "intelligence" in the network interface. Do you expect this to become a central feature in the future? Will it be possible to implement "network operating systems" in the network interface?
- Do you expect that optical networks will become widespread?
- What are the trends in latency and bandwidth? Networks such as the Quadrics Elan4 and the Cray XD1 already deliver 1.5-microsecond latency at the MPI level. Is bandwidth technologically free? Will the I/O interface be the bottleneck?
- Will native support for collective communication be a central feature of a high-performance network?
- BlueGene/L has shown that a thermally aware supercomputer can be packaged in a small space, with a chip that integrates processors and the network interface. Will future network interfaces be integrated with the processors on the same chip?
- How can a high-performance network help achieve fault tolerance in a large-scale machine?

Network Processors: Prospects for Practical Deployment
James Sterbenz and Bryan Lyles
Panelists: Asgeir Eiriksson, Chelsio Communications; Steve Klinger, AMCC; Christoph Schuba, Sun Microsystems; Jonathan Turner, Washington University; Raj Yavatkar, Intel
  Network processors (NPs) are embedded controllers specialised for packet processing; they fill the performance gap between fixed-function hardware and software on general-purpose processors. They are ideally suited for switch port input processing (e.g. address lookup and packet classification) and output processing (e.g. per-flow scheduling). Because of their flexibility, they can reduce design time and cost, since it is far easier to reprogram an NP than to re-spin an ASIC and replace chips. Additionally, if switch vendors were to provide open programmable interfaces, NPs could be the vehicle that enables adaptive, extensible networks, in which network service providers could easily deploy new protocols and services. Furthermore, the research community might finally gain the ability to perform network-layer protocol research using real (non-toy) switches.

NPs have received much attention over the last several years, with significant research activity; major product lines from vendors such as Intel, IBM, and Motorola; the IETF ForCES working group; and the NPF industry forum, which specifies standard hardware and software interfaces. Recently, however, there has been reason for concern. NP announcements and rollouts have slowed, and one of the three major public vendors has left the market. It appears that a major switch vendor will not use commodity chips, in an attempt to increase clone-resistance, and is unlikely to open its programming interfaces. These trends could kill what appeared to be a promising new technology for network research and programmability. This panel will discuss these issues and trends.