
Keynote: Wednesday, August 25, 2004

Buffers: How we fell in love with them, and why we need a divorce

Nick McKeown

Associate Professor of Electrical Engineering and Computer Science, Department of Electrical Engineering, Stanford University

Delay and jitter are a really annoying fact of life on the Internet. They are the price we pay for packet switching and statistical multiplexing. Users constantly complain about delay and jitter, and will switch operators to improve performance.

It's the switch/router buffers that cause queueing delay and delay variance; when they overflow they cause packet loss, and when they underflow they degrade throughput. Arguably, switch and router buffers are the single biggest contributor to uncertainty in the Internet. Given the significance of their role, we might reasonably expect the dynamics and sizing of router buffers to be well understood, based on a well-grounded theory, and supported by extensive simulation and experimentation. This is not so.
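To make the overflow point concrete, here is a toy sketch (mine, not from the talk; every parameter is an illustrative assumption): a slotted FIFO driven into persistent overload. Under sustained congestion the loss rate is fixed by the overload itself, while average occupancy (a proxy for queueing delay, by Little's law) grows with the buffer, so a bigger buffer here buys delay, not goodput.

```python
# Toy sketch, not from the talk: a slotted FIFO under persistent overload.
# One packet arrives every slot; the output line serves a packet with
# probability service_prob, so the offered load exceeds capacity.
import random

def simulate(buffer_size, service_prob=0.95, slots=100_000, seed=1):
    rng = random.Random(seed)
    queue = dropped = total_occupancy = 0
    for _ in range(slots):
        if queue < buffer_size:
            queue += 1                 # packet enters the buffer
        else:
            dropped += 1               # overflow: packet loss
        if queue > 0 and rng.random() < service_prob:
            queue -= 1                 # packet departs on the output line
        total_occupancy += queue
    # Average occupancy is a proxy for queueing delay (Little's law).
    return dropped, total_occupancy / slots

for b in (10, 100, 1000):
    dropped, avg_q = simulate(b)
    print(f"buffer={b:5d}  dropped={dropped:6d}  avg occupancy={avg_q:7.1f}")
```

With the assumed 5% overload, every buffer size drops packets at roughly the same rate, but average occupancy, and hence delay, tracks the buffer size.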

There is a lot of basic and interesting research still to be done to understand the size and dynamics of packet buffers. While we tend to assume they are cheap and mostly harmless, I'll argue that buffers are usually much too big, that they frequently degrade user performance, and that they make routers overly complex and limit their scalability. And if we continue our love affair with buffers, it will be hard to introduce optical packet processing into the network.
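For a sense of scale, the long-standing engineering rule-of-thumb sizes a router buffer at the bandwidth-delay product, B = RTT x C; the speaker's group has argued elsewhere that roughly RTT x C / sqrt(n) suffices when n long-lived TCP flows share the link. The arithmetic below is a back-of-envelope sketch; the line rate, RTT, and flow count are illustrative assumptions, not figures from the talk.

```python
# Back-of-envelope buffer-sizing arithmetic. Line rate, RTT, and flow
# count below are illustrative assumptions, not figures from the talk.
link_rate_bps = 10e9                 # assumed 10 Gb/s line card
rtt_s = 0.25                         # assumed 250 ms round-trip time

bdp_bytes = link_rate_bps * rtt_s / 8        # rule-of-thumb: B = RTT * C
print(f"rule-of-thumb buffer: {bdp_bytes / 1e6:.1f} MB")    # 312.5 MB

n_flows = 10_000                     # assumed long-lived TCP flows
small_bytes = bdp_bytes / n_flows ** 0.5     # B = RTT * C / sqrt(n)
print(f"RTT*C/sqrt(n) buffer: {small_bytes / 1e6:.1f} MB")  # 3.1 MB
```

Even at these assumed numbers, the rule-of-thumb demands hundreds of megabytes of fast buffer memory per line card, which is exactly the kind of cost and complexity the talk questions.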

We'd usually be better off cutting them drastically or removing them entirely. The question is: could we build an interesting network with little or no buffering?