


Tutorials will be held at Cisco Building C, 150 West Tasman Dr.

AM: T1 in Loire (1st floor), T2 in Rogue (2nd floor)
PM: T3 in Rogue (2nd floor), T4 in Loire (1st floor)

Tutorial 1


Accelerating Big Data with Hadoop and Memcached Using High Performance Interconnects: Opportunities and Challenges


Dhabaleswar K. (DK) Panda and Xiaoyi Lu, The Ohio State University


Apache Hadoop is gaining prominence in handling Big Data and analytics. Similarly, Memcached in Web 2.0 environments is becoming important for large-scale query processing. These middleware are traditionally written with sockets and do not deliver the best performance on modern clusters with high-performance interconnects. In this tutorial, we will provide an in-depth overview of the architecture of Hadoop components (HDFS, MapReduce, HBase, RPC, etc.) and Memcached. We will examine the challenges in re-designing the networking and I/O components of these middleware with modern interconnects and protocols (such as InfiniBand, iWARP, RoCE, and RSocket) with RDMA. Using the publicly available Hadoop-RDMA (http://hadoop-rdma.cse.ohio-state.edu) software package, we will provide case studies of the new designs for several Hadoop components and their associated benefits. Through these case studies, we will also examine the interplay between high-performance interconnects, storage systems (HDD and SSD), and multi-core platforms to achieve the best solutions for these components.


  1. Introduction to Big Data Applications and Analytics
  2. Overview of Apache Hadoop Architecture and its Components
    • HDFS
    • MapReduce
    • HBase
    • RPC
  3. Overview of Web 2.0 Architecture and Memcached
  4. Overview of Benchmarks and Applications using Hadoop and Memcached
  5. Overview of High Performance Interconnects and Protocols
  6. Challenges in Accelerating Hadoop and Memcached
  7. Acceleration Case Studies and In-Depth Performance Evaluation
    • HDFS
    • MapReduce
    • HBase
    • RPC
    • Memcached
  8. Optimizations and Tuning of Accelerated Designs on Modern Clusters
  9. Opportunities for Additional Enhancements and Accelerations
  10. Conclusions and Q&A
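
The MapReduce component covered in the outline above can be illustrated with a minimal word-count sketch in plain Python (not the Hadoop API, which is Java-based); the three functions mirror the map, shuffle, and reduce phases that Hadoop runs across a cluster:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) pairs, as a Hadoop mapper would per input split.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Shuffle: group values by key (Hadoop performs this between map and reduce).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data and analytics", "big data on modern clusters"]
result = reduce_phase(shuffle_phase(map_phase(docs)))
print(result["big"])  # "big" appears once in each document
```

In real Hadoop, the shuffle phase is exactly the network- and I/O-heavy step whose socket-based implementation the tutorial proposes to re-design over RDMA.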


Please see the speakers' bios here

Tutorial 2


OpenStack & SDN - A Hands-on Tutorial


Ramesh Durairaj, Oracle and Edgar Magana, PLUMgrid


The objective of this tutorial session is to provide you with a technical overview of OpenStack and SDN, together with hands-on experience bringing up a local OpenStack cloud and its Neutron networking service.

Part 1:

  1. Introduction to OpenStack - a technical and architectural overview of OpenStack, the premier open-source cloud IaaS framework.
  2. Technical deep dive into the OpenStack Neutron (formerly called Quantum) network subsystem
Part 2:
  1. Hands-on: bring up a local OpenStack cloud instance on your laptop
  2. Hands-on: bring-up of the OpenStack Neutron service
  3. Developer's overview of OpenStack Neutron
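
The tutorial does not name a specific deployment tool for the laptop bring-up, but DevStack is a common choice for a local single-node OpenStack instance; the `local.conf` fragment below is an illustrative sketch under that assumption, enabling the Neutron services in place of nova-network:

```
[[local|localrc]]
# Placeholder passwords for a single-node DevStack deployment.
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Disable nova-network and enable the Neutron services:
# API server, L2 agent, DHCP agent, L3 agent, and metadata agent.
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta
```

After `./stack.sh` completes, the Neutron API is available locally, which matches the Part 2 exercises above.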


Ramesh (Ram) Durairaj

Ram Durairaj is an Architect and Technologist with over 18 years of experience in Data Center Technologies and Data Center Network Architectures. Ram has extensive experience in Programmable Networks, Grid Computing, and Cloud Computing paradigms, and is an expert in converged data center network architectures. While working at Cisco, Ram founded and served as engineering lead for the OpenStack@Cisco incubation project in the Cisco CTO office for Cloud Computing.

Ram has participated in and represented Cisco at the inaugural OpenStack Summit and the Design Summits since their inception in July 2010. He is also one of the founding members of the OpenStack Quantum (now known as Neutron) project and was a Core Developer.

His past work experience includes Fabric7 Systems, Nortel Networks, and Intergraph. Currently, Ram is a Senior Director at Oracle, leading Oracle's Software Defined Networking project and its applications in cloud computing frameworks.

Edgar Magana

Edgar Magana is currently a Sr. Member of the Technical Staff at PLUMgrid, where he is in charge of the integration efforts between OpenStack Neutron and the PLUMgrid Platform. Edgar worked for over five years in the Chief Technology Office (CTO) of Cisco Systems as a Technical Leader and Researcher. He received his Ph.D. and M.Sc. in Computer Science from the Universitat Politecnica de Catalunya, Spain. Currently, Edgar is a core member of the Neutron development team in OpenStack. He has extensive experience in Cloud and Grid Computing, policy-based management systems, and monitoring and scheduling of network and computational resources in distributed networks. His research interests include Cloud Computing, Software Defined Networks (SDN), IaaS, PaaS, and SaaS.

Tutorial 3


The role of optical interconnects in data-center networking, and WAN optimization


Loukas Paraschis, Cisco


The advent of virtualization of large shared clusters of compute and storage infrastructure has greatly increased the importance of "east-west" traffic flows inside a data center. To optimize around these new traffic patterns, DC architectures have been evolving towards a flatter hierarchy of more densely interconnected switches in "fat tree" designs that can adjust capacity more quickly, with more deterministic performance and greater manageability, using software-defined networking (SDN) abstractions. This intra-DC architecture evolution has been combined with new requirements for intra-DC networking systems with higher capacity and higher port density. Optical technologies have increasingly become the main intra-DC interconnection solution for such high-capacity, longer-distance (tens of meters and beyond) links, and a critical factor in the cost-performance optimization of the intra-DC networking fabric.
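
The "fat tree" scaling mentioned above can be made concrete with the standard k-ary fat-tree arithmetic (a common textbook construction, assumed here rather than taken from the talk itself): k pods of k/2 edge and k/2 aggregation switches, plus (k/2)^2 core switches, all built from identical k-port switches:

```python
def fat_tree_scale(k):
    """Switch and host counts for a k-ary fat tree built from k-port switches."""
    assert k % 2 == 0, "k must be even"
    core = (k // 2) ** 2           # core switches
    agg = edge = k * (k // 2)      # aggregation and edge switches (k pods, k/2 each)
    hosts = (k ** 3) // 4          # k pods * (k/2) edge switches * (k/2) host ports
    return {"core": core, "agg": agg, "edge": edge, "hosts": hosts}

print(fat_tree_scale(4))   # smallest example: 4 core, 8 agg, 8 edge, 16 hosts
print(fat_tree_scale(48))  # 48-port switches support 27648 hosts
```

The 48-port case shows why intra-DC links of tens of meters matter: such a fabric spans many racks, and its dense switch-to-switch interconnect is where optics displace copper.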

At the same time, the expanding availability of faster and more reliable broadband network connectivity has enabled a wider proliferation of data-center-based applications through an Internet-based service delivery model, referred to collectively as "cloud" services. As a result, data centers have become one of the largest contributors to the growth of Internet traffic in the WAN. DC networking has thus been evolving to meet "cloud" service delivery requirements, leveraging an equally important, yet less often reviewed, body of innovation in inter-DC transport architectures. More specifically, new converged IP/MPLS and flexible DWDM transport architectures leverage advancements in routing and photonics technologies, combined with multi-layer control-plane SDN automation and WAN controller optimization, to improve operation, provisioning, restoration, and infrastructure utilization.

In this presentation, we review the key innovations in technology, systems, and network architectures that enable intra-DC connectivity and inter-DC transport to cost-effectively scale to "cloud-era" requirements for more highly meshed networks with higher capacity and more flexible SDN provisioning. Future network evolution, emerging standards, and related research topics will also be discussed.


Loukas (Lucas) Paraschis is a senior solution architect in Cisco's Americas next-generation network group, primarily responsible for the evolution of converged transport architectures, WAN optimization, routing and optical technologies, business models, and market development efforts in Service Provider, large Enterprise, and Public Sector infrastructure. Prior to his current role, Loukas worked as an R&D engineer, product manager, technical leader, and business development manager for Cisco's optical networking and core routing. He has (co)authored more than 50 peer-reviewed publications, invited and tutorial presentations on next-generation transport networks, as well as two book chapters and two patents, and was an IEEE Distinguished Lecturer on this topic. Loukas received his Ph.D. from Stanford University, is a senior member of IEEE, and a Fellow of the OSA.

Tutorial 4


Flow and Congestion Controls for Multitenant Datacenters: Virtualization, Transport and Workload Impact


Mitch Gusat and Keshav Kamble, IBM


  1. A Layer 2 to 5 Flow and Congestion Control (FCC) Framework for DCNs
    Covers Ethernet/CEE and IBA fabrics seen from the FCC angle, with comparisons to TCP in practice and theory, including Incast and workload impact.
    Where is FCC best introduced: at the link layer (CEE/IBA), the transport layer (TCP et al.), or the application layer L5+ (HPC)? Which FCC schemes are 'better': credits, PFC, window, or rate controls?
  2. Physical DCN: L2 Fabrics
    IEEE 802 and IBA standardization results, translated into plain English. Why and how did the IBA and Ethernet standards groups choose their respective FCC schemes: credits, PFC, CCA, and QCN? What are their pros and cons? Flow control: PFC vs. credits; congestion control: QCN vs. CCA.
  3. Practical Issues
    How are these schemes to be practically implemented by designers? How about configuration and tuning by users? How do PFC and QCN interact? How do they compare with TCP and ECN?
  4. Virtual DCN: FCC in SDNs, from Zero to Virtualized CEE
    Overlay networks: currently all virtual switches, vNICs, and hypervisors are lossy. Is this a feature or a bug? What happens to application performance when a lossless vSwitch is introduced?
    Using simulations and testbed platforms, we demonstrate the challenges confronting today's DCN and SDN architects: from simple hotspot congestion baseline scenarios (IBA, 802) to HOL blocking (low and high order), from input-generated (Hadoop-like) and output-generated (priority modulation, PFC, and TES) congestion to saturation trees. We also look inside overlay networks, vSwitches, and vNICs.
  5. Outlook: Next-Generation FCC for Physical and Virtual Datacenter Fabrics and Overlays
    a) FCC for Tbps CEE
    b) FCC for SDN / overlay networks
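
The credit-based flow control contrasted with PFC in items 1 and 2 can be sketched in a few lines of Python: the sender may only transmit while it holds credits, and the receiver returns a credit each time it drains a packet from its buffer, which is what makes the link lossless. This is an illustrative toy model, not IBA's actual wire protocol:

```python
from collections import deque

class CreditLink:
    """Toy credit-based (lossless) link: the sender spends one credit per
    packet; the receiver returns a credit when it drains a packet."""
    def __init__(self, buffer_slots):
        self.credits = buffer_slots   # initial credits = receiver buffer size
        self.rx_buffer = deque()
        self.dropped = 0              # stays 0: the sender stalls, never drops

    def send(self, packet):
        if self.credits == 0:
            return False              # backpressure: stall instead of dropping
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def drain(self):
        if self.rx_buffer:
            self.rx_buffer.popleft()
            self.credits += 1         # credit flows back to the sender

link = CreditLink(buffer_slots=2)
sent = sum(link.send(p) for p in range(4))  # only 2 fit before credits run out
link.drain()                                # receiver frees one slot...
sent += link.send(99)                       # ...so one more packet can go
print(sent, link.dropped)                   # 3 packets delivered, 0 dropped
```

The stall in `send` is also where credit schemes inherit their main drawback discussed in the tutorial: a backed-up receiver propagates backpressure upstream, enabling HOL blocking and saturation trees.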


Mitch Gusat is a full-time researcher at IBM Research Zurich. His current focus is on datacenter and Cloud fabrics, virtual networking, modelling of large distributed systems, and lossless datacenter networks beyond 100 Gbps. In this area he has contributed to the standardization of IEEE Converged Enhanced Ethernet, InfiniBand, and RapidIO, while also advising Master's and Ph.D. students from several European universities. His other research interests include switching, SDN, HPC interconnection networks, shared (virtual) memory, real-time scheduling, high-performance protocols, and I/O acceleration. Previously he was a Research Associate at the University of Toronto, where he contributed to NUMAchine, a 64-way cache-coherent HPC system. In a former lifetime, Mitch was a student and then a researcher at the "Politehnica" University of Timisoara. He holds Master's degrees in CE and EE, respectively, from the above universities. He is a member of ACM and IEEE, and holds a few dozen patents in switching, flow, and congestion control.