
Tutorial 1

Title

Hands-on tutorial on the OpenDaylight SDN platform

Speakers

Srini Seetharaman and Anirudh Ramachandran (Deutsche Telekom)

Abstract

Software-Defined Networking (SDN) has introduced a new way of supporting higher-level computing workloads: a decoupled network OS that manages the network through open APIs. The most popular network OS today is OpenDaylight, developed by a consortium of companies (including Cisco and Brocade) and managed by the Linux Foundation. A strong ecosystem of controller applications is emerging around it, catering to a diverse set of value propositions.

The goal of this tutorial is to prepare you for app development on the OpenDaylight controller platform. The tutorial will start with a very brief introduction to OpenFlow/SDN and then describe the OpenDaylight SDN controller platform and its various components. We will then dive into a hands-on session in which we set up the controller platform, walk through an OpenFlow-based learning switch application, and build a stateless load balancer application, all within the Mininet emulation environment.
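
As a quick illustration of the kind of logic the learning switch application encapsulates, the sketch below renders MAC learning in framework-neutral C. It is not OpenDaylight code (controller applications there are written in Java against OpenDaylight's APIs); the table size, flood constant, and function names are assumptions made for the example.

```c
/*
 * Framework-neutral sketch of MAC-learning logic (not OpenDaylight code).
 * Table size, FLOOD_PORT, and names are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 256
#define FLOOD_PORT 0xffff                 /* pseudo-port meaning "flood" */

struct mac_entry {
    uint8_t  mac[6];
    uint16_t port;
    int      valid;
};

static struct mac_entry table[TABLE_SIZE];

static unsigned hash_mac(const uint8_t mac[6])
{
    unsigned h = 0;
    for (int i = 0; i < 6; i++)
        h = h * 31 + mac[i];
    return h % TABLE_SIZE;
}

/* Learn where the source MAC lives, then pick an output port for dst. */
static uint16_t learning_switch(const uint8_t src[6], const uint8_t dst[6],
                                uint16_t in_port)
{
    struct mac_entry *s = &table[hash_mac(src)];
    memcpy(s->mac, src, 6);               /* learn/refresh source location */
    s->port  = in_port;
    s->valid = 1;

    struct mac_entry *d = &table[hash_mac(dst)];
    if (d->valid && memcmp(d->mac, dst, 6) == 0)
        return d->port;                   /* known destination: forward */
    return FLOOD_PORT;                    /* unknown destination: flood */
}

int main(void)
{
    uint8_t h1[6] = { 0, 0, 0, 0, 0, 1 };
    uint8_t h2[6] = { 0, 0, 0, 0, 0, 2 };

    /* First packet floods; the reply is forwarded to the learned port. */
    printf("h1->h2 out port: %u\n", (unsigned)learning_switch(h1, h2, 1));
    printf("h2->h1 out port: %u\n", (unsigned)learning_switch(h2, h1, 2));
    return 0;
}
```

In the actual tutorial this logic is expressed through OpenDaylight's application APIs and driven by OpenFlow packet-in events from the Mininet-emulated switches.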

Bio

Srini Seetharaman

Srini Seetharaman is a contributor to sdnhub.org and the Technical Lead for Software-Defined Networking (SDN) at the Deutsche Telekom Innovation Center. From 2008 to 2011 he was a member of the OpenFlow/SDN team at Stanford, where he led SDN deployments in several nationwide campus enterprise networks, including Stanford's. He holds a Ph.D. in Computer Science from the Georgia Institute of Technology.

Anirudh Ramachandran

Anirudh Ramachandran is a Senior Research Scientist at the Deutsche Telekom Innovation Laboratories in Mountain View, CA. His research interests lie at the intersection of networking and security. He received his Ph.D. in Computer Science from Georgia Tech and his Bachelor's in Computer Science and Engineering from IIT Madras. Previously, he founded a Y Combinator-funded startup that allows users to secure their cloud data. His honors include the ACM SIGCOMM Best Student Paper Award in 2006 and the Georgia Tech College of Computing's Best Dissertation Award in 2011.




Tutorial 2

Title

Accelerating Big Data Processing with Hadoop and Memcached over High-Performance Interconnects

Speakers

Dhabaleswar K. (DK) Panda and Xiaoyi Lu (Ohio State University)

Abstract

Apache Hadoop is gaining prominence in handling Big Data and analytics. Similarly, Memcached is becoming important for large-scale query processing in Web 2.0 environments. These middleware are traditionally written with sockets and do not deliver the best performance on modern high-performance computing systems with high-performance interconnects. In this tutorial, we will provide an in-depth overview of the architecture of Hadoop components (MapReduce, HDFS, HBase, Hive, Spark, RPC, etc.) and Memcached. We will examine the challenges in redesigning the networking and I/O components of these middleware for modern interconnects and protocols (such as InfiniBand, iWARP, RoCE, and RSocket) with RDMA and modern storage architectures. Using the publicly available RDMA for Apache Hadoop software package (http://hadoop-rdma.cse.ohio-state.edu), we will present case studies of the new designs for several Hadoop components and their associated benefits. Through these case studies, we will examine the interplay among high-performance interconnects, storage systems (HDD and SSD), and multi-core platforms in achieving the best solutions for these components. The tutorial will also summarize and present a comprehensive survey of research and development on popular Big Data processing middleware, benchmarks, and applications in the literature.
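
To make the sockets baseline concrete, the sketch below performs a plain sockets-based Memcached lookup using the text protocol; the server address and key are placeholders (11211 is Memcached's default port). This is the kind of socket path that the RDMA-based redesigns discussed in the tutorial aim to replace.

```c
/*
 * Sketch of the traditional sockets path to Memcached (text protocol).
 * Server address and key are placeholders; 11211 is Memcached's default port.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port   = htons(11211) };
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }

    const char *req = "get mykey\r\n";              /* memcached GET request */
    write(fd, req, strlen(req));

    char reply[4096];
    ssize_t n = read(fd, reply, sizeof(reply) - 1); /* "VALUE ... END\r\n"   */
    if (n > 0) {
        reply[n] = '\0';
        fputs(reply, stdout);
    }
    close(fd);
    return 0;
}
```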

Bio

Dhabaleswar K. (DK) Panda

Dhabaleswar K. (DK) Panda is a Professor of Computer Science and Engineering at the Ohio State University. His research interests include parallel computer architecture, high performance networking, InfiniBand, Exascale computing, Big Data, programming models, GPUs and accelerators, high performance file systems and storage, virtualization and cloud computing. He has published over 350 papers in major journals and international conferences related to these research areas. Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, High-Speed Ethernet and RDMA over Converged Enhanced Ethernet (RoCE). The MVAPICH2 (High Performance MPI over InfiniBand, iWARP and RoCE) and MVAPICH2-X (Hybrid MPI and PGAS (OpenSHMEM and UPC)) software packages, developed by his research group (http://mvapich.cse.ohio-state.edu), are currently being used by more than 2,150 organizations worldwide (in 72 countries). This software has enabled several InfiniBand clusters to get into the latest TOP500 ranking during the last decade. More than 211,000 downloads of this software have taken place from the project's website alone. This software package is also available with the software stacks of many network and server vendors, and Linux distributors.

Recently, Dr. Panda and his team have also developed a high-performance RDMA-enabled Apache Hadoop software package (http://hadoop-rdma.cse.ohio-state.edu) to accelerate Hadoop with RDMA for Big Data. This package is currently being used by more than 60 organizations to harness the benefits of accelerated Hadoop. Dr. Panda's research has been supported by funding from the US National Science Foundation, the US Department of Energy, and several industry partners, including Intel, Cisco, Cray, Sun, Mellanox, QLogic, NVIDIA, and NetApp. He is an IEEE Fellow and a member of the ACM. More details about Prof. Panda are available at http://www.cse.ohio-state.edu/~panda.

Xiaoyi Lu

Dr. Xiaoyi Lu received his Ph.D. in Computer Science from the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, in 2012. Since July 2012 he has been a postdoctoral researcher in the Department of Computer Science and Engineering at the Ohio State University, USA. His current research interests include high-performance interconnects and protocols, Big Data, the Hadoop ecosystem, parallel computing models (MPI/PGAS), virtualization, and cloud computing. He has published over 20 papers in international journals and conferences related to these research areas. He has been actively involved in various professional activities (reviewer, PC member, session chair, PC co-chair) for academic journals and conferences. Dr. Lu is currently doing research and development on the high-performance RDMA-enabled Apache Hadoop software package (http://hadoop-rdma.cse.ohio-state.edu). He has also co-founded and leads two other open-source projects, LingCloud and DataMPI, for cloud computing and Big Data computing, respectively. He is a member of the IEEE. More details about Dr. Lu are available at http://www.cse.ohio-state.edu/~luxi.




Tutorial 3

Title

Accelerating Key/Value Search and Packet Switching with Gateware Defined Networking

Speakers

John Lockwood and Imran Khan (Algo-Logic)

Abstract

Key/value search is a fundamental service used widely in modern datacenter networks. In this hands-on tutorial, we teach attendees how to build scalable datacenter applications that perform key/value search. A high-speed Local Area Network (LAN) and equipment in a mobile rack are provided, enabling attendees to send live traffic from their laptops and to take turns bursting traffic from a generator to multiple key/value search endpoints.

Multiple open-source Application Programming Interfaces (APIs) are provided to allow attendees to perform key/value search from applications implemented in C/C++, Java, or Ruby. We leverage an efficient, open-standard, binary message format to transfer keys in and values out over standard Ethernet. To show how the key/value search service scales, we combine traffic from the LAN to load a Top-of-Rack (ToR) switch and then forward packets to multiple search endpoints. We compare and contrast results from four different types of key/value search implementations: a virtual machine within a hypervisor, traditional Linux socket software on a native OS, optimized software using Intel's Data Plane Development Kit (DPDK), and gateware in Field-Programmable Gate Array (FPGA) devices. Results will be tabulated, and each implementation will be characterized in terms of throughput, latency, and power.
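
As a rough picture of what driving such a service from C can look like, the sketch below packs a key into a small binary request and sends it over UDP. The message layout, magic value, opcode, endpoint address, and port are hypothetical; they are not the open-standard format or the Algo-Logic APIs used in the tutorial.

```c
/*
 * Hypothetical key/value GET request over UDP. The wire layout, magic,
 * opcode, address, and port are illustrative assumptions only; they are
 * not the open-standard format used in the tutorial.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

struct kv_request {                 /* hypothetical wire layout */
    uint32_t magic;                 /* protocol identifier      */
    uint16_t op;                    /* 1 = GET                  */
    uint16_t key_len;
    uint8_t  key[64];
} __attribute__((packed));

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(5000) };    /* placeholder */
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);           /* placeholder */

    const char *key = "flow-1234";
    struct kv_request req;
    memset(&req, 0, sizeof(req));
    req.magic   = htonl(0x4b565331);                     /* made-up "KVS1"   */
    req.op      = htons(1);
    req.key_len = htons((uint16_t)strlen(key));
    memcpy(req.key, key, strlen(key));

    sendto(fd, &req, sizeof(req), 0, (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}
```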

Bio

John W. Lockwood

John W. Lockwood has 22 years of experience building FPGA-accelerated network applications. As a tenured professor at Washington University, he led the Reconfigurable Network Group; served as PI on grants from NSF, SAIC, Xilinx, Altera, Agilent, Nortel, Rockwell Collins, and Boeing; and published over 100 papers and patents on FPGA applications for networking. At Stanford University, he managed and grew the NetFPGA Alpha and Beta programs from 10 to 1,021 deployed cards worldwide. In industry, Lockwood co-founded Global Velocity, worked for the National Center for Supercomputing Applications (NCSA), and has consulted for AT&T Bell Laboratories and IBM. Lockwood holds BS, MS, and PhD degrees in Electrical and Computer Engineering and is a member of IEEE, ACM, and Tau Beta Pi. Dr. Lockwood is the founder and CEO of Algo-Logic Systems.

Imran Khan

Imran Khan has 27 years of experience with technology companies including 12 years of experience in the Semiconductor Industry. Most recently he was the director of marketing and business development at Intilop Corporation. Prior to that, Mr. Khan held engineering and senior marketing management positions at Intel Corporation, COMPAQ and General Electric Company. Mr. Khan has an MBA and holds BS and MS degrees in Engineering. Mr. Khan is the Vice President of Marketing and Business Development at Algo-Logic Systems Inc.




Tutorial 4

Title

Fast software packet processing with the netmap framework

Speaker

Luigi Rizzo (Università di Pisa, Italy)

Abstract

This tutorial targets hardware vendors, network engineers, and researchers looking for solutions to: i) OS support for high-speed NICs; ii) efficient software packet processing techniques for SDN products; and iii) high-speed networking in VMs. We will show how to achieve these results using netmap.

Netmap is a framework for high-speed packet I/O from userspace/kernel, similar in spirit (but with many features that make it unique) to proposals such as DPDK, PFRING-DNA, OpenOnLoad, SnabbSwitch and other vendor specific libraries. Netmap uses the same API to access physical NICs, virtual switches (the VALE software switch) or fast interprocess communication channels (netmap pipes). Unlike other solutions, netmap provides a file descriptor for synchronization (select/poll, epoll, kqueue), thus not requiring active threads to monitor the device's status.
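
For readers who have not used the API, a minimal receive loop built on the helpers in net/netmap_user.h looks roughly like the sketch below: open a port, poll its file descriptor, then drain the packets that arrived. The port name "netmap:eth0" is an assumption; a VALE port (e.g. "vale0:1") or a netmap pipe name works the same way, and the netmap kernel module must be loaded.

```c
/*
 * Minimal netmap receive loop using the net/netmap_user.h helpers.
 * "netmap:eth0" is an assumed port name; substitute your NIC, a VALE
 * port, or a netmap pipe. Requires the netmap kernel module.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return 1;
    }

    struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
    struct nm_pkthdr h;
    unsigned char *buf;

    for (;;) {
        poll(&pfd, 1, 1000);              /* sleep until packets arrive */
        while ((buf = nm_nextpkt(d, &h)) != NULL)
            printf("received %u bytes (first byte 0x%02x)\n",
                   (unsigned)h.len, buf[0]);   /* process the frame */
    }

    nm_close(d);                          /* not reached in this sketch */
    return 0;
}
```

The same loop works unchanged whether the descriptor refers to a hardware NIC, a VALE switch port, or a netmap pipe, which is the point of the unified API.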

All netmap features are device- and OS-independent, and implemented as a single FreeBSD/Linux kernel module. Optional device driver support exploits the hardware's full capabilities (14.88 Mpps on a 10 Gbit/s NIC with a single core at less than 1 GHz). Application support is extensive: QEMU/KVM and Click have native netmap support, and pcap clients can access netmap through our netmap-enabled libpcap without even recompiling.

The lean architecture and the ease of adding driver support (4-500 lines of code per NIC) make netmap the most viable approach to exploiting the features of high-speed (10-40 Gbit/s) NICs in general-purpose OSes.

Bio

Luigi Rizzo is a Professor of Computer Engineering at the Università di Pisa, Italy. His research focuses on computer networks and operating systems, including highly cited work on multicast congestion control, FEC-based reliable multicast, network emulation, packet scheduling, fast network I/O, and virtualization. Much of his work has been implemented and deployed in popular operating systems and applications and is widely used by the research community. His contributions include the popular dummynet network emulator (part of FreeBSD and OS X, and also available for Linux and Windows); one of the first publicly available erasure codes for reliable multicast; the QFQ packet scheduler; and the netmap framework for fast packet I/O.

Luigi has been a visiting researcher at several institutions, including ICSI (UC Berkeley), Intel Research Cambridge (UK), Intel Research Berkeley, and Google (Mountain View). He was General Chair for SIGCOMM 2006, TPC Co-Chair for SIGCOMM 2009 and CoNeXT 2014, and a TPC member/reviewer for many major networking conferences and journals.


