SCinet 2016 Volunteer Reports

In November 2016, two members of the KINBER community were invited to participate as volunteers at SC16 to help build SCinet, the most powerful and advanced network in the world. Read their reports of their incredible experiences on the SCinet volunteer team below.

Report from Jonathan Miller, Network Analyst, Franklin & Marshall College

Every year, about 200 networking professionals, students, and vendor volunteers from around the world get together to build the world’s fastest network, run it for a week, and tear it down. This year I had the opportunity to volunteer at SCinet 2016 in Salt Lake City, Utah, as a representative from Franklin & Marshall College. I worked with the Measurement and Analysis group of the DevOps team to produce graphs and charts for displays located at various kiosks around the show floor. We provided up/down monitoring via Nagios, sFlow data via InMon Traffic Sentinel, and other key metrics, like the utilization of the NOC Coffee Pot. My main focus was the backup SNMP-based monitoring software LibreNMS, as well as SmokePing for network/application latency monitoring. The Measurement and Analysis group consisted of 8 people representing Georgia Tech, InMon, Penn State, and Indiana University, and even included a graduate student from Germany.
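
To give a flavor of the kind of SNMP polling that tools like LibreNMS perform, here is a minimal sketch in Python using the pysnmp library. The switch address, community string, and interface index are placeholders rather than actual SCinet values, and a real collector polls far more than a single counter.

    # Minimal SNMP poll sketch (pysnmp, SNMPv2c). The host, community string,
    # and interface index are placeholders, not SCinet values.
    import time
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    def read_in_octets(host, community="public", if_index=1):
        """Read IF-MIB::ifHCInOctets (64-bit inbound byte counter) for one port."""
        error_indication, error_status, _, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData(community, mpModel=1),              # SNMP v2c
                UdpTransportTarget((host, 161), timeout=2, retries=1),
                ContextData(),
                ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index)),
            )
        )
        if error_indication or error_status:
            return None
        return int(var_binds[0][1])

    if __name__ == "__main__":
        # Sample the counter twice, 10 seconds apart, and estimate inbound bits/sec.
        first = read_in_octets("192.0.2.10")
        time.sleep(10)
        second = read_in_octets("192.0.2.10")
        if first is not None and second is not None:
            print(f"~{(second - first) * 8 / 10:,.0f} inbound bits/sec")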

In an attempt to bring home some of what I learned while working at the show, I’ve set up a small instance of SmokePing locally.  So far, it monitors our RADIUS servers for network and application latency.  As I have time here and there, I hope to expand it to include things like DNS, NTP, and possibly HTTP application latency for some of the services around campus.
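
For anyone who wants to experiment with the same idea before standing up a full SmokePing instance, the sketch below times DNS resolution and HTTP fetches using nothing but the Python standard library. The hostnames are placeholders rather than any of our actual servers, and a real SmokePing deployment sends many probes per round and graphs the full spread of results rather than a single median.

    # Rough latency-probe sketch using only the standard library.
    # Hostnames/URLs are placeholders; ICMP and RADIUS probes are omitted
    # because they need raw sockets or extra libraries.
    import socket
    import statistics
    import time
    import urllib.request

    def dns_latency(hostname, samples=5):
        """Median seconds to resolve a hostname."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            socket.getaddrinfo(hostname, 80)
            times.append(time.perf_counter() - start)
        return statistics.median(times)

    def http_latency(url, samples=5):
        """Median seconds to fetch a URL (connect + full response)."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=5) as response:
                response.read()
            times.append(time.perf_counter() - start)
        return statistics.median(times)

    if __name__ == "__main__":
        print(f"DNS  : {dns_latency('example.com') * 1000:.1f} ms")
        print(f"HTTP : {http_latency('https://example.com/') * 1000:.1f} ms")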

It was inspiring to work with such a broad range of individuals who are so excited about technology and networking. The chance to see and work with a broader-than-usual range of technologies and technologists will help illuminate possible options when evaluating new solutions. I’d like to thank Carrie Rampp and Alan Sutter of F&M, who saw the value in sending me to this event, and Ken Miller from Penn State for letting me work on his team. I’d also like to thank the good folks at the NSF, as this professional development opportunity was funded through the CC*DNI grant awarded to F&M to support the implementation of our Science DMZ. The exposure to technologies I’ve not previously used, and the interpersonal connections developed, are certain to be invaluable as we progress with our Science DMZ implementation. This was a highly memorable event, and I hope to have the good fortune to see everyone again at SCinet 2017 in Denver.

Report from Zach Bare, Network Engineer, KINBER

Supercomputing 2016 was held at the Salt Palace Convention Center in Salt Lake City, Utah. The conference ran from November 13th-18th, with the exhibition floor open from the 14th-17th. I did not get an official number for this year’s attendance, but I did hear that the prior year saw around 12,000 attendees, and conference planners expected to meet or exceed that number this year.

This was my first year both attending the conference and volunteering for SCinet, the conference’s network services unit. SCinet has two functions at the conference: the first is to provide public-access commodity internet throughout the entire conference and exhibition center. The second, and more well-known, function is to provide extremely high performance networking to exhibitors to demonstrate high performance computing throughout the convention center and across the globe.

[Image: SCinet wide area network connections]

This year’s SCinet network included international wide area network (WAN) connections from Salt Lake City to Tokyo, Daejeon, Taiwan, Singapore, Toronto, Ottawa, Paris, London, Amsterdam, Geneva, Frankfurt, Rio de Janeiro, and São Paulo.

Planning of the network for the convention begins before the previous convention is complete. It is often said that this network takes a year to plan, a week to build, a week to run, and a day to tear down, and I can say that is an accurate description of what really happens. Work began on Monday to get everything built and in place, and the network had to be fully operational by noon on Saturday in order to leave enough time to fix last-minute issues and allow exhibitors to set up their demonstrations. Without the vast amount of volunteer help from industry professionals and university students, this would not have been possible.

[Image: Lego model of the SC16 network deployment]

An exhibitor at the show was giving away Lego kits that included a Lego person and the pieces to build a network rack. A group of engineers used these kits to build a model of the SC16 network deployment. The large row of racks at the top of the image depicts the core of the network; everything for the conference ran through this area. The small racks were used for network distribution to the booths. Fiber was run along the ceiling rafters from the core to the distribution racks. From there, if a booth paid for a fiber connection, a fiber cable was run along the floor from the distribution rack to the booth. Despite the protections put in place, it was not uncommon to have to re-splice fibers damaged by heavy forklifts driving over them.

[Chart: traffic on the monitored SCinet WAN links]

SC16 had roughly 3 terabits per second of outside (Internet / Internet2) connectivity delivered to the convention center. Not all circuits could be monitored; however, the chart above shows traffic data for the WAN links that were. At the peak, approximately 1.2 terabits per second of traffic was observed on these links.

[Chart: traffic within the high performance network on the exhibition floor]

The chart above shows traffic levels within the high performance network on the exhibition floor. At its peak, roughly 3.1 terabits per second was moving through the network.

The group I was assigned to at SC16 was the Edge / Wireless group. This was the first year the two groups had been combined, and those who had participated in previous conferences said combining them was a great improvement. The Edge / Wireless group had two functions: 1) deploy wireless access points throughout the entire convention center for public commodity wifi, and 2) deploy wired access switches around the convention center for use by presenters, attendees, and digital signage. This year the wireless network reached a new record milestone, with a peak of approximately 5,745 simultaneous users.

The first couple of days of setup were spent deploying Netgear switches onto columns on the main exhibition floor. Another group on my team followed behind, deploying tripods and Ethernet cable for the access points that would be installed later that week. The access points connected to the Netgear switches, which in turn connected to distribution switches in closets throughout the convention center. This model was repeated throughout the rest of the center, with a total of 208 Cisco access points deployed.

[Images: wireless heatmaps and client tracking dashboards]

Systems engineers were able to build on the access point placement information that I programmed into the controller, along with real-time client data from the controller, to build heatmaps and client tracking dashboards. While more of a proof of concept than a production tool for SC16, this showed how the information could be put to use to help engineers plan wifi deployments for future conferences.
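
As a rough illustration of the idea, the sketch below combines those same two inputs, per-AP placement and per-AP client counts, into a simple density grid. The access point names, coordinates, and counts are invented placeholders; the real dashboards pulled live association data from the wireless controller.

    # Toy client-density map. The AP names, floor coordinates, and client
    # counts are invented placeholders, not SC16 data; the real dashboards
    # pulled live per-AP association counts from the wireless controller.

    # (x, y) placement in meters, as programmed into the controller's floor map
    AP_PLACEMENT = {
        "ap-floor-01": (10, 5),
        "ap-floor-02": (40, 5),
        "ap-floor-03": (10, 25),
        "ap-floor-04": (40, 25),
    }

    # Current associated-client count per AP (would come from the controller)
    CLIENT_COUNTS = {
        "ap-floor-01": 62,
        "ap-floor-02": 14,
        "ap-floor-03": 95,
        "ap-floor-04": 7,
    }

    def density_grid(cell=20):
        """Sum client counts into cell-by-cell meter buckets keyed by (col, row)."""
        grid = {}
        for ap, (x, y) in AP_PLACEMENT.items():
            key = (x // cell, y // cell)
            grid[key] = grid.get(key, 0) + CLIENT_COUNTS.get(ap, 0)
        return grid

    if __name__ == "__main__":
        for (col, row), clients in sorted(density_grid().items()):
            print(f"grid cell ({col}, {row}): {clients} clients")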
