Five teams will be defending America’s Big Iron honor at the upcoming Student Cluster Competition (SCC), which begins November 18th at the SC13 conference in Denver. These battles feature teams of university undergraduates designing, tuning, and optimizing clustered systems to see who can serve up the highest performance while using only 26 amps of electricity.
The Big Iron division of the competition this year features eight teams. We’ve already profiled the non-US entrants (here) and now it’s time to take a look at the home teams.
There’s a lot of pressure resting on their non-degreed shoulders. They aren’t defending just America but the whole damned hemisphere, since there isn’t a South American entry in the competition this year. So who are these teams, and what are they bringing to the cluster competition table?
University of the Pacific: The all-female Team Venus returns this year for its second appearance at the SC cluster competition. Venus 2.0 is a mix of competition veterans and novices. The team skews young, with three freshmen joining two juniors and a sophomore.
All of the team members are computer science majors with various arts and sciences minors. They’re a brainy lot for sure, sporting a gaudy 3.74 average GPA. It sounds like this version of Team Venus has been training hard, working out with a brace of LittleFe starter clusters during the off-season.
The university has bulked up its HPC course offerings over the past year, adding more computer science classes with an emphasis on scientific computing. Team Venus 2.0 members also take a special course specifically designed to prepare them for the SC13 competition.
Team Venus is again sponsored by Cray with an assist from Intel, so we can expect them to bring some variety of Xeon-flavored cluster – which just might be spiced with some crunchy accelerators (although we don’t know which variety).
Lawrence Livermore National Lab is lending a hand on the training side, giving the gals a valuable resource indeed. The folks at LLNL know their way around clusters and optimizing applications.
We’ll see whether another year of training and experience pays off for Team Venus 2.0 at SC13 in Denver.
The Chowder Consortium consists of Boston University, MIT, Northeastern University, Harvard, and the University of Massachusetts.
This will be the third time at the competition for the team from the land where they love that dirty water. They’ve always brought a lot of personality to the cluster wars, and have usually finished in the middle of the pack. But 2013 could be the breakthrough year for the Beantown Six.
Over the long off-season, the kids on Team Chowdah have gained a lot of real-world skills that should pay dividends in the competition. Rather than spend their free time slinging coffee at Starbucks or smoking and joking with their hooligan pals, they spent their vacations working and learning lots of stuff.
For example, the only twins ever to compete in the cluster wars return for their third appearance this year in Denver. Over the past year, this human cluster worked at Boston U’s Neuromorphics Lab as CUDA programmers. They’re also fresh off dual NVIDIA summer internships, working in the DevTech group to optimize HPC applications for GPUs.
Three other team members expanded their horizons by working to optimize drones and other unmanned flying machines in one way or another. One of these people worked for the Navy, researching how to automate and optimize flight decision making. The other two built, flew, and programmed flyers with the BU Unmanned Aerial Vehicle team.
Rounding out the team is a veteran of the BU College of Engineering IT department. His days are spent troubleshooting and fixing various computing hardware and software problems under “fix it or else” pressure. He spent the summer at Akamai, where he certainly added to his real-world skills.
The team is bringing a unique set of hardware that really captured my attention. I can’t discuss what they’ve proposed in any detail due to competitive considerations. But I can say that if Team Chowdah can pull together what they outlined in their application, they could go all the way.
Once again, Team Chowdah is being supported by Silicon Mechanics, a sponsor that has always gone above and beyond the call of duty in acquiring and integrating the various exotic technologies on the team’s wish list.
The University of Texas at Austin captured the Golden Fleece of Cluster Excellence (GFCE) at the SC12 competition last year and is returning to defend the fleece. (There isn’t any sort of fleece awarded at the competition.)
This is Team Longhorn’s fourth competition, and they’ve usually fielded very competitive teams. In their very first foray in 2010, they snagged the Highest LINPACK award and challenged for the overall championship.
In the 2011 Seattle contest, they brought the first liquid-cooled system but were swamped by GPU-wielding competitors on the performance front. The team returned to SC12 with a vengeance (and a few GPUs of their own) and topped a highly experienced field to snag their first overall victory.
This year, the Longhorn team fields two veterans and four newbies. The two veterans will be teaching (or hazing?) the rookies everything they need to know to compete successfully in Denver. All of the Texas players have a fairly deep computing background. They’re all either computer science majors or have had significant experience working with HPC within their academic discipline.
One team member bio caught my eye: a guy who just can’t say no to brain-numbing complexity. He is a senior who will finish up with a triple major in Computer Science, Aerospace Engineering, and Mathematics. I have to imagine that there aren’t a lot of ‘cruise control’ classes when you get to the upper division courses in his schedule. Yikes.
On the hardware side, Team Longhorn will be supported by their long-term sponsor Dell. I didn’t see anything in their hardware proposal that really stood out to me, but you never know what Texas might unpack in Denver.
In the past, they’ve received technical support from their HPC homies at TACC (Texas Advanced Computing Center). Interestingly enough, the Dell-TACC connection has now generated two Cluster Competition titles: the Texas win in 2012 and the South African team win at ISC’13. Can they do it again this year?
University of Colorado (Boulder) is the home team this year, defending the honor of Colorado, the US, and, as mentioned above, the entire Western Hemisphere. They’re also the most experienced team in the competition; they’ve been there and done that.
How experienced are they? Well, Colorado (Team Buffalo) has competed in five of the six SC competitions and both of the ISC competitions in Europe. Also, five of their six team members have been to the big dance before, most of them multiple times.
The team has been a successful competitor, grabbing the Highest LINPACK award in 2009 and a Fan Favorite award in 2010. More importantly, this is a team that really personifies the spirit of the competition. Three times now, the Buffaloes have bailed out other teams experiencing hardware problems. Colorado gave these teams their spare servers and helped them find other replacement gear.
Team Buff is a proven competitor – a lunchbox team that puts on their pants one leg at a time and always gives 110% whether it’s in practice or on the field. They have a longstanding partnership with hardware provider Dell and lots of access to HPC experts from nearby NCAR (National Center for Atmospheric Research).
This will be the first home game for the Buffaloes after seven long years of road bouts. Maybe this will prove to be the Year of the Buffalo.
University of Tennessee (Knoxville): The University of Tennessee will be familiar to both sports and supercomputing enthusiasts. The Tennessee Volunteers have been a perennial SEC college football powerhouse, albeit down on their luck the past few years. They are also one of the biggest players in the HPC game.
First, UT owns the 919 TFlop/s Kraken XT5 supercomputer. Ranked at #31 on the most recent Top500 list, Kraken has 9,408 nodes and 112,800 AMD cores connected with SeaStar routers in a 3D torus, and is one of the largest university-owned supers in the world. The system actually lives a few miles away on the Oak Ridge National Lab campus, which is billed as the world’s most powerful computing complex and has a few supercomputers of its own.
The National Institute for Computational Sciences (NICS) makes its home in Tennessee, and the two coaches/organizers for Team Volunteer, Stephen McNally and John Wynkoop, hail from NICS.
Team Volunteer is making a bit of history in their first bid for cluster competition glory; they’re the youngest standard track (Big Iron) team ever to compete in a major Student Cluster Competition. The oldest member of the team is a college junior, followed by two sophomores, plus two high school students (high school juniors, not seniors).
Even though they’re young, this is a formidable team. Their bios are pretty strong, judging from their application to the SC committee. They’ve also had great training and coaching, including help from NICS and team sponsors Cray and Intel.
Hardware-wise, their final configuration was still a little up in the air when they submitted their application in late spring. The team discussed a few paths they were exploring, but were keeping their options open. One of the optimization solutions they discussed would, if utilized, be a first for any cluster competition. Suffice it to say, I’m damned interested in seeing what Team Volunteer brings to the big show in November and how they do in the competition.
Now the field is complete, with the five American teams joined by German university FAU, NUDT from China, and the iVEC team from Australia. Which of these teams has what it takes to out-wit, out-perform, and out-cluster the rest?
Next up, it’s time to check in on the Commodity Track competition, where four student teams will compete to design and drive the fastest cluster $2,500 can buy. They’ll be running the exact same apps as the Big Iron teams, but competing for separate prizes in their division. Stay tuned.