Texas is at SC10 with what looks to me like the most elaborate booth in the SCC – at least in terms of artwork and theme. Their booth is festooned (or at least half-festooned) with “TACC to the Future” posters that play off the “Back to the Future” movies of the 1980s. TACC (the Texas Advanced Computing Center) is the lead technical (and probably spiritual) advisor to the Longhorn team.
Dell is holding down vendor sponsorship duties and has provided the team with shiny new cluster gear. They’re running nine two-socket Intel Westmere-fueled nodes – with each Westmere packing six cores, that’s 12 cores per node and 108 cores in all. Total system memory comes in at 432GB.
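For the spec-curious, those numbers check out. Here’s a quick back-of-the-envelope sketch (the node, socket, and core counts are from the specs above; the per-node memory split is my own division, not something the team quoted):

```python
# Sanity-check the Texas cluster specs: nine two-socket nodes,
# six-core Intel Westmere chips, 432GB of total memory.
nodes = 9
sockets_per_node = 2
cores_per_socket = 6  # six-core Westmere

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)  # 9 * 2 * 6 = 108 cores

total_memory_gb = 432
print(total_memory_gb / nodes)  # works out to 48.0 GB per node
```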
In the video, I give the team a hard time on a wide variety of topics: Texas itself, why UT is late in coming to the SCC, and even the woeful shape of Texas football. The team members I talked to were confident, and even discussed the possibility of HPC clustering taking place in front of 50,000 screaming fans. Hmmm… not so sure that’s in the cards, but it’s a beautiful dream.
The time for dreaming will soon be over here at the SCC. The serious business of benchmarking will soon begin. Does Texas have what it takes to back up their reputation and semi-tough talk? We’ll soon see…
This is the first SCC for the Florida A&M (FAMU) team, and it couldn’t have started much worse. They found out that, due to a delivery snafu, their system wouldn’t arrive in time for them to participate in the competition. However, there’s more to the story – and it illustrates the true spirit of the SCC.
Many teams would have called it a day after learning that the system they’d been counting on wasn’t going to show up. It’s not like you can run over to Staples and buy another one. Sure, the vendors have systems in their displays, but they don’t keep any spares hanging around, and the systems they do have aren’t necessarily in competitive trim.
Due to the generous help of the SCC staff and other competitors, FAMU wasn’t forced to pack it in. The Colorado team contributed some spare nodes, and other teams also pitched in. I don’t have a full accounting of who gave what and who did what (this is one of those times when it would help if I were a real reporter). But the fact that FAMU has enough hardware to participate at all is a real testament to the spirit of the competition and the character of the participants.
So will FAMU win? It doesn’t really matter – they’re going to compete, and that’s a hell of an achievement given the hand that they were dealt.
The Colorado team is no stranger to these competitions. They’ve been here before – four times before. While they’ve seen some success, winning the LINPACK crown (no actual crown), they’ve never won the big trophy (there isn’t any trophy of any size).
However, this might change in 2010. Their experience, along with some hot hardware, might make the difference between competing and winning. Aided by their sponsors, the Buffaloes have put together a Dell AMD Magny-Cours-based cluster that relies on InfiniBand from Mellanox and PCIe SSD storage from Fusion-io.
The folks I talked to were divided on whether SSD storage makes much difference. Some say that the benchmarks aren’t storage intensive to the point where ultra-speedy SSDs would pay dividends. Others (mainly the teams utilizing SSDs) believe that using them as scratch disks speeds their processing and increases overall throughput.
One thing they can all agree on is that solid state disks use less power and generate less heat – definitely a positive given the 26 amp limitation on configurations.
We’ll also see how AMD’s premier chip stacks up against Intel’s best. Do the extra six cores per socket from Magny-Cours deliver the goods? Colorado had a chance to evaluate both and gave AMD the nod – was this the right call? We’ll know more at the end of the day tomorrow when teams turn in their LINPACK results. Stay tuned…
Another first-time competitor is the team from Louisiana State University (LSU). They’re not located right in New Orleans, but at only 80 miles or so away, they have a definite home-team feel.
They’re close enough that it didn’t make a lot of sense to use a shipping company to move the hardware to the show. So the team advisor put the cluster into the back of his pickup truck (in a covered camper shell), placed the blades in the back seat, and drove the whole configuration to the event.
There’s a rumor that he may have arranged the gear in such a way as to seal his luggage into the camper for the first night – but that hasn’t been corroborated by a second source, so I won’t say anything about it here.
The LSU team is quite personable; they’re very good representatives of the local area.
Sunday at the Student Cluster Competition (SCC) at SC10 in New Orleans… we stop in at the Stony Brook booth to meet the team and see what they’ve brought with them. They introduced us to their cluster, dubbed the “Bear-O-Dactyl.” It’s a combination bear and pterodactyl – but I’ll let them explain the reasoning behind the nickname.
Stony Brook has to be considered a powerful competitor. They won all the marbles last year (there aren’t any actual marbles), and several members of their team have been to the big show before, so they have experience on their side.
They also have supercomputer legend Cray in their corner, along with NVIDIA GPUs. In fact, they are one of two (maybe three) teams deploying GPUs in the competition, making their box GPU-riffic (I’m still trying to coin that term).
Will all of this be enough to make Stony Brook the first repeat champion in SCC history? Could be. I think they’re an early favorite, but it’s still anyone’s game at this point. We’ll know more tomorrow when they run the HPCC benchmark.
Eight university teams – six from the US, one from Russia, and one from Taiwan – descend on the SC10 supercomputing show in New Orleans next week to take part in the Student Cluster Competition (SCC). Each team arrives with its own unique set of skills and challenges; read a profile of each team here.
The students bring their self-designed, self-built clusters to the show, where they reassemble them and race to complete a set of benchmarks and workloads in the quickest time.
The competition tests their system design skills, their aptitude for learning new programs and new methods, and their ability to optimize code to produce more (and better) output than their rivals.
GCG is covering the competition from the show floor for The Register… Pick a team to cheer on, and check back to see how they’re faring…
Results from the Student Cluster Competition at SC09 in Portland, OR:
We “discovered” the student cluster phenomenon at SC10 in New Orleans… but it existed even before we took laptop, camcorder, and pompoms in hand to document the excitement. Here’s what we know about SC09:
LINPACK Award: University of Colorado at Boulder, USA – 692 GFlop/s
- Arizona State University
- University of Colorado
- Purdue University
- Stony Brook University
All the details on the gear at the November 2008 SC Student Cluster Competition in Austin, Texas.
Results from the Student Cluster Competition at SC08 in Austin, TX:
As with SC09, we weren’t yet on hand with laptop, camcorder, and pompoms to document the excitement. Here’s what we know about SC08:
Overall Award: “The ClusterMeisters” (Combined team from Indiana University, USA and Technische Universität Dresden, Germany)
LINPACK Award: National Tsing Hua University, Taiwan – 703 GFlop/s
What happens in Reno doesn’t stay in Reno: all the details on the gear at the November 2007 SC Student Cluster Competition.