As can be seen from their LINPACK results on Monday (only 0.233 teraflops), Purdue either completely melted down or had something up their collective sleeves. It turns out they had a plan – a cunning plan, one that might give them a leg up on the other competitors on the all-important scientific applications. What was the plan? Why did they do it? Watch the video to find out.
We grabbed Team Colorado for a few final thoughts before the end of the Student Cluster Competition. Spirits were good, despite a few problems with their hardware and memory usage.
The Costa Rica team gave it their all at their first Student Cluster Competition. We caught up with them just a few minutes before they turned in their final results files to the judges…
We spent a few minutes talking to Team China before they submitted their final results for the 2011 Student Cluster Competition. They’re happy and had a good time, but it’s hard to figure out how they gauge their chances. We’ll find out soon…
We caught up with Team Boston (aka Team Chowder) a few hours before they turned in their results for SC11. They shared their thoughts about the competition and their results thus far, along with whatever else went through their sleep-deprived minds.
The results from the LINPACK portion of the Student Cluster Competition in Seattle have been released. This brief (barely three minute) video reveals all, including a short discussion of the LINPACK rules, the winner, individual team results, and the latest odds.
Knowledgeable bettors who put their virtual money on system configuration and experience will find themselves rewarded. Those who bet on brand names, emotion, or a mindless urge to follow the herd will find themselves a bit poorer today than yesterday. Stay tuned for more updates…
What can you say about the University of Texas Longhorns that they haven’t already said about themselves? They’ve got swagger for sure, and their LINPACK-topping success in 2010 showed that they can back up at least part of it. The Longhorns have brought the most attention-grabbing entry in Student Cluster Competition history with their 2011 deep-fried cluster.
What they’ve done is immerse all four of their servers (11 nodes with 132 Intel Xeon cores) in a vat of mineral oil. This gives them very effective cooling and saves enough energy to allow them to drive anywhere from 5-15% more cores.
The energy savings arise from being able to remove system fans – each of which could draw as much as 5 amps (a theoretical maximum under extremely harsh conditions). The oil circulation and heat dissipation take some juice to be sure, but it’s still less than what it would take to drive the various system and power supply coolers.
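For a rough sense of whether ditching the fans can really fund 5-15% more cores, here’s a back-of-envelope sketch. Every number except the 5-amp fan figure above is an assumption for illustration, not a measured value from the Texas rig.

```python
# Back-of-envelope estimate of the oil-immersion energy dividend.
# All figures except the 5 A fan maximum are illustrative assumptions.
FAN_VOLTS = 12                  # typical DC fan rail voltage (assumption)
fan_watts_max = 5 * FAN_VOLTS   # 60 W theoretical ceiling per fan, per the article
fan_watts_typical = 6           # realistic per-fan draw under load (assumption)
fans_removed = 3 * 11           # say three fans per node across 11 nodes (assumption)

pump_watts = 100                # power for oil circulation/pumping (assumption)
net_savings = fans_removed * fan_watts_typical - pump_watts  # ~98 W

core_watts = 10                 # rough per-core Xeon draw under load (assumption)
extra_cores = net_savings // core_watts
print(f"net savings ~{net_savings} W, roughly {extra_cores} extra cores")
```

With these (admittedly hand-wavy) numbers, nine or ten extra cores on a 132-core base lands right at the low end of the 5-15% range the team claims.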
Check out the video to get a better look at the Texas hardware and the guys who pulled it together. Upcoming vids will show them removing and replacing nodes when their initial cabling proved to be less than optimal.
As the competition progresses, we’ll see if their bold experiment pays off or if it just ends up as an ill-conceived, oily mess. I’m really happy to see a team take a chance on new technology – it’s that kind of spirit that drives the tech industry.
The team from Taiwan’s National Tsing Hua University is on a mission: to become the first university to repeat as Student Cluster Competition champions. While the team this year is almost entirely new, they have the same coach and have been mentored by their predecessors from the 2010 championship team.
They’ve added a new sponsor to the mix this year. Acer returns as their system sponsor, supplying a 72-core Xeon-based cluster, and NVIDIA is jumping on the Taiwan bandwagon with their contribution of six Tesla GPU cards.
Like the other teams driving GPUs, Taiwan’s success is somewhat dependent on how well they’ve adapted the scientific codes to exploit these specialized number-crunching beasts. Or, in the case of non-GPU-friendly apps, how well Taiwan can utilize their traditional cluster hardware to handle the loads.
Take a look at the video to get a feel for Team Taiwan. To me, they’re the definition of quiet confidence and competence. Like last year, they never seem to hurry and never show any signs of frustration. As one observer noted, they’re one of the most well-prepared teams in the competition – something that should aid them in their quest to take home another SCC trophy. (There isn’t a real SCC trophy, but there should be.)
Team Russia, representing the State University of Nizhny Novgorod, is returning to the Student Cluster Competition for the second time. The team is again going with a hybrid approach, mixing Xeon CPUs (84 cores) and as many as 12 NVIDIA Tesla GPUs.
I say ‘as many as’ because they’ll probably end up trimming their configuration to stay under the 26-amp limit. Even with throttling down system components, 84 Xeon cores and a dozen GPUs will suck up a lot of power – enough to push them over the limit.
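To see why a full 84-core, dozen-GPU loadout likely busts the cap, here’s a quick back-of-envelope estimate. The 26-amp limit comes from the competition rules; every wattage figure below is an assumption for illustration, not a measured number from Team Russia’s cluster.

```python
# Rough power-budget check for Team Russia's maximum configuration.
# All wattage figures are illustrative assumptions, not measured values.
VOLTS = 120                       # typical US outlet voltage (assumption)
AMP_LIMIT = 26                    # competition power cap
budget_watts = VOLTS * AMP_LIMIT  # ~3120 W total

tesla_gpu_watts = 225             # rough draw per Tesla card under load (assumption)
xeon_core_watts = 10              # rough per-core Xeon draw under load (assumption)
overhead_watts = 400              # fans, drives, switch, PSU losses (assumption)

draw = 12 * tesla_gpu_watts + 84 * xeon_core_watts + overhead_watts
print(f"budget: {budget_watts} W, estimated draw: {draw} W")
print("over budget" if draw > budget_watts else "within budget")
```

Even with these conservative guesses, the estimate comes in well over the cap, which is why some mix of fewer GPUs and throttled clocks is the likely endgame.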
The team returns with the same coaching staff, the same sponsor (Microsoft), and a roster of both veteran and newbie competitors. While they finished in the middle of the pack last year, they figure that another year of honing their clustering craft and another year of GPU development should pay off in a better finish.
Their first test was LINPACK on Monday, a task that should play to their “GPU-riffic” strengths. The other applications may or may not be a good fit for Team Russia’s hybrid cluster, depending on whether the team was able to find GPU- and Microsoft-ready versions of the code. With an experienced team and solid hardware, the Russians have a solid shot at SCC success.
Purdue is another team that’s participated in the SC Student Cluster Competition (SCC) since its inception. They’re a solid team with a half-n-half mixture of rookies and SCC veterans. This year they’re bringing the typical workmanlike Purdue attitude to the competition – along with a plethora of traditional HPC gear.
The Boilermaker cluster relies on the latest 10-core Intel Xeon CPUs provided by sponsor Intel. They’re running four quad-socket nodes, which gives them a total of 160 cores to devote to the various challenge applications. At 64GB per node, their cluster is mid-range memory-wise, which may put them at a disadvantage vs. a few of their competitors.
In the off season, the Purdue team worked on gaining greater understanding of the scientific disciplines associated with the specific workloads in this year’s competition. This approach could pay off; they may have learned tuning secrets that can be gained only through experience – or through picking the brains of those experienced practitioners.
Another constant with Team Purdue is their signature sledgehammer. I was concerned when I didn’t see it prominently displayed in their booth as they set up their gear. But it was there today while they were running LINPACK, putting my mind at ease. Check out the video to get an up close and personal look at this crop of Boilermakers.