Until recently, academic and government
research centers were the exclusive domains of high-performance computing (HPC) because of the supercomputers' ability to
perform sophisticated mathematical modeling. In 1976, the first Cray-1 supercomputer was installed at Los Alamos National
Laboratory. Designed by Seymour Cray, whom many regard as the "father of supercomputing," the Cray-1 clocked a speed
of 160 megaflops, or 160 million floating-point operations per second. Last year, Cray Inc. installed what was at the time the world's fastest supercomputer at Oak Ridge National Lab; named "Jaguar," the XT5 system delivers a peak performance of 1.8 petaflops, or 1,800,000,000,000,000 floating-point operations per second.
This surpassed the IBM "Roadrunner"
system, installed in 2008, which was the first computer to break the petaflop barrier.
Fast-forward to March 2010, when Cray introduced the CX1000, which HPCwire.com
described as an "entry-level" or "mid-level" machine designed for "the typical data center environment."
Not exactly "typical"
by today's standards, Cray's and other manufacturers' cluster-based supercomputers are likely to become a mainstream solution
for data centers that require high-availability clusters for 24x7 transaction processing. HPC will also become the required
computing platform for Internet content providers if the forecast of a 3D Internet within five to 10 years is accurate. In
addition, the advent of "P4" — "predictive, preventive, personalized, participatory" medicine —
will require HPC analysis of each individual's genome, opening the way for a sea change in medicine, how doctors treat patients,
and how medical technologies are applied.
Compared with today's typical data center, an HPC facility will require dramatic increases in power and cooling capacity.
These massively parallel networks of specialized servers have a load density of 700 to 1,650 watts per square foot, while
most current data centers have a load density of 100 to 225 watts per square foot.
For a company that has decided to enter the 3D Internet world, let's consider a
hypothetical entry-level HPC: a 700-teraflop (700 trillion floating-point operations per second) computer cluster using the Cray XT6 platform.
This HPC will fill two rows of 20 cabinets
at 45 kilowatts per cabinet. These cabinets are not typical racks; they are custom enclosures that are 22.5 inches wide and
56.75 inches deep. At 35 square feet per cabinet (including cold and hot aisles, service space, air conditioner space and
PDUs), this will total only 1,400 square feet of space.
If the data center has a capacity of 100 watts per square
foot, the HPC would consume the power available for 18,000 square feet. There are two potential solutions: Either install
additional capacity for this system, or set aside a lot of data center space with nothing in it, and divert the power from
that area to the HPC.
The system will require 1,800 kilowatts and will reject 6.14 million BTUs per hour or 512
tons of heat to water. The system is also heavier than today's typical rack, which is usually only partially loaded with equipment; each fully configured HPC cabinet weighs 2,000 pounds.
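To make the arithmetic concrete, here is a minimal Python sketch of the sizing math above. The inputs (40 cabinets at 45 kilowatts, 35 square feet per cabinet, a 100-watt-per-square-foot data center) come straight from the hypothetical example; the conversion factors (3,412 BTU per hour per kilowatt, 12,000 BTU per hour per ton of cooling) are standard values.

```python
# Sizing sketch for the hypothetical 700-teraflop cluster described above.
CABINETS = 2 * 20                  # two rows of 20 cabinets
KW_PER_CABINET = 45                # kilowatts per cabinet
SQFT_PER_CABINET = 35              # includes aisles, service space, cooling and PDUs
DC_WATTS_PER_SQFT = 100            # assumed data center power density

BTU_PER_HR_PER_KW = 3412           # standard conversion: 1 kW = 3,412 BTU/hr
BTU_PER_HR_PER_TON = 12_000        # standard conversion: 1 ton of cooling = 12,000 BTU/hr

total_kw = CABINETS * KW_PER_CABINET                      # 1,800 kW
floor_sqft = CABINETS * SQFT_PER_CABINET                  # 1,400 sq ft
displaced_sqft = total_kw * 1_000 / DC_WATTS_PER_SQFT     # 18,000 sq ft of power budget

heat_btu_per_hr = total_kw * BTU_PER_HR_PER_KW            # ~6.14 million BTU/hr
cooling_tons = heat_btu_per_hr / BTU_PER_HR_PER_TON       # ~512 tons

print(f"{total_kw:,} kW in {floor_sqft:,} sq ft of floor space")
print(f"Equivalent to the power budget of {displaced_sqft:,.0f} sq ft at 100 W/sq ft")
print(f"Heat rejection: {heat_btu_per_hr / 1e6:.2f} million BTU/hr, about {cooling_tons:.0f} tons")
```

Running the numbers reproduces the figures quoted above: 1,800 kilowatts in 1,400 square feet, the power budget of 18,000 square feet of conventional data center, and roughly 6.14 million BTU per hour, or 512 tons, of heat.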
"Geek Speak, is not Greek"
High-performance computing (HPC)
is the use of parallel processing for running advanced application programs efficiently, reliably and quickly.
The term applies especially to
systems that function above a teraflop, or 10^12 floating-point operations per second. The term HPC is occasionally used as a synonym for supercomputing, although technically a supercomputer is a system that performs at or near the currently highest operational rate for computers. Some supercomputers work at more than a petaflop, or 10^15 floating-point operations per second. The most common users of HPC systems are scientific researchers, engineers and academic institutions. Some government agencies,
particularly the military, also rely on HPC for complex applications. High-performance systems often use custom-made components
in addition to so-called commodity components.
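To put those thresholds in code, the short sketch below simply encodes the teraflop (10^12) and petaflop (10^15) boundaries from the definition above and labels a system by its flops rating; the two sample ratings are the Jaguar and 700-teraflop figures quoted earlier, and the labels are illustrative rather than formal categories.

```python
# Flops thresholds from the definitions above (flops = floating-point operations per second).
TERAFLOP = 10**12
PETAFLOP = 10**15

def classify(flops: float) -> str:
    """Illustrative label for a system based on its flops rating."""
    if flops >= PETAFLOP:
        return "petascale supercomputer"
    if flops >= TERAFLOP:
        return "high-performance computing system"
    return "below the HPC threshold"

print(classify(1.8 * PETAFLOP))   # Jaguar-class system
print(classify(700 * TERAFLOP))   # the 700-teraflop example
```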
As demand for processing power and speed grows, HPC will likely
interest businesses of all sizes, particularly for transaction
processing and data warehouses. The occasional techno-fiend might use an HPC system to satisfy an exceptional desire for advanced technology.
Investing in high-performance and
technical computing is a strategic and competitive decision. The computational power and speed of high-performance computing (HPC) can help you deliver research results, improve time to science, propel new discoveries, and develop game-changing innovations.
When you need more capacity
to scale your environment and accommodate ongoing growth, the portfolio of solutions from Dell can meet your total compute requirements. If you need pure performance, Dell can design solutions geared to absolute performance. If capacity
and scale are top priorities in your environment, Dell can design solutions that can carry you through the next set of grand
and exascale challenges. And if you just need systems sized to help your organization get ahead of your competitor, we have
production-ready systems for all sizes, missions and charters.
Learn more about HPC and how Dell can provide the computational power and productivity you need.
With the fastest supercomputers
on the planet sporting multi-megawatt appetites, green HPC has become all the rage. The IBM Blue Gene/Q machine is currently
number one in energy-efficient flops, but a new FPGA-like technology brought to market by semiconductor startup eASIC is providing
an even greener computing solution. And one HPC project in Japan, known as GRAPE, is using the chips to power its newest supercomputer.
GRAPE, which stands for Gravity Pipe, is a Japanese computing project that is focused on astrophysical simulation. (More specifically,
the application uses Newtonian physics to compute the interaction of particles in N-body systems).
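For readers unfamiliar with the workload, here is a minimal sketch of the kind of direct-summation N-body gravity calculation this describes: every particle feels the Newtonian pull of every other particle, an O(N²) loop. It is written in Python with NumPy purely for illustration and is not code from the GRAPE project.

```python
import numpy as np

def gravitational_accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct O(N^2) Newtonian accelerations on N bodies.

    pos  : (N, 3) array of particle positions
    mass : (N,) array of particle masses
    eps  : softening length that prevents singularities at tiny separations
    """
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        r = pos - pos[i]                          # vectors from body i to every body
        dist2 = np.sum(r * r, axis=1) + eps**2    # softened squared distances
        dist2[i] = np.inf                         # exclude self-interaction
        acc[i] = G * np.sum((mass / dist2**1.5)[:, None] * r, axis=0)
    return acc

# Example: accelerations for 1,000 randomly placed unit-mass particles.
rng = np.random.default_rng(0)
positions = rng.standard_normal((1_000, 3))
accelerations = gravitational_accelerations(positions, np.ones(1_000))
```

Each GRAPE generation essentially hard-wired the inner loop of a calculation like this into silicon, which is why special-purpose chips could outrun general-purpose processors on this one problem.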
The project, which began in 1989, has gone through eight generations of hardware, all of which were built as special-purpose machines. Each of the GRAPE machines was powered by a custom-built chip, specifically designed to optimize the astrophysical calculations
that form the basis of the simulation work. The special-purpose processors were hooked up as external accelerators, using
more conventional CPU-based host systems, in the form of workstations or servers, to drive the application.
The first-generation machine, GRAPE-1, managed just 240 single
precision megaflops in 1989. The following year, the team built a double precision processor, which culminated in the 40-megaflop GRAPE-2. In 1998, they fielded GRAPE-4, their first teraflop system. The most recent system, GRAPE-DR, was designed to be
a petascale machine, although its TOP500 entry showed up in 2009 as an 84.5 teraflop cluster.
Even though the GRAPE team was able to squeeze a lot more
performance out of specially built hardware than they would have using general-purpose HPC machinery, it's an expensive proposition.
Each GRAPE iteration was based on a different ASIC design, necessitating the costly and time-consuming process of chip design,
verification, and production. And as transistor geometries shrunk, development costs soared.
As the GRAPE team at Hitotsubashi University and the Tokyo
Institute of Technology began planning the next generation, they decided that chip R&D could take up no more than a quarter
of the system's cost. But given the escalating expense of processor development, they would overshoot that by a wide margin. In
2010, they estimated it would take on the order of $10 million to develop a new custom ASIC on 45nm technology. So when it
came time for GRAPE-8, the engineers were looking for alternatives.
The natural candidates were GPUs and FPGAs, which offer a lot of computational
horsepower in an energy-efficient package. Each had its advantages: FPGAs in customization capability, GPUs in raw computing
power. Ultimately though, they opted for a technology developed by eASIC, a fabless semiconductor company that offered a special
kind of purpose-built ASIC, based on an FPGA workflow.
The technology had little grounding in high performance computing, being used mostly in embedded platforms, like
wireless infrastructure and enterprise storage hardware. But the GRAPE designers were impressed by the efficiency of the technology.
With an eASIC chip, they could get the same computational power as an FPGA for a tenth of the size and at about a third of the cost. And although the latest GPUs were slightly more powerful flop-wise than what eASIC could deliver, power
consumption was an order of magnitude higher.
HOW AND WHY
YOUTUBE IS THE FUTURE OF WEB-DELIVERED TV CONTENT
My pre-teen son recently tried to call one of our older relatives. He dialed the number and quickly hung up with a confused look on his face. I asked him what was wrong and he replied, “I don’t know. There’s something wrong with their phone, it kept beeping.” I called the number and was amused to hear a landline busy signal, something my cell-phone-centric pre-teen had never encountered.
My son is similarly unacquainted with cable TV. Other than the occasional NBA game, he consumes his video content
via our iPad and Xbox. Most of his online viewing is spent on YouTube. He is not alone.
Mark Suster, fellow venture capitalist and serial entrepreneur, has written extensively about YouTube’s evolution from
dogs-on-skateboards to its current status as an entertainment medium rivaling cable television networks. Mark provides an
excellent primer regarding the future of Internet TV.
YouTube has over 800 million monthly unique visitors who
consume over 4 billion videos EACH DAY. Its evolution has resulted in a new class of entertainment entrepreneur, the creators
of professional YouTube content, affectionately known as YouTubers.
Feeding The New Networks
Running an immensely profitable business has many benefits, including the ability
to jumpstart an ecosystem which directly drives revenue to one of your core offerings. During the latter half of 2011, Google
awarded a number of leading YouTubers and nascent Internet TV networks grants of $1 million to $2 million. Although estimates
of Google’s total investment vary, the industry insiders I spoke with believe it exceeded $100 million. Earlier this
month, Google announced that it will spend an additional $200 million to promote YouTube content.
Despite the snarky rumors that Google’s content development
program was poorly managed at its outset (e.g., some of the money was spent by YouTubers to purchase homes and not create
videos), the majority of the funds were properly deployed by the emerging community of professional YouTubers. In fact, all
of the most prominent Internet TV networks were recipients of such grants, including Maker Studios, Big Frame and Machinima.
Before the creation of YouTube networks,
a handful of leading YouTubers were generating over $100,000 in revenue per year, simply by running run-of-site YouTube ads.
Networks have significantly increased YouTubers’ advertising revenue by negotiating sponsorships and large ad purchases
with major brands.
Rather than simply acting as old-world talent agencies, Internet TV networks offer their talent a number of differentiating advantages
which are difficult for YouTubers to secure on their own, including:
Critical Mass – Even though some of the most
successful YouTubers routinely generate several million views per video, large advertisers are challenged to spend efficiently
within the medium, because even millions of views result in a relatively small advertising spend.