Amazon Web Services has made it possible to create private clouds
using CloudFormation, which automatically creates stacks of resources.
Users can now create and populate an entire Virtual Private Cloud (VPC) using a single CloudFormation template, according to the company.
The VPC service has been around for a couple of
years, and allows users to create an isolated section of Amazon's cloud where they can launch resources in a virtual network they define themselves, including public subnets,
private subnets, and hardware VPN access.
With CloudFormation, users don't have to figure out the order in which AWS services need to be provisioned or how to make the dependencies between
them work, according to Amazon.
Templates are written in JSON, a format that is both easy for humans to read and write and easy for machines to parse and generate, according to Json.org.
To help developers and systems administrators get going, Amazon has created two sample templates
to show how they are constructed. The first template creates a VPC with a single EC2 instance, and the second rolls out a
VPC with public and private subnets, an Elastic Load Balancer, and an EC2 instance, according to a blog post on the new capability.
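To make the idea concrete, here is a minimal sketch of that template-driven approach, written with the boto3 Python SDK (a present-day tool chosen for illustration, not one named in the announcement); the stack name, resource names, and CIDR blocks are invented for the example. Note how a single Ref link between subnet and VPC is all CloudFormation needs to work out the provisioning order.

import json
import boto3  # AWS SDK for Python

# A minimal CloudFormation template: one VPC plus one subnet inside it.
# Resource names and CIDR blocks are made up for this illustration.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16"},
        },
        "DemoSubnet": {
            "Type": "AWS::EC2::Subnet",
            "Properties": {
                # Ref ties the subnet to the VPC; CloudFormation infers
                # from it that the VPC must be created first.
                "VpcId": {"Ref": "DemoVPC"},
                "CidrBlock": "10.0.0.0/24",
            },
        },
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-vpc-stack", TemplateBody=json.dumps(template))
# Deleting the stack later tears down every resource it created:
# cfn.delete_stack(StackName="demo-vpc-stack")

The stack is the unit of management here: one create call provisions everything in dependency order, and one delete call cleans it all up.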
There is no additional charge for using CloudFormation or Virtual Private Cloud. Companies that create a hardware VPN connection to their VPC using a VPN gateway are charged US$0.05 per hour it is provisioned and available (roughly $36 for a month of continuous availability), according to Amazon.

++++++++++++++++++++++++++++++++++++
It's been nearly six years since Amazon.com
(Nasdaq: AMZN) had the brilliant idea of monetizing its oodles and oodles of surplus computing power.
The e-tailer built up a truly epic computing infrastructure to support its core operations,
but much of that hardware had a lot of downtime. While still requiring electric power, man-hours of server maintenance, and
data center cooling, the servers would often sit unused. Why not rent out idle servers by the hour instead?
The idea was an instant hit,
and Amazon's cloud-computing services have come a long way since then. Management routinely holds up the collection of data-mangling
products, collectively known as Amazon Web Services or AWS, as an example of a fast-growing Amazon operation. These days,
Amazon pumps millions of dollars specifically into building out the AWS infrastructure, which now is separate from Amazon's retail operations.

But CEO Jeff Bezos
and his top-level managers are loath to break this business out as a reportable segment, or even discuss sales and margins
in an informal manner. So we're left wondering exactly how big AWS really is, and how important the operation is
to Amazon's bottom line.
So it's up to outsiders, analysts, and market watchers to figure all of this out. That's a crapshoot
at best. Two analyst firms estimated $500 million in AWS revenues in 2010. Another firm agreed, then extended that figure to $750 million in 2011. That estimate
looked timid a few months later, when the run rate for 2011 started to look like a billion-dollar business.
Not to sound like a broken record or anything, but even that impressive figure might be too low. Consider
cloud-computing rival Rackspace (NYSE: RAX), which is seen as a relative upstart in the virtual server field. Rackspace posted
$1 billion of revenue in 2011, and 20% of that came directly from its public cloud services -- a drop-in replacement for much
of Amazon's AWS, if you will. That works out to roughly $200 million of cloud revenue. While I admire Rackspace's offerings,
I would venture to say that Amazon's slice of the market pie is far more than five times larger today; five times $200 million
already lands at that billion-dollar run rate.
How about this big? A recent study by cloud-computing consulting firm DeepField Networks
throws new light on Amazon's enormous cloud-computing footprint. Culling data from a huge traffic analysis effort, DeepField
concluded that one-third of all North American Internet users touch an AWS server somewhere at least once every day. This
traffic accounts for about 1% of all consumer traffic. To put that number into perspective, video-watching giant YouTube pushes
only six times as much data. That's a fat-bandwidth service and one of the most popular sites on the Net.
Making the world a better place, one processor-hour at a time

By using Amazon, or Rackspace, or one of the many rivals popping up all over the IT
landscape these days, you can scale up a 50,000-processor supercomputer in the cloud to run a massive analysis of chemical
data on potential cancer drugs, pay only for the number-crunching time actually used, and shut it all down when you're done.
Try that with an in-house supercomputer. Many upstarts now run their entire businesses in the AWS cloud.
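As a rough sketch of that rent-and-release pattern (again using the boto3 Python SDK for illustration, with a placeholder machine image, instance type, and fleet size standing in for a real 50,000-core job), the whole lifecycle fits in a few calls:

import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2")

# Rent capacity only for the duration of the job. The AMI ID, instance
# type, and count below are placeholders, not values from the article.
resp = ec2.run_instances(
    ImageId="ami-00000000000000000",  # placeholder image
    InstanceType="c5.large",
    MinCount=10,
    MaxCount=10,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
try:
    # ... dispatch and monitor the number-crunching job here (elided) ...
    pass
finally:
    # Shut the fleet down when the job is done; billing stops with it.
    ec2.terminate_instances(InstanceIds=instance_ids)

The finally block is the important part of the pattern: the meter stops the moment the instances terminate, which is exactly the economics an in-house supercomputer can't match.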
Many of Amazon's largest customers are, ironically,
traffic analysis experts themselves. They find it easy to leverage AWS services to conduct otherwise cost-prohibitive research and analysis projects. But DeepField
also found familiar names like Netflix (Nasdaq: NFLX), Pinterest, Instagram, and Zynga (Nasdaq: ZNGA) among the 40 most heavily trafficked AWS customers.
Netflix famously moved the bulk of its IT infrastructure to AWS in 2010 and hasn't looked back since. Having a direct rival in the digital video world manage your most critical business
applications would make many business managers nervous, but Netflix CEO Reed Hastings trusts the separation between the AWS
church and Amazon's digital video state. "It's in [Amazon's] interest to make us successful in the cloud," said a Netflix spokesperson at the time of the announcement.
"That's why we felt comfortable."
Gaming giant Zynga would never have been able to scale up its massive hit, FarmVille, to meet customer demand if
not for the flexible on-demand structure of Amazon's AWS. Zynga has since moved on to leaning more heavily on its own data
centers, but the company wouldn't be the mid-cap public company it is today without a billion-dollar annual revenue stream
to support its growing hardware costs.
Cloud computing enables a whole new kind of innovation, and I'm not surprised to see AWS making a billion dollars a year. Amazon
would do investors a solid if management would start talking about exactly how much this operation contributes to the top
and bottom lines.
Many entrepreneurs today have their heads in the clouds. They’re
either outsourcing most of their network infrastructure to a provider such as Amazon Web Services or are building out such
infrastructures to capitalize on the incredible momentum around cloud computing. I have no doubt that this is The Next Big
Thing in computing, but sometimes I get a little tired of the noise.
Cloud computing could become as ubiquitous as personal computing, networked campuses or other big innovations in the way we work, but it’s not there yet.
Because as important as cloud computing is for startups and random one-off projects at big
companies, it still has a long way to go before it can prove its chops. So let’s turn down the noise level and add a
dose of reality.
Here are 10 reasons enterprises aren’t ready to trust the cloud. Startups and SMBs should
pay attention to this as well.
1. It’s not secure. We live in an age in which 41 percent of companies employ someone to read their workers’ email. Some companies and industries have to maintain strict watch on their data at all times, either because they’re regulated by laws such as HIPAA or the Gramm-Leach-Bliley Act, or because they’re super paranoid, which means sending that data outside company firewalls isn’t going to happen.
2. It can’t be logged. Tied closely to fears of security are fears that putting certain data in the cloud makes it hard to log for compliance purposes. While there are currently some technical ways around this, and there are undoubtedly startups waiting to launch products that make it possible to log “conversations” between virtualized servers sitting in the cloud, it’s still early days.
3. It’s not platform agnostic. Most clouds force participants to rely on a single platform or host only one type of product. Amazon Web Services is built on the LAMP stack, Google App Engine locks users into proprietary formats, and Windows lovers out there have GoGrid, the cloud computing offering from the ServePath guys. If you need to support multiple platforms, as most enterprises do, then you’re looking at multiple clouds. That can be a nightmare to manage.
4. Reliability is still an issue. Earlier this year Amazon’s S3 service went down, and while the entire system may not crash, Mosso experiences “rolling brownouts” of some services that can affect users. Even inside an enterprise, data centers or servers go down, but generally the communication around such outages is better, and in many cases fail-over options exist. Amazon is taking steps toward providing (pricey) information and support, but it’s far more comforting to have a company-paid IT guy on whom you can rely.

5. Data portability isn’t seamless. As all-encompassing as it may seem, the so-called “cloud” is in fact made up of several clouds, and getting your data from one to another isn’t as easy as IT managers would like. This ties to platform issues, which can leave data in a format that few or no other clouds accept, and also reflects the bandwidth costs associated with moving data from one cloud to another.
6. It’s not environmentally sustainable. As a recent article in The Economist pointed out, the emergence of cloud computing isn’t as ethereal as it might seem. The computers are still sucking down megawatts of power at an ever-increasing rate, and not all clouds are built to the best energy-efficiency standards. Moving data center operations to the cloud and off corporate balance sheets is kind of like chucking your garbage into a landfill rather than your yard: the problem is still there, but you no longer have to look at it. A company still pays for the poor energy efficiency, but if we assume that corporations are going to try to be more accountable with regard to their environmental impact, controlling IT’s energy efficiency is important.
7. Cloud computing still has to exist on physical servers. As nebulous as cloud computing seems, the data still resides on servers around the world, and the physical location of those servers is important under many nations’ laws. For example, Canada is concerned about its public-sector projects being hosted on U.S.-based servers, because under the U.S. Patriot Act that data could be accessed by the U.S. government.
8. The need for speed still reigns at some firms. Putting data in the cloud means accepting the latency inherent in transmitting data across the country, and the wait as corporate users ping the cloud and wait for a response. Ways around this problem exist with offline syncing, such as what Microsoft Live Mesh offers, but it’s still a roadblock to wider adoption.

9. Big companies already have an internal cloud. Many big firms have internal IT shops that act as a cloud to the multiple divisions
under the corporate umbrella. Not only do these internal shops have the benefit of being within company firewalls, but they
generally work hard — from a cost perspective — to stay competitive with outside cloud resources, making the case
for sending computing to the cloud weak.
10. Bureaucracy will cause the transition to take longer than building replacement housing in New Orleans.
Big companies are conservative, and transitions in computing can take years to implement. A good example is the challenge HP faced when trying to consolidate its data center operations.
Employees were using over 6,000 applications and many resisted streamlining of any sort. Plus, internal
IT managers may fight the outsourcing of their livelihoods to the cloud, using the reasons listed above.
Cloud computing will be big, both in and outside of the enterprise, but
being aware of the challenges will help technology providers think of ways around the problems, and let cloud providers know
what they’re up against.