Basics for Elastic Cloud Services
To use Amazon EC2, customers first create an Amazon Machine Image (AMI) that contains
their operating system (OS), software, data, settings, applications, libraries, etc. This AMI is encrypted and uploaded to
the Amazon S3 service, which allows for reliable and secure access to the data. The AMI is then registered with Amazon EC2,
which associates a unique identifier to it. Finally, the AMI ID and the Amazon EC2 service are used to run, monitor or terminate
instances. Clients are charged for the running time and the resources consumed while their instances run.
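The pay-per-use billing described above can be sketched in a few lines. The hourly rates below are hypothetical placeholders for illustration, not actual AWS prices:

```python
# Illustrative EC2-style billing: clients pay per instance-hour.
# These rates are hypothetical placeholders, NOT real AWS prices.
HOURLY_RATES = {
    "small": 0.085,
    "large": 0.34,
    "extra-large": 0.68,
}

def estimate_charge(instance_type: str, hours_running: float) -> float:
    """Return the usage charge in dollars, rounded to cents."""
    return round(HOURLY_RATES[instance_type] * hours_running, 2)

# A small instance left running for a 30-day month (720 hours):
print(estimate_charge("small", 720))  # 0.085 * 720 = 61.2
```

The point of the model is that cost scales with usage: an instance that runs for an hour costs a fraction of one that runs all month.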
Amazon EC2 offers a 99.95% uptime/availability guarantee, which translates into approximately 4.4 hours of non-scheduled downtime per year. Unlike other comparable services (for instance,
Rackspace), EC2 does not have a defined time-to-resolve guarantee, nor does
AWS offer credits for any missed deadlines.
Furthermore, AWS places the responsibility for reporting an SLA (service level agreement) violation on the client, which means that it is
up to consumers to prove that a service outage took place and to request credit.
A number of observers have suggested that the company develop an automated credit function should outages occur.
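The downtime figure can be checked directly: at 99.95% availability, the allowed unscheduled downtime works out to roughly 4.4 hours per year.

```python
# Convert an availability guarantee into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760

def max_downtime_hours(availability_pct: float) -> float:
    """Hours per year that fall outside the availability guarantee."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100.0)

print(round(max_downtime_hours(99.95), 2))  # 8760 * 0.0005 = 4.38
```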
Amazon EC2 allows clients to access computing resources, while paying only for used capacity.
There are a number of different instance types, which are geared towards users with different computing needs. “Instances”
refer to running systems based on an AMI. Instances based on the same AMI execute identically. When an instance
is terminated, or when it fails, any information stored on the instance itself is lost.
There are six available families of instances, which are discussed below:
Standard – These instances are recommended for most general applications. There are three
instances in the Standard family: small (1.7 GB memory, 160 GB storage), large (7.5 GB memory, 850 GB storage) and extra-large
(15 GB memory, 1690 GB storage) instances.
Micro – These instances are designed for lower-throughput applications and websites that periodically consume significant
compute cycles. A Micro instance offers 613 MB of memory and EBS storage only.
High-Memory – This family of instances is recommended for high throughput applications, such
as database and memory caching. There are three instances in the High-Memory family: extra-large (17.1 GB memory, 420 GB storage),
double extra-large (34.2 GB memory, 850 GB storage) and quadruple extra-large (68.4 GB memory, 1690 GB storage).
High-CPU – Instances in this family offer more CPU resources than memory. They are designed
for compute-intensive purposes. There are two instances in this family: a medium instance (1.7 GB memory, 350 GB storage)
and an extra-large instance (7 GB memory, 1690 GB storage).
Cluster Compute – This family provides high CPU resources and superior network performance,
to meet the needs of High Performance Computing
(HPC) applications and other network-intensive applications. There is only one instance offered in this family: the Cluster Compute
quadruple extra-large instance (23 GB memory, 1690 GB storage).
Cluster GPU – Instances in this family offer general-purpose
graphics processing units (GPUs), marked by high CPU resources and optimized network performance. There is only one instance
offered in this family: the Cluster GPU quadruple extra-large instance (22 GB memory, 1690 GB storage).
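The six families above can be collected into a small lookup table. The figures are those listed in the text (memory and storage in GB; the Micro instance has 613 MB of memory and EBS storage only); the table layout itself is just an illustration:

```python
# EC2 instance families as described above: size -> (memory_gb, storage_gb).
# A storage value of None marks the EBS-only Micro instance.
INSTANCE_FAMILIES = {
    "Standard": {
        "small": (1.7, 160), "large": (7.5, 850), "extra-large": (15, 1690),
    },
    "Micro": {"micro": (0.613, None)},  # 613 MB memory, EBS storage only
    "High-Memory": {
        "extra-large": (17.1, 420),
        "double extra-large": (34.2, 850),
        "quadruple extra-large": (68.4, 1690),
    },
    "High-CPU": {"medium": (1.7, 350), "extra-large": (7, 1690)},
    "Cluster Compute": {"quadruple extra-large": (23, 1690)},
    "Cluster GPU": {"quadruple extra-large": (22, 1690)},
}

def specs(family: str, size: str) -> tuple:
    """Return (memory_gb, storage_gb) for a family/size pair."""
    return INSTANCE_FAMILIES[family][size]

print(specs("High-Memory", "double extra-large"))  # (34.2, 850)
```

A table like this makes it easy to pick the smallest instance that satisfies a given memory requirement.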
AWS not only claims to adhere to security best practices, but it also requires its clients
to follow best practices and use its numerous security features. The AWS infrastructure has attained ISO 27001 certification and has passed a number of SAS70 Type II audits.
EC2 in particular provides security at numerous levels to ensure that data cannot be intercepted by unauthorized systems
or users. This includes security at all of the following levels:
The host operating system (OS) – Administrators are required to use multi-factor authentication to
access administration hosts. All access is logged and audited.
The virtual instance OS/guest OS – EC2 clients are given full access and administrative control
over accounts, services and applications. AWS does not retain any access rights, nor can it log into the guest OS.
The firewall – This is a mandatory inbound firewall that denies all traffic by default; customers
must explicitly open ports to allow traffic.
Signed API calls – The customer’s Amazon Secret Access Key is required for calls to:
launch/terminate instances; change firewall parameters; and perform other functions.
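Signed API calls work by computing a keyed hash over the request with the Secret Access Key, so only the key holder can issue valid commands. The sketch below is a simplified illustration in the spirit of AWS's legacy request signing; the real scheme builds a stricter canonical string from the HTTP method, host, path and URL-encoded parameters:

```python
import base64
import hashlib
import hmac

def sign_request(secret_access_key: str, params: dict) -> str:
    """Sign request parameters with the Secret Access Key.

    Simplified sketch: sort the parameters into a deterministic
    canonical string, then HMAC-SHA256 it with the secret key.
    """
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))
    digest = hmac.new(secret_access_key.encode("utf-8"),
                      canonical.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# The service recomputes the signature with its copy of the key and
# rejects the call if the two values do not match.
sig = sign_request("SECRET", {"Action": "RunInstances", "ImageId": "ami-123"})
print(sig)
```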
This article explores the Amazon Elastic Compute Cloud (EC2), Amazon’s offering for scalable compute resources
in the cloud. The article briefly describes the service and functionality.
EC2 has different sizes of instances, which are running systems based on Amazon Machine Images (AMIs). Instances
are grouped into six families: Standard; Micro; High-Memory; High-CPU; Cluster Compute; and Cluster GPU. Finally, the article
describes the security features and best practices of Amazon Web Services (AWS).
Will the Elastic Cloud Conquer
the Smart Grid Data Nightmare?
Instant access to meter data management as a cloud-based software service could speed metering deployment, but security
concerns may slow widespread adoption.
In recent weeks, IBM and Oracle have touted the results of their
meter data management scalability tests. At stake are bragging rights for processing billions of meter
readings per hour. But these same IT vendors have the benefit of dedicated experts working with state-of-the-art computing
equipment in lab environments that would make the average card-carrying utility system administrator salivate -- and that’s
probably an understatement.
What does a high-end benchmark test environment look like? Multi-core servers with large storage arrays, highly tuned
cache management, scads of I/O ports, carefully tuned databases and such.
Scalability is definitely within reach, but at what price?
Let’s face it -- a lot of money and expertise is needed to build a truly scalable high-end enterprise data management
system. While it’s fun to talk about Ferraris, not everyone can afford one. But it sure would be nice to rent one -- whenever you
want to go really fast.
Welcome to the world of the elastic cloud. The business model is mind-numbingly simple. It originated back in the
early mainframe days when hardware was too expensive for a company to own outright. With time-sharing, companies rented big
iron instead of buying it. They paid for the size of their computing job and the amount of time it ran. This enabled many
organizations to share the cost of computing infrastructure. Call it communal computing. The elastic cloud business model
is similar to time-sharing, but with a twist. An elastic cloud enables computing resources to be created on demand, as needed.
Further, computing resources can be allocated dynamically to accommodate varying performance and throughput requirements.
The possibilities for smart grid are intriguing. One example: cleaning smart meter data to create billing determinants, a process known as VEE
(validation, estimating and editing). This is an example where a lot of computing horsepower is desired for a short amount
of time. For instance, a utility with two million smart meters collecting 15-minute interval data, plus three register reads
per meter will have 102 million data records to deal with every business day. Utilities using an elastic cloud service can
host an instance of their MDM with a service provider on a dedicated basis, or use it for specific tasks such as validating
and estimating meter readings. If the job is taking too long, more compute resources can be requisitioned on demand.
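The elastic sizing logic in this example can be sketched as a back-of-the-envelope calculation; the per-worker throughput used here is a hypothetical figure, not a benchmark result:

```python
import math

def workers_needed(records: int, records_per_worker_hour: int,
                   window_hours: float) -> int:
    """Instances required to finish a VEE run within the target window."""
    return math.ceil(records / (records_per_worker_hour * window_hours))

# 102 million records, an assumed 2 million records per worker per hour,
# and a 4-hour processing window:
print(workers_needed(102_000_000, 2_000_000, 4))  # ceil(12.75) = 13
```

This is exactly where the elastic model pays off: if the window shrinks or the record count grows, the utility requisitions more instances for the run and releases them afterward.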
But how close is
the industry to actually realizing the promise of cloud-based meter data management? It’s not for want of cloud-based
MDMs. EMeter kicked off the year by announcing a cloud
software partnership with Verizon. And just
last week, Ecologic Analytics announced a new generation of its meter data management software, including
capabilities to run in hosted environments.
The cloud movement is not limited to MDM. For instance, Aclara, which controls
15 of the top 25 consumer engagement utility
websites, has been providing its consumer engagement
portal as a cloud service that has “been in place for a long time before the ‘cloud’ was the ‘cloud,’”
according to Paul Lekan, VP of Marketing Communications for Aclara. There are also the issues of security and application
integration to consider. Lekan cites a growing legion of utility applications that can bring AMI data about system health,
outage, voltage and other ancillary data sources to existing utility applications. “Having the data validated and available
locally and tightly integrated … shifts the dialogue to resident servers, which in most cases are hardened with redundant …”
There is also the issue of grid cybersecurity to consider. As attractive as an elastic cloud offering
may appear, enough concern about data privacy and security lingers to cast a dark cloud (pun intended) of fear, uncertainty
and doubt over the prospect of outsourcing critical AMI data to a third-party data center.
Over the short term, cloud offerings,
coupled with MDM 'quick start' kits offered by Ecologic Analytics, eMeter and others, can help utilities just getting started
with AMI to get up and running quickly with pilot projects and even small-scale production rollouts. Whether the elastic cloud
will catch on as a way to run core utility systems remains uncertain.
What is certain, however, is that the MDM
scalability wars are reaching a fever pitch -- a key indicator that we are moving into MDM 2.0, which will be marked by large-scale
deployments and the integration of AMI data
with legacy utility applications.