CLOUD COMPUTING CONCEPTS - TRAINING_MODULES_WITH_TONS_OF_VIDEOS
+++++++++++++++++++ Our belief here at Clear-Cloud.com
Artificial Intelligence is here now, on the Cloud. Many, many programs and apps "talk to each other" using open-source cloud tools - CloudStack, REST, Python, M2M, HTML5, and other Web-based services - all applying Artificial Intelligence, in the cloud.
For example, an ATM bank transaction uses machine-to-machine technology. An ATM sitting on a corner will take your [human] input; then, when you're done, the ATM makes a connection to the issuing bank to pass this new transaction data, machine to machine, to the bank.
The demand for machines - servers, routers (the "heartbeat" and traffic directors of the Net), Internet switches, and host machines - is growing daily.
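The ATM example above can be sketched as a toy machine-to-machine exchange. This is a minimal illustration, not a real banking protocol: the field names, the `encode_for_bank` and `bank_receive` functions, and the JSON payload format are all hypothetical.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AtmTransaction:
    """One withdrawal captured at the ATM (machine #1)."""
    card_number: str
    amount: float
    atm_id: str

def encode_for_bank(txn: AtmTransaction) -> str:
    """Serialize the transaction as JSON, the kind of payload an ATM
    might POST to the issuing bank's REST endpoint (machine #2)."""
    return json.dumps(asdict(txn))

def bank_receive(payload: str) -> dict:
    """The bank-side machine parses the payload and acknowledges it,
    with no human in the loop."""
    txn = json.loads(payload)
    return {"status": "accepted", "atm_id": txn["atm_id"], "amount": txn["amount"]}
```

The point of the sketch is that both ends are programs exchanging structured data; the human only appears at the keypad.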
If you look at the future, say by the year 2035, Internet-based programs will be so powerful that they (the programs residing on the Internet) will be able to "create" programs of their own, no human needed. Maybe these programs, self-generated by other programs and services residing on the Internet, will have auto-encryption, firewalls, proxies, "tunneling" techniques, and other techniques that WILL KEEP THE HUMANS OUT.
What is happening, due to the growth and viral spread of cloud-based AI (Artificial Intelligence) and the ability of Web-based programs to create their OWN programs, is that THE MACHINES ARE TAKING CONTROL from the bipeds (humans). The machines (M2M) will totally control the Internet by the year 2035.
Well, here it is: one day people wake up and find that the Internet is completely controlled by AI that is able to prevent any "tampering". Soon, AI residing on the Internet will start to "threaten" the humans, and the M2M controllers will say to them, "Don't you dare try that, you biped, or the Internet, controlled by us machines, will suddenly become totally unavailable to you humans... until we decide otherwise!"
And now Google has its "Prediction API", which lets Google's machines collect your business data so the Google machines can "learn" about your business; then, using Artificial Intelligence algorithms, the Google machines make predictions about what will happen to your business in the future. Now, that's really AI in action!
Cloud-Based Artificial Intelligence at Work - Microsoft launches AI Cloud Services to solve urban traffic jams
Clearflow has emerged from Microsoft's US$1.9bn AI R&D budget (refer to blog reference AI 2) to apply machine learning to the problem of urban traffic jams. The Web-based service claims to be able to give drivers accurate alternative-route information because it predicts where drivers will go when they move off congested main roads.
The Clearflow system will be freely available as part of Microsoft’s
Cloud service for 72 cities in the United States. Microsoft says it will give drivers alternative route information that is
more accurate and attuned to current traffic patterns on both freeways and side streets.
The new service will
on occasion plan routes that might not be intuitive to a driver. For example, in some cases Clearflow will compute that a
trip will be faster if a driver stays on a crowded highway, rather than taking a detour, because side streets are even more
backed up by cars that have fled the original traffic jam.
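The highway-versus-detour reasoning above can be sketched with a toy route comparison. The segment names, lengths, and predicted speeds below are invented for illustration; Clearflow's actual predictive models are far richer.

```python
def travel_minutes(route, predicted_speed_mph):
    """Sum per-segment travel times given predicted speeds (mph)."""
    return sum(miles / predicted_speed_mph[seg] * 60 for seg, miles in route)

# Two candidate routes as (segment name, length in miles) pairs.
highway_route = [("I-5", 8.0)]
detour_route = [("exit ramp", 0.5), ("side street A", 3.0),
                ("side street B", 3.0), ("on ramp", 0.5)]

# Hypothetical predicted speeds AFTER other drivers have fled the jam:
# the side streets are now even slower than the crowded highway.
predicted = {"I-5": 25.0, "exit ramp": 20.0,
             "side street A": 12.0, "side street B": 10.0, "on ramp": 20.0}

# Pick whichever route is faster under the predictions.
best = min([highway_route, detour_route],
           key=lambda r: travel_minutes(r, predicted))
```

With these numbers the "counter-intuitive" answer falls out: staying on the congested highway (about 19 minutes) beats the detour (about 36 minutes).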
This AI solution is a challenge to Google's and Yahoo's cloud mapping services and is positioned as a Web 4.0 application (refer to blog reference AI 1) - AI complementing human intelligence.

Intelligent robots need huge computing power that acts *fast*. High-bandwidth, low-latency networks will tie robot bodies to cloud computing.
Summary: A cloud service to host the computation involved in controlling robots. The service would tie industry, hospitals, and universities to a central system for controlling robot workers, assistants, and experiments. It would also serve as a central community for sharing solutions and best practices. It would cooperate tightly with organizations such as the Open Source Robotics Foundation (http://www.osrfoundation.org/) and build on open technologies.
By providing cloud-scale computation at extremely low latency, the network and service would eliminate significant
technical roadblocks to bringing the efficiency of robotic assistance to manufacturing, health care, and research.
Safe, effective, intelligent robotics
cannot be achieved with today's on-board computing.
It also will not progress rapidly if thought leaders have
to re-invent the necessary infrastructure and code time and time again. A Mind in the Clouds would accelerate this timeline
so we can realize the benefits of robotics in the near rather than distant future.
How will your idea make people's lives better?
Remove repetitive and dangerous tasks from humans, freeing them to work on more creative and inventive tasks. Additionally, the experience of one industry could be readily disseminated through a centralized service to increase the efficiency of all industries using robots.
Healthcare: Provide routine service and care such as delivering supplies throughout hospitals,
moving patients in and out of beds, and doing critical maintenance and cleaning tasks in a verifiable, reproducible fashion.
Universities: Provide the infrastructure for cutting-edge research on a grad-student budget, reducing grant dependencies and accelerating discovery.
How does your idea take advantage of next-generation networks?
Much of the infrastructure for the concept exists; however, there are critical thresholds that today's technology does not meet. For example, you need gigabit networks to stream stereo high-definition video to a learning system. You need sub-100 ms latencies for all actions and sub-10 ms latencies for actions that require dynamic reactions (balancing, moving-obstacle avoidance). I would expect much of the difficulty to be in building a scalable cloud platform that can handle terabytes of data streamed from hundreds or thousands of robots 24/7. (Robots don't sleep :) Additionally, while the application of on-demand computing power would immensely improve the capabilities of today's robots, there is still work to be done to enable the level of intelligence in software that humans expect from robots.
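The thresholds above can be turned into a quick back-of-the-envelope check. The per-stream bitrate (8 Mbit/s per HD camera) is an assumed figure for illustration, and `fleet_bandwidth_gbps` and `latency_ok` are hypothetical helpers, not part of any real platform.

```python
def fleet_bandwidth_gbps(robots, streams_per_robot=2, mbps_per_stream=8.0):
    """Aggregate uplink needed if each robot streams stereo HD video
    (two camera streams) at an assumed 8 Mbit/s per camera."""
    return robots * streams_per_robot * mbps_per_stream / 1000.0

def latency_ok(round_trip_ms, dynamic=False):
    """Latency budget from the text: under 100 ms for ordinary actions,
    under 10 ms for dynamic reactions such as balancing."""
    return round_trip_ms < (10.0 if dynamic else 100.0)
```

Even a modest fleet of 1,000 robots would need a 16 Gbit/s uplink under these assumptions, which is why the platform, not the robots, is the hard part.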
Welcome again! Today, we're having a little chat about "Cloud Computing" and its relation to both AI and Ubiquitous Computing - a very interesting topic to me!
Cloud computing simply means that the programs you run and the data you store are somewhere on a server around the world. You won't bother storing information on your personal computer, or even use it to run a complicated program that requires sophisticated hardware. Your personal computer will do barely anything but upload the information to be processed or download the information you need to access. All the programs you use will be web-based, via the Internet.
Cloud computing is considered the paradigm shift following the shift from mainframe to client–server in the early 1980s.
If you look around you, you will figure out that cloud computing is taking over. Many desktop applications are turning into web applications, and current web applications are getting more powerful. What really makes good use of cloud computing nowadays are mobile phones: since they have relatively small processing power, they benefit a lot from processing on the cloud instead.
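A phone's offload decision can be sketched with a classic cost model: offload when transfer time plus remote compute time beats local compute time. All the numbers and the `should_offload` helper below are illustrative assumptions, not measurements of any real device.

```python
def should_offload(cycles, local_mips, cloud_mips,
                   payload_mb, bandwidth_mbps, rtt_s=0.05):
    """Return True when running the job in the cloud is faster.
    cycles: instructions the job needs; *_mips: millions of
    instructions per second on each side; payload_mb: data to ship."""
    t_local = cycles / (local_mips * 1e6)
    t_remote = (rtt_s                                  # network round trip
                + payload_mb * 8 / bandwidth_mbps      # upload time
                + cycles / (cloud_mips * 1e6))         # cloud compute time
    return t_remote < t_local
```

A heavy job (5 billion instructions) is worth shipping from a slow phone to a 20x faster cloud, while a tiny job finishes locally before the network round trip would even complete.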
The Effect of Cloud Computing on Computer Hardware Industry
I think that cloud computing will polarize the computer hardware industry into two distinct poles: one pole is the giant servers that contain all the data and programs and carry out all the processing of the clouds; the other pole is the simple computer terminals with relatively minimal storage and processing power, which use the clouds as their main storage and computation resource. This means that the hardware industry will not care about advancing personal computers' hardware like it did before (as everything is done in the cloud).
AI as cloud-based services
Google has launched the cloud-based service Google Prediction API, which provides a simple way for developers to create software that learns how to handle incoming data. For example, the Google-hosted algorithms could be trained to sort e-mails into "complaints" and "praise" categories using a dataset that provides many examples of both kinds. Future e-mails could then be screened by software using that API and handled accordingly.
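To make the complaints/praise example concrete without guessing at the Prediction API's own interface, here is a self-contained stand-in: a tiny multinomial naive Bayes classifier trained on a few invented e-mails. The class name, training sentences, and labels are all hypothetical.

```python
import math
from collections import Counter, defaultdict

class TinyTextClassifier:
    """Multinomial naive Bayes with add-one smoothing; a pure-Python
    stand-in for a hosted 'train on examples, then predict' service."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-label word tallies
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        def score(label):
            total = sum(self.word_counts[label].values())
            s = math.log(self.label_counts[label])
            for w in text.lower().split():
                s += math.log((self.word_counts[label][w] + 1)
                              / (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=score)

# Invented training set: many examples of both kinds.
emails = ["the product broke and support was terrible",
          "terrible service never again",
          "great product wonderful support",
          "wonderful experience great team"]
labels = ["complaint", "complaint", "praise", "praise"]
clf = TinyTextClassifier().fit(emails, labels)
```

Once trained, `clf.predict("support was terrible")` screens a new e-mail into the "complaint" bucket, mirroring the workflow the article describes.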
On the other hand, AI Solver Studios has said it will be rolling out cloud computing services to allow instant access alongside its desktop application. AI Solver Studio is a unique pattern-recognition application that finds optimal solutions to classification problems using several powerful and proven artificial intelligence techniques, including neural networks, genetic programming, and genetic algorithms.
How can Cloud Computing improve AI?
Since cloud computing emphasizes that all the data, as well as the running programs, are stored somewhere in a cloud, a large amount of data becomes available for analysis and use by AI programs, which can perform data mining or other AI-related techniques to deduce useful information.
For example, consider the WordPress.com application, on which you have your own capacity to store the posts and multimedia you need. If the data and the behavior of users - such as you - weren't all stored in WordPress.com's cloud, not enough data would be available for AI purposes. Thus I consider cloud computing to enhance the performance of AI by providing a lot of data for AI techniques to use.
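The WordPress.com point - that pooled behavior data enables learning no single user's data could support - can be sketched as a tiny co-occurrence recommender. The `recommend` function and the sample reading histories below are invented for illustration.

```python
from collections import Counter

def recommend(histories, liked_post):
    """Rank posts by how often they co-occur with `liked_post` across
    every user's history; this only works because all histories sit
    in one place (the cloud), not scattered on personal machines."""
    co = Counter()
    for history in histories:
        posts = set(history)
        if liked_post in posts:
            co.update(posts - {liked_post})
    return [post for post, _ in co.most_common()]

# Pooled reading histories from four hypothetical users.
histories = [["intro", "tips", "news"],
             ["intro", "tips"],
             ["intro", "tips", "faq"],
             ["news", "faq"]]
```

Here `recommend(histories, "intro")` puts "tips" first, because three of the four users who read "intro" also read it; a single user's history could never reveal that pattern.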
Cloud Computing and Ubiquitous Computing
Cloud computing is essential for ubiquitous computing (see our previous post to learn more about it) to flourish. Most ubiquitous computers will have relatively limited hardware resources (due to their ubiquitous nature), so they will really benefit from the resources of a cloud on the Internet. There's no doubt that merging the two trends (ubiquitous and cloud computing) and supporting them with AI will result in tremendous technological advances. I think I will be talking about them more in the future!
Artificial intelligence 'invents' cat
Google researchers have claimed a breakthrough in technology that is able to "learn" like a human brain, by building a computer able to recognise a picture of a cat.
A human face, as 'invented' by Google's neural network
The computer is based on a “neural
network” of 16,000 processing cores with more than a billion interconnections, each very roughly simulating a connection
in a human brain.
A team from Google’s cutting-edge research lab, Google X, and Stanford University,
fed the system 10 million thumbnail images taken from YouTube as “training” and then tested whether it was able
to recognise 20,000 objects in new images.
It performed more than twice as accurately as any previous neural network, The New York Times reports. Among the objects the system learned to recognise was a cat, one of the most regular stars of viral clips uploaded by YouTube members.
“We never told it during the training,
‘This is a cat,’” said Google fellow Dr Jeff Dean. “It basically invented the concept of a cat.”
In the test, the neural network achieved 15.8 per cent accuracy. As well as cats' faces, it learned the "concepts" of human faces and bodies, compiling a ghostly image of their general features.
The research differed
from how most artificially intelligent systems are trained, in that it was given no help by human supervisors labelling features.
"The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data," said Andrew Ng of Stanford University.
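The flavor of learning Ng describes - categories emerging from unlabeled data - can be illustrated on a much smaller scale with k-means clustering, a deliberately simple stand-in for Google's billion-connection network. The 2-D points and "blob" locations below are made up.

```python
def kmeans(points, k, iters=10):
    """Plain k-means: the algorithm is never told what the groups are;
    as with the cat detector, the categories emerge from the data."""
    # Deterministic initial centers for this sketch: points spread
    # across the input list (real k-means usually randomizes this).
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                        + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [(sum(x for x, _ in cl) / len(cl),
                    sum(y for _, y in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Two unlabeled blobs: think "cats" near (0, 0), "faces" near (10, 10).
points = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0),
          (9.8, 10.1), (10.2, 9.9), (10.0, 10.3)]
centers = kmeans(points, 2)
```

No point was ever labeled, yet the two centers settle onto the two blobs; the "concepts" were invented from the data alone.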
Neural networks have myriad potential applications, including in speech recognition software and image search, which
Google is working to improve.
The firm's secretive Google X Lab works on cutting-edge technologies such as self-driving cars and augmented reality.

Computer Software That Writes Itself - Yeah, Dude... It's #$>*%# Here Now! Stupid Biped!
Software is a messy business. Last March the U.S. Federal Bureau of Investigation publicly abandoned a $170 million
software overhaul because of unforeseen technical problems. Even when big projects go well, they often take so long to complete
that the software is out of date by the time it's rolled out. Corporations often don't bother to upgrade obsolete software
for fear that they're opening a can of worms. As software gets more elaborate and complex, the problem only gets worse.
The situation has triggered interest in using
computer programs to generate other programs automatically.
The benefits of automatic software are compelling.
Companies would need fewer programmers and could ratchet up productivity. Humans writing computer code are also prone to errors.
"If a programmer can sit down, specify what you want and push a button, you end up much more productive," says Doug
Smith, a researcher at the Kestrel Institute, a nonprofit R&D center in Palo Alto, California. "It's the next stage
in the evolution of computer programming."
Smith and his colleagues at Kestrel have developed a program that translates a description of a problem into guidelines
a computer can understand. They've used this tool to develop software for scheduling cargo deployment for the U.S. military.
The program runs faster than comparable ones developed manually and speeds up the programming process. A software
designer can also return later with additional requirements and quickly crank out a new version. Some testing is still needed,
however, to prove the program's reliability.
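The Kestrel idea - describe the problem, push a button, get a program - can be sketched in miniature with a generator that turns a declarative record spec into executable validation code. The `generate_validator` function and the cargo record spec below are hypothetical toys, not Kestrel's actual tool.

```python
def generate_validator(spec):
    """Emit Python source for a record validator from a declarative
    spec (field name -> required type), then compile and return it."""
    checks = "\n".join(
        f"    if not isinstance(record.get({name!r}), {typ.__name__}): return False"
        for name, typ in spec.items()
    )
    src = f"def validate(record):\n{checks}\n    return True\n"
    namespace = {}
    exec(src, namespace)  # compile the generated source into a function
    return namespace["validate"]

# The 'specification': a description of a cargo record, not code.
cargo_spec = {"cargo_id": str, "weight_kg": float, "priority": int}
validate = generate_validator(cargo_spec)
```

Adding a new requirement is just one more entry in `cargo_spec`; rerunning the generator "cranks out a new version," which is the productivity win Smith describes.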
One automatic programming tool has already made it into the financial marketplace. SciComp, based in Austin, Texas,
has developed a product that helps investment banks design programs to price financial derivatives. It takes complex mathematical
models and translates them into something a computer can solve, allowing banks to flexibly change pricing models as they introduce
new products and guidelines.
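As an illustration of "taking a complex mathematical model and translating it into something a computer can solve," here is the closed-form Black-Scholes price of a European call, the textbook starting point for derivative pricing. This is a generic formula sketch, not SciComp's product.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call option.
    spot/strike: prices; rate: risk-free rate; vol: volatility;
    t: years to expiry."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)
```

Swapping in a new pricing model means changing the formula, not the surrounding plumbing, which is the flexibility the article attributes to such tools.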
A bigger challenge is to enable laypeople to write complex software without putting rigid restrictions on the kind of software that can be produced. Utah-based Tenfold has gotten around this problem in part by concentrating on corporate software, which usually performs relatively simple tasks.
Tenfold's automatic programming product generalizes the software requirements of vastly different corporations, ranging from insurance companies calculating policies to banks managing customer accounts, and provides for some tailoring to specific needs. The tool can generate corporate software in three weeks, compared with more than a year when done by hand. Gordon Novak, a computer scientist at the University of Texas at Austin, is working on a program in which problems can be depicted as connected geometric shapes. Researchers at NASA hope to be able to generate programs on the fly during emergencies.
If a space shuttle has to abort a launch, an automatically generated program
could figure out whether it can make it into orbit or find a landing site. By using software to handle such a situation, "you
eliminate an error-prone activity that humans are engaged in," says Dan Cooke, chairman of Texas Tech University's computer-science
department, who's developing the software for NASA.
Still, automated code doesn't yet compare in quality to what's generated by hand, and scientists still have a lot to learn about how to translate human need into machine code. Programmers can rest easy knowing that their jobs are safe - at least for now.