FREE MOBILE CLOUD
COMPUTING CONCEPTS - TRAINING_MODULES_WITH_TONS_OF_VIDEOS
Post by Dr Edwando Urenda, III with Cloud Staff of LA
You already know what augmented reality is. You just might not know it's called that, and when you've seen it at
its best, you probably haven’t even noticed it at all.
Put at its simplest, augmented reality, or AR to its
friends, is the art of superimposing computer-generated
content over a live view of the world. It is quite literally the practice of enhancing what’s already around you.
The most often used example is the one the world is most familiar with, and that’s television sports
analysis. The reality is the footage of the game of football, rugby, cricket or what have you, and the augmentations are the
arrows of the players' movements and the zonal areas marked out that don’t exist on the actual playing surface.
For example, the first down line in American football is something that doesn't exist on the real pitch but that the viewer
can see nonetheless on the picture beamed over the airwaves and to television screens, thanks to the addition of a graphical overlay.
One definition of AR, as laid down by Professor Ronald Azuma in his 1997 paper A Survey of Augmented Reality, is
that it combines the real and the virtual, it’s interactive in real time and it must be registered in 3D.
Using the example of the first down line, you’ve got the combination of the computer-generated line - the virtual
- on top of the real American football footage; it’s present in the real-time motion of the game - whether recorded
or not - and the graphical line obeys all physical rules of depth as if it existed in the real world. In other
words, it appears underneath the players' feet and behind their legs when they cross it, rather than spatially out of sync.
It’s as if it really is painted on the pitch.
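That depth registration can be sketched in a few lines of code: draw an overlay pixel only where nothing real sits in front of it. Here is a minimal NumPy illustration - the scene, depths and colour values are invented for the example, not taken from any broadcast system:

```python
import numpy as np

# A toy 2x2 "scene": per-pixel colour and per-pixel depth in metres from the camera.
scene = np.array([[10.0, 10.0], [10.0, 10.0]])
scene_depth = np.array([[5.0, 5.0], [1.0, 5.0]])  # a player's leg at depth 1.0

# A virtual first-down line "painted" at the pitch depth of 4.0 metres.
line_colour = 99.0
line_depth = 4.0

# The line is visible only where the real scene is *behind* it;
# the leg (depth 1.0) is in front, so that pixel keeps its real colour.
visible = scene_depth > line_depth
composited = np.where(visible, line_colour, scene)
```

The occluded pixel stays the scene colour while the rest take the line colour, which is exactly why the line appears to pass behind the players' legs.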
The first down line, of course, represents a very mild level
of AR. It’s a very small, simple, largely static piece of augmentation and makes up a tiny percentage of the total
picture. It’s much more reality than virtuality. Nevertheless, according to a second definition by Paul Milgram
and Fumio Kishino, it still exists on a continuum of augmentation, which they describe as a line running between the real
world and a totally virtual
environment. At one end would be what you see through your eyes with no device in the way, and at the other a completely
computer-generated world, a virtual reality such as Second Life.
The example of the first down line lies a long way to the left of this scale - just millimetres from the end really
- but it’s equally possible to have a situation of something you might call augmented virtuality at the corresponding
point on the right.
By today’s standards, anything that fits anywhere on this continuum that isn’t
right at one of the extremes would be classed as AR.
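Milgram and Kishino’s continuum can be pictured as a single number between 0 (unmediated reality) and 1 (a fully virtual environment). The positions below are illustrative guesses for the sake of the sketch, not measurements from their paper:

```python
# Position on the reality-virtuality continuum:
# 0.0 = naked-eye reality, 1.0 = fully virtual environment.
# The numbers are illustrative only.
continuum = {
    "naked eye": 0.0,
    "first down line": 0.05,      # a sliver of augmentation over mostly reality
    "augmented virtuality": 0.95, # mostly virtual, a sliver of reality
    "Second Life": 1.0,
}

def is_ar(position):
    """Anything strictly between the two extremes counts as AR."""
    return 0.0 < position < 1.0
```

By this test the first down line qualifies as AR, while the naked eye and Second Life, sitting exactly at the extremes, do not.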
Now that we’re fully versed on what it is we’re
talking about, let’s get practical with a few examples of what AR can do for us and how it works. The key part of AR
is that you need to place a layer of virtual information over your view of the real world and, in order to do that, there
must be a device in between to display that information upon.
There are three main ways of doing that, and they all relate to the position these devices occupy.
The first instance is where that display is right up against the eyes of the beholder.
These are often referred to as Head-Mounted
Displays (HMDs) and take the form of a visor of some sort or a pair of connected glasses, such as those manufactured for
consumers by Vuzix. The holy grail of HMDs is a contact-lens
solution, and indeed there’s plenty of research and development here that will be the subject of other articles
in AR Week on Clear-Cloud. HMDs are generally a good solution: they leave the user’s hands free and mean that the
entire visual field can be overlaid with augmentation wherever the user turns.
One step away
from that are the handheld devices, most notably these days smartphones, though they will doubtless include camera-toting
tablet computers after the flurry seen at CES and MWC 2011. Either way, there’s an
advantage here in that these things already exist in quite powerful and convenient forms. The issues, though, are that the
user is limited to just a frame, that their hands are tied up and, possibly most problematic of all, that the more AR is
used like this, the greater the chance that people will end up accidentally assaulting each other as they spin around with
their arms stretched out in front of them, or simply getting their expensive phones whipped from their grip.
At the other end of the business, and closest to the real world itself, is the method of pasting that computer-generated overlay
directly on top of the real environment instead. It’s usually done with digital projectors or other devices known in
this group as Spatial Displays. The advantages are that the user is required to hold or wear no computer equipment whatsoever
and, as a second key difference, that everyone else can see the AR as well. The downside is that it only works in
a very specific environment, but it’s perfect for collaborations like building projects and might even be the future
of construction sites. Who needs plans and blueprints when everyone can see where each timber is supposed to go?
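The three display positions described so far trade off against each other, and the trade-offs can be summarised as a tiny data structure. This is just a sketch of the taxonomy above, not an established classification scheme:

```python
from dataclasses import dataclass

@dataclass
class ARDisplay:
    name: str
    position: str       # where the display sits relative to the user
    hands_free: bool    # can the user's hands do something else?
    shared_view: bool   # can bystanders see the augmentation too?

displays = [
    ARDisplay("head-mounted display (HMD)", "against the eyes", True, False),
    ARDisplay("handheld device", "held at arm's length", False, False),
    ARDisplay("spatial display (projector)", "on the environment itself", True, True),
]

# Spatial displays are the only class whose augmentation everyone can see at once.
shared = [d.name for d in displays if d.shared_view]
```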
There is another, slightly different group of devices out there which can be employed in activity-specific environments
and, in fact, these are what’s used for some of the most developed and mature examples of AR around at the moment.
These are the Heads-Up Displays, or HUDs: powered, connected, transparent view screens with computer-generated
information displayed over the environment.
Essentially, we’re talking about large, fixed, transparent computer screens sat not too far out in
front of the user. The military have been using them for years - the screens of fighter jets are made just like this - and indeed
it’s where the name HUD comes from.
It’s a heads-up display because the pilots are able to keep their
heads up and looking at the action in front of them rather than having to constantly reference controls and meters on their
cockpit panels. They can monitor the speed of other aircraft, compass headings, vectors, wind speed and anything else you
could possibly want to know - all by just looking straight ahead. In the next few years, we’ll begin to see consumer
car windscreens as HUDs but, again, there’ll be more on that coming up in AR week.
So, by now you should
be thinking that you’ve pretty much got a handle on what this AR thing is all about, and there’s even an excellent
chance that you knew all along anyway. Well done you. But just before we go, spare a little thought for this teaser - does AR
have to be all about your eyes?
What about putting a layer of information between your other four senses and the
rest of the world? For touch, you could have information sent back to you via haptic feedback; for taste, there could be a device
mounted on your tongue - as uncomfortable as that might sound; having headphones in your ears is simple enough; and there’s
no reason why there couldn’t be similar units for one’s nostrils.
So long as the user still has contact
and appreciation of the natural world while plugged up with sensors, then there is still the mixing of the real and virtual
and that is what AR is all about.
A car’s increasingly rapid beeps as its bumper gets closer and closer
to a static object could be considered AR, the clicks of a Geiger counter are a form of AR and, doubtless, someone could invent
a small nasal-lining film for hay fever sufferers that emits a strong smell when pollen passes across it.
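The parking-sensor example is easy to make concrete: the information layer is just a mapping from distance to beep rate, with beeps arriving faster as the obstacle nears. The thresholds and rates below are invented for illustration and don’t come from any real sensor:

```python
def beep_interval(distance_m):
    """Seconds between beeps for a toy parking sensor: closer = faster.
    All thresholds and rates are illustrative, not from a real product."""
    if distance_m > 1.5:
        return None        # out of range: stay silent
    if distance_m <= 0.3:
        return 0.0         # effectively a continuous tone
    # Linear ramp: 1.0 s between beeps at 1.5 m, down to 0.1 s at 0.3 m.
    return 0.1 + (distance_m - 0.3) * (0.9 / 1.2)
```

The audible layer augments what the driver can’t see, which is the same mixing of real and virtual that defines visual AR.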
All the same, for the time being, most of the development and much of the interest in AR lies with the visual mode, largely because
it’s an excellent medium for getting across richer information in any one moment, and it’s here where much
of AR Week will be based. Tune in as we take a closer look at some of the more exciting applications the future holds.
JULIAN FRANKS WITH NYC IT GROUP - ADVANCED, LLC
LOOKING DEEPER INTO AUGMENTED REALITY...AND TRENDS....
Augmented Reality and a New Age of Interface
The Augmented Reality Event 2011 just wrapped up in Santa Clara last month
and I am excited by what was going on there. Flat pieces of paper that come alive as interactive, virtual product displays;
books that explode with the depth of a pop-up book combined with a movie; art that exists all around us, yet unseen to the
naked eye. This augmented world is brimming with ideas that we are just now getting the commonplace technology to effectively
realize. While the industry is maturing beyond the initial wave of exploratory applications, it still has lots
of potential. We at projekt202 were inspired by all of this and sat down for some concept studies of unique Augmented
Reality (AR) products that both solve a real need and utilize existing technology.
AR in Construction
Architects and engineers are creating 3D BIM computer models of their projects, but these models are only being utilized at
a fraction of their potential. These parametric models are rich with information; however, they are mainly utilized
in the design process and not during construction.
For construction, the digital models are ossified
into sets of 2D drawings called construction documents. This invariably results in wasted time and energy when architects
and contractors meet about a three-dimensional detail that is difficult to understand from the 2D construction documents.
BIM model example
What is missing is a way to get that rich 3D information into the field without printing out large and potentially outdated paper
documents or gathering around a computer and monitor. The job site is a harsh environment and it is hard enough to find
a place to set the construction documents, let alone a desktop or laptop computer to view a 3D model, so the value of the
model goes unrealized.
The solution is to utilize the latest in Augmented Reality on mobile devices to display
the architect’s virtual model over a contractor’s live phone camera feed. The mobile phone already plays
a prominent role for contractors, as it is usually the only method for communication on-site.
With this ubiquity, the contractor would be empowered with the richness of the 3D model and be able to increase efficiency in the field.
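The core of such an overlay is projecting the model’s 3D vertices into the phone camera’s image using the camera pose. A minimal sketch using a pinhole camera model - the function name, pose and intrinsics are assumptions for the example, not part of any real BIM or AR toolkit:

```python
import numpy as np

def project_points(points_world, camera_pose, focal_px, centre_px):
    """Project 3D model vertices into a phone camera image with a pinhole model.
    camera_pose is a 4x4 world-to-camera transform; all values are invented."""
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])   # homogeneous coordinates
    cam = (camera_pose @ homo.T).T[:, :3]               # points in camera space
    u = focal_px * cam[:, 0] / cam[:, 2] + centre_px[0]
    v = focal_px * cam[:, 1] / cam[:, 2] + centre_px[1]
    return np.stack([u, v], axis=1)

# A single beam corner 2 m straight ahead of an identity-pose camera
pts = np.array([[0.0, 0.0, 2.0]])
uv = project_points(pts, np.eye(4), focal_px=800.0, centre_px=(320.0, 240.0))
```

A point on the optical axis lands at the image centre; in a real app the pose would come from the phone’s sensors or from visual tracking rather than being supplied by hand.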
AR for Networking
What if you could know about your next
important business contact before you even meet them?
The social connections we make have always been important
to being part of the business community.
Networking and conference events provide great opportunities to
expand those connections, but often the events are chaotic and intimidating. Because there is limited time to network
around an event, you want to make sure you're spending your time efficiently by meeting with the people that are most important
to your business.
With Augmented Reality on mobile devices you could find out who you need to meet –
without having to rely on someone else to make that introduction. Utilizing facial recognition technology similar to
what Facebook has today, the attendees of a conference could be scanned with your phone’s camera and relevant CV data displayed
directly on your screen. You would be able to sort through the attendees’ information and find the ones that matter
to you. You would even be able to learn a little about them beforehand, allowing for a natural conversation starter.
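Under the hood, face recognition of this kind typically compares an embedding of the scanned face against a database of known attendees. A toy sketch of that matching step - the names, CV strings and three-dimensional embeddings are all made up (real systems use embeddings with hundreds of dimensions produced by a trained model):

```python
import numpy as np

def best_match(query, attendees):
    """Return the attendee whose face embedding is closest to the query
    by cosine similarity. All data here is fabricated for the sketch."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(attendees, key=lambda person: cos(query, person["embedding"]))

attendees = [
    {"name": "A. Example", "cv": "VP, Widgets Inc.",
     "embedding": np.array([1.0, 0.0, 0.0])},
    {"name": "B. Sample", "cv": "CTO, Gadgets Ltd.",
     "embedding": np.array([0.0, 1.0, 0.0])},
]

# A scanned face whose embedding sits near A. Example's
match = best_match(np.array([0.9, 0.1, 0.0]), attendees)
```

The app would then render `match["cv"]` over the live camera view next to the recognised person.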
So how will we interface with these new products? The mobile, touch-device platforms seem the most promising. But
it may be a hybrid of Kinect-like spatial sensors combined with touch, or even onscreen, composited interactions between
virtual and real objects. Whichever path, we look forward to the next chance to explore these ideas in greater detail!