Definition of Network Virtualization
Network virtualization is a method of combining the available resources
in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which
can be assigned (or reassigned) to a particular server or device in real time.
Each channel is independently secured. Every subscriber has shared
access to all the resources on the network from a single computer.
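To make the channel model concrete, here is a minimal illustrative sketch in Python (the class and server names are invented for the example, not part of any product): the available bandwidth is carved into independent channels, and each channel can be assigned or reassigned to a server or device on the fly.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Channel:
    channel_id: int
    bandwidth_mbps: int
    assigned_to: Optional[str] = None   # server or device currently using the channel; None means free

@dataclass
class VirtualNetwork:
    channels: List[Channel] = field(default_factory=list)

    def assign(self, channel_id: int, server: str) -> None:
        # Assign (or reassign) a channel to a server or device in real time.
        for ch in self.channels:
            if ch.channel_id == channel_id:
                ch.assigned_to = server
                return
        raise KeyError("no such channel: %d" % channel_id)

# Example: a 1 Gbps link carved into ten independent 100 Mbps channels.
net = VirtualNetwork([Channel(i, 100) for i in range(10)])
net.assign(0, "web-server-1")
net.assign(1, "storage-device-1")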
Network management can be a tedious and time-consuming business
for a human administrator.
Network virtualization is intended to improve productivity, efficiency, and job satisfaction of the administrator
by performing many of these tasks automatically, thereby disguising the true complexity of the network.
Files, images, programs, and folders can
be centrally managed from a single physical site. Storage media such as hard drives and tape drives can be easily added or
reassigned. Storage space can be shared or reallocated among the servers.
Network virtualization is intended to optimize network speed, reliability,
flexibility, scalability, and security.
Network virtualization is said to be especially effective in networks that experience sudden,
large, and unforeseen surges in usage.
While system virtualization --
and to a lesser extent, desktop virtualization -- has held most of the virtualization limelight, there is also a growing trend
in network virtualization.
Many of you in the network arena probably are familiar with the terms virtual LAN or virtual private network. It's a toss-up as
to which is actually the older of the two most commonly used virtual networking technologies, but it really comes down to
whether you've been associated with remote access or traditional wired technologies.
This past fall, we were kicking around an internal discussion of what might be
happening around the industry in network virtualization. At that time, we really couldn't put our fingers on what vendors
were doing in this particular area, and there wasn't a great deal of press associated with virtualization in networks as a
whole. Sure, there was the traditional news about a new VLAN capability or a new flavor of VPN, but nothing really new and
exciting to speak of.
Since that time, we've been hearing quite a bit about new developments. There are several interesting technologies that have emerged over
the past six months to approach network virtualization issues from a variety of perspectives. With the latest system virtualization
advancements, administrators can simply and easily configure a new virtual machine (VM) environment and drop it on any available
platform that meets their needs.
As the functionality of that VM changes or demands on its resources increase, the systems administrator can easily
transfer that VM environment from one system to another. From the administrator's perspective, that's great, and the process
typically takes very little time these days compared with what it took to configure a new system and bring it up to an operational state.
Now, consider this scenario from the
network perspective. What happens when that VM environment is transferred to a different physical system and ends up with
a completely different IP address? How is traffic going to get rerouted? Since the systems administrator can schedule a VM
transfer while it's operational, what's going to happen to the live, real-time end-user sessions? It all depends on whether
the network has been updated with the latest network armament that can rise to the occasion.
Routing is of course the biggest issue with these types of
VM transfers, along with caching and load balancing.
The network plumbing now has to be VM-savvy to keep up with where the systems are residing at a moment's notice.
Routing tables have to be more flexible than ever, and caching elements have to be kept informed when there is a change so
they can correctly redirect end-user session traffic from one location to another; otherwise the sessions will become stale,
and the quality of the end-user experience will drop dramatically.
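As a rough sketch of what that VM-savvy plumbing implies, the following Python fragment (all names are hypothetical, not any vendor's API) models a registry that tracks where each VM currently lives and notifies caches and load balancers when a live migration moves a VM to a host with a different IP address:

from typing import Callable, Dict, List

class VMLocationRegistry:
    def __init__(self) -> None:
        self._locations: Dict[str, str] = {}                        # VM name -> current IP address
        self._listeners: List[Callable[[str, str, str], None]] = []

    def subscribe(self, listener: Callable[[str, str, str], None]) -> None:
        # Caching and load-balancing elements register a callback: (vm, old_ip, new_ip).
        self._listeners.append(listener)

    def migrate(self, vm: str, new_ip: str) -> None:
        # Record a live migration and fan the change out so sessions can be redirected.
        old_ip = self._locations.get(vm, "")
        self._locations[vm] = new_ip
        for notify in self._listeners:
            notify(vm, old_ip, new_ip)

# Example: a cache redirects in-flight end-user sessions when "app-vm-3" changes hosts.
registry = VMLocationRegistry()
registry.subscribe(lambda vm, old, new: print("redirect %s: %s -> %s" % (vm, old, new)))
registry.migrate("app-vm-3", "10.0.1.7")
registry.migrate("app-vm-3", "10.0.2.15")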
The other intriguing development in network virtualization is loosely referred
to as a VM tap. When we typically think of network communications, we're almost always thinking of the packet traffic flowing
on our wired, wireless or other type of infrastructure plumbing. But what about the communications among VMs or the communications
between a VM and its virtual management layer? Vendors are learning that those communications are as important as the traditional
kind, so they've devised ways to passively intercept those intra-VM communications and channel them outside the VM environment.
Imagine if those redirected intra-VM
communiqués could be forwarded over to a no-frills logging system, similar to a syslog server, to gobble up everything
thrown its way. What would you pay for that type of business and operational advantage?
Now, what if that same VM communications tap could redirect
those intra-VM communiqués over to your favorite network management system, device-monitoring tool or network alert
system and could package those redirected communications in a form those systems could easily take in? What would be the business
and operational advantages for that type of extended intrasystem capability?
Perhaps it could eliminate some of
your workload and keep you from having to work in purely reactive mode to VM crises, or perhaps it could provide a heads-up
before your phone starts ringing from irate end users whose VM sessions are flaking out as a result of the latest VM transfer.
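To picture the syslog variant of that idea, here is a minimal sketch, assuming a hypothetical tap callback supplied by the hypervisor; the intercepted intra-VM messages are simply forwarded to a no-frills syslog collector using Python's standard SysLogHandler (the collector address is illustrative):

import logging
import logging.handlers

# Forward everything the tap hands us to a plain syslog collector.
logger = logging.getLogger("vm-tap")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("localhost", 514)))

def on_intra_vm_message(src_vm: str, dst_vm: str, payload: bytes) -> None:
    # Callback the (hypothetical) VM tap would invoke for each intercepted message.
    logger.info("intra-VM %s -> %s: %d bytes", src_vm, dst_vm, len(payload))

# Example: what the tap might hand us during a VM-to-management-layer exchange.
on_intra_vm_message("app-vm-3", "mgmt-layer", b"heartbeat ok")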
Now is the time to start
checking out the latest in network virtualization technologies before your infrastructure starts letting you know you're missing
the latest technology boat....
Halls today posted a deep look at Nicira, which he calls “the most intriguing startup in Silicon Valley.” Its
aim: To decouple the network from today’s network gear — and Cisco — with virtual networks that will deliver
the future Internet.
But Cisco is not sitting this out. It too is working on its own network virtualization tools, he notes. And it has
joined Nicira and others in building a networking virtualization framework using the cloud infrastructure darling OpenStack
(which Big Blue is all in on).
With its Network Virtualization Platform,
however, Nicira is aiming for a software platform that runs independently of the underlying gear. And it has set its sights on Cisco and Juniper from day one. But will it succeed?
Some say it’s a great idea whose time has come, but that it will still find a tough audience in the enterprise.
Nicira’s technology is particularly useful to an outfit like Rackspace. Following in the footsteps of Amazon, Rackspace operates an “infrastructure
cloud,” offering instant access to virtual servers and storage. This service is used by thousands of developers and
businesses across the globe, and Nicira provides a means of restricting each customer to its own virtual network — or multiple virtual networks.
“We have hundreds of thousands of customers, and that translates into multiple hundreds of thousands of networks
or network segments that customers want to create,” says Rackspace chief technology officer John Engates. “Nicira
gives us the ability to put any customer, any end point, any location on one common virtual network.”
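For a sense of what putting each customer on its own virtual network means in practice, here is a conceptual sketch (not Nicira’s actual API; the segment-ID scheme is an assumption made for illustration) of mapping each tenant's networks to isolated overlay segments, regardless of which physical host or location an endpoint sits on:

from typing import Dict, Tuple

class TenantNetworks:
    def __init__(self) -> None:
        self._next_segment = 5000
        self._segments: Dict[Tuple[str, str], int] = {}   # (tenant, network name) -> overlay segment ID

    def create_network(self, tenant: str, name: str) -> int:
        # Allocate an isolated virtual network for a tenant.
        seg = self._next_segment
        self._next_segment += 1
        self._segments[(tenant, name)] = seg
        return seg

    def segment_for(self, tenant: str, name: str) -> int:
        # Traffic is only forwarded within the tenant's own segment.
        return self._segments[(tenant, name)]

# Example: two customers get isolated "web-tier" networks even on shared hardware.
nets = TenantNetworks()
print(nets.create_network("customer-a", "web-tier"))   # 5000
print(nets.create_network("customer-b", "web-tier"))   # 5001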
Raymie Stata, the former Yahoo chief technology
officer, agrees that Nicira changes the game if you’re running this sort of infrastructure cloud service. But he questions
how useful the company’s software will be to other web services. “If you want to have virtual private networks
for a large number of customers, that’s one of the hardest problems to solve, and Nicira is a great solution for that,”
Stata says. “But if only one tenant is using a network, even if the tenant is very large, it’s less useful. I
wouldn’t imagine it would be as useful to Facebook, for example. They’re very large, but they’re the only
tenant on their network.”
However, Nicira co-founder Martin Casado said he believes it’s only a matter of time before networking hardware
takes a back seat to software.
Have your say: Is virtual networking inevitable, and will it ever replace traditional network gear in the enterprise?
VMware teamed up
with Stanford and Berkeley on Tuesday to create an industry consortium around software defined networks, called the Open Networking Research Center. The company, famous for hypervisors that virtualize
servers, isn’t about to stand by and watch while other companies attempt to bring the same disruption to networking.
The consortium counts CableLabs, Cisco, Ericsson, Google, Hewlett-Packard,
Huawei, Intel, Juniper, NEC, NTT Docomo, Texas Instruments and VMware as its founding sponsors.
Much as server virtualization abstracts the
hardware for the software that runs on it, allowing people to put different virtual machines on top of one server, virtualizing
the network abstracts the cables and ports from the demands of the applications. But that’s not enough. To really achieve
the flexibility that webscale and cloud companies demand, the network must be both virtualized and programmable.
The current enabler
for this shift in networking is OpenFlow, a protocol developed out of Stanford and championed by a group formed last March
called the Open Networking Foundation. Many of the companies that have joined VMware at the ONRC are also members of the ONF.
OpenFlow is a means to separate the intelligence associated with moving packets around the network from the gear
that does the moving. At that point the intelligence can run on commodity servers, a factor that was seen as bad news for
Cisco, Juniper and other networking companies which may see their margins drop and profits erode.
Sequeira, CTO and VP for security and networking at VMware, says that the creation of the ONRC is designed to push the concepts of software
defined networks in general rather than the OpenFlow protocol. He said an SDN requires three elements: a controller that manages
the networking gear; the controller protocol used to control the devices (this may not be OpenFlow); and the apps on top
of the controller that direct the network.
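As a structural sketch of those three elements (all class and rule names here are illustrative, not any vendor's implementation), a controller owns the device inventory, talks to the gear through some southbound controller protocol, and exposes itself to applications that direct the network:

from abc import ABC, abstractmethod
from typing import List

class ControllerProtocol(ABC):
    # Southbound protocol used to program the devices; OpenFlow is one option, but not the only one.
    @abstractmethod
    def push_rule(self, device: str, match: str, action: str) -> None: ...

class OpenFlowLikeProtocol(ControllerProtocol):
    def push_rule(self, device: str, match: str, action: str) -> None:
        print("[southbound] %s: if %s then %s" % (device, match, action))

class Controller:
    # The element that manages the networking gear and offers an API to apps.
    def __init__(self, protocol: ControllerProtocol, devices: List[str]) -> None:
        self.protocol = protocol
        self.devices = devices

    def program(self, match: str, action: str) -> None:
        for dev in self.devices:
            self.protocol.push_rule(dev, match, action)

def firewall_app(controller: Controller) -> None:
    # An app on top of the controller that directs the network.
    controller.program(match="tcp dst port 23", action="drop")

firewall_app(Controller(OpenFlowLikeProtocol(), ["switch-1", "switch-2"]))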
And he notes that OpenFlow adds the critical element of programmability to networking, likely
residing in the controller alongside other protocols, but that creating a software defined network doesn’t require it. He
points to VMware’s own firewall and load balancing products, as well as its VXLAN and other networking software, as examples.
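For context on one of those pieces, VXLAN's job on the wire is to wrap an Ethernet frame in a UDP packet tagged with a 24-bit Virtual Network Identifier (VNI), so overlay segments can be created without reprogramming the physical switches. A tiny sketch of the 8-byte VXLAN header per RFC 7348 follows (the VNI value is just an example):

import struct

def vxlan_header(vni: int) -> bytes:
    # Build the 8-byte VXLAN header: flags (I bit set), 3 reserved bytes, 24-bit VNI, 1 reserved byte.
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08   # the 'I' flag marks the VNI field as valid
    return struct.pack("!B3xI", flags, vni << 8)

# Example: frames for overlay segment 5001 get this header inside a UDP datagram
# to port 4789, followed by the original inner Ethernet frame.
print(vxlan_header(5001).hex())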
That’s a common argument from the industry, with folks from Juniper and Cisco pointing to their existing fabrics
and products as examples of network virtualization. So the key for the ONRC and the industry moving forward seems to be
about how to make SDNs programmable, since they already exist. Sequeira says this requires OpenFlow.
“We had a complete SDN stack with no
OpenFlow and it was good enough for almost all the things that customers wanted to do,” Sequeira said. “There
is the virtualization of the switches and firewalls and none of that requires OpenFlow. However, to program it requires OpenFlow
even though a little of that can be done without it.”
And while Sequeira recognizes the importance of OpenFlow, he doesn’t
see some wholesale migration to OpenFlow-only networks. Current networking software and gear will work with OpenFlow but will
not solely use the protocol. Which, given the statements by Urs Hölzle, SVP of technical infrastructure at Google, that there is no easy way for OpenFlow to communicate with other network
protocols, has me thinking that we’re going to need more efficient ways to communicate from one network protocol to another.
Meanwhile, the biggest battle in SDNs looks like it will be for the controller, which companies such as IBM, Nicira and Big Switch have
developed. The question is: will OpenFlow be the lingua franca between rival controllers or will each strive to create its
own proprietary network — programmable, running on commodity hardware, but still a fundamentally locked ecosystem?