Towards Virtualisation 2.0

Virtualisation has been the hottest thing in the IT world for the past few years, but the technologies behind it have been around for decades.
Virtualisation, though, is evolving fast. It used to be simply about virtualising servers, but has now spread into virtualised storage and applications.

This evolution has led analysts at IDC to coin the term Virtualisation 2.0, an expression that has yet to gain widespread use and acceptance.

Virtualisation 1.0 is effectively about server consolidation, while 2.0 is about the broader uses that are quickening its adoption.

“Application mobility, fault-tolerant levels of reliability, desktop virtualisation and an increased focus on management costs rather than development costs are key components of Virtualisation 2.0,” says Derek Leitch, director of virtualisation specialist ViFx.

“Application mobility and availability are at the heart of Virtualisation 2.0. Previously, hardware maintenance of any sort included at least some planned down-time of applications, be it 10 minutes or 10 hours, whilst maintenance took place. Virtualisation allows for the virtual portability of applications between hardware resources with no break in service to the end user. This application mobility has the potential to have a massive impact on application availability and almost totally negates the need to plan any downtime,” Leitch says.

Sydney-based IDC programme manager of IT Spending, Jean Marc Annonier, agrees that virtualisation is now about business continuity and business mobility.

“We are in the middle of Virtualisation 2.0. It’s about virtualisation being manageable. There is no downtime; you can do your migration without an outage,” he explains.

Gartner, meanwhile, has predicted Virtualisation 2.0 will be among the top 10 technologies businesses implement in 2009.

A whole host of vendors are in this space: virtualisation pioneer VMware, along with Cisco, EMC and Microsoft, as well as providers such as IBM and Gen-i.

VMware ANZ managing director Paul Harapin says the company created the industry with VMotion, which allows organisations to move running virtual machines from one physical host to another without any downtime.
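Live migration of this kind typically relies on an iterative “pre-copy” of the running machine’s memory to the destination host, with only a sub-second pause at the very end. The sketch below illustrates that general technique in simplified form; it is not VMware’s actual VMotion code, and the VM model, page counts and thresholds are invented for the example.

```python
# A minimal, purely illustrative sketch of iterative "pre-copy" live migration.
# This is not VMware's VMotion implementation; the VM model, page counts and
# thresholds are invented for the example.
import random

class VM:
    def __init__(self, name, pages):
        self.name = name
        self.pages = {i: f"page-{i}" for i in range(pages)}
        self.dirty = set(self.pages)              # nothing copied yet

    def run_briefly(self):
        """Simulate the guest dirtying some memory while migration proceeds."""
        self.dirty |= {random.randrange(len(self.pages)) for _ in range(32)}

def live_migrate(vm, dest, max_rounds=10, stop_copy_threshold=64):
    # Pre-copy rounds: copy memory while the VM keeps running on the source.
    for round_no in range(max_rounds):
        to_send, vm.dirty = vm.dirty, set()
        for page in to_send:
            dest[page] = vm.pages[page]
        vm.run_briefly()                          # guest keeps changing memory
        if len(vm.dirty) <= stop_copy_threshold:
            break
    # Stop-and-copy: a brief pause to send the last dirty pages, then resume
    # on the destination host.
    for page in vm.dirty:
        dest[page] = vm.pages[page]
    vm.dirty.clear()
    return f"{vm.name} moved after {round_no + 1} pre-copy rounds"

if __name__ == "__main__":
    print(live_migrate(VM("web01", pages=4096), dest={}))
```

Each pre-copy round shrinks the set of memory pages still out of date, so the final pause is short enough that end users never notice the move.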

Harapin says such technology is now pervasive across Australia and New Zealand, with users looking to extend from simple server virtualisation and consolidation towards application virtualisation and, eventually, cloud computing, the “next tsunami in virtualisation”.

A new product, vSphere 4, which he brands a ‘transformational technology’, enables cloud computing for enterprises and allows major telcos like Telecom to offer it as a business service.

Such cloud computing, he says, means organisations won’t overprovision on hardware; they will buy only as much capacity as they need from service providers, with claimed savings of 50% on capital investment and 80% on running costs.

The ability to buy more services as and when required also boosts agility, which will mean changes to an organisation’s processes, he adds.

Indeed, Cisco Systems engineering manager Chris Lockery says Virtualisation 2.0 offers manageability of the environment, which also gives the customer flexibility.

“Once you have a more dynamic infrastructure, you can start delivering more to your users. We can deliver content from a central location. It doesn’t matter what the end device is. Software-as-a-service delivered to any device they choose,” says Lockery.

“The benefit to the business is IT can concentrate on the backend. One thing Citrix is doing is the BYO PC initiative. You can put any device on the corporate network.”

Organisations, he says, need to work out their requirements and match them to the solution that minimises risk, but Virtualisation 2.0 promises flexibility, centralisation and consolidation, giving the biggest bang for the buck.

Citrix technologies, he continues, are also vendor-independent, so you can switch from Microsoft to VMware and others without the need to reboot or rebuild. Such manageability also promises the automation of workloads and processes.

EMC marketing CTO Clive Gold sees the shift from Virtualisation 1.0 to 2.0 as a move from the efficiency of server consolidation to the efficiency and control gained by taking multiple resources, such as servers and desktops, and “mak[ing] them look like one logical thing”.

A distributed resource scheduler, from VMware, helps manage the complexity of, say, 10 servers, and such scheduling can also be extended to whole environments such as datacentres, giving service providers like Gen-i the potential to rent out computing capability.

Technologies like hypervisors, he continues, allow work to be shifted around simply and automatically.

Gold likens it to a telephone exchange: in the past calls were routed manually by operators, but now this is done automatically.
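The automated placement a distributed resource scheduler performs can be pictured as a simple greedy rebalancer: watch the load on each host and suggest moves until no single host is doing much more work than the others. The sketch below illustrates that idea only; it is not VMware DRS’s actual algorithm, and the host names, VM loads and tolerance figure are invented.

```python
# An illustrative sketch of the automated placement a distributed resource
# scheduler performs. This is not VMware DRS's actual algorithm; the hosts,
# VM loads and tolerance value are invented for the example.

def rebalance(hosts, imbalance_tolerance=0.15):
    """Suggest VM moves until host CPU loads sit within the tolerance band."""
    moves = []
    while True:
        load = {h: sum(vms.values()) for h, vms in hosts.items()}
        busiest = max(load, key=load.get)
        idlest = min(load, key=load.get)
        gap = load[busiest] - load[idlest]
        if gap <= imbalance_tolerance:
            return moves
        # Candidate: the smallest VM on the busiest host.
        vm = min(hosts[busiest], key=hosts[busiest].get)
        # Stop if moving it would simply push the imbalance the other way
        # (a single large VM cannot be split).
        if hosts[busiest][vm] >= gap:
            return moves
        hosts[idlest][vm] = hosts[busiest].pop(vm)
        moves.append((vm, busiest, idlest))

# Invented example: each VM's CPU demand as a fraction of one host.
hosts = {
    "esx01": {"erp": 0.40, "mail": 0.30, "web1": 0.20},
    "esx02": {"web2": 0.10},
    "esx03": {},
}
for vm, src, dst in rebalance(hosts):
    print(f"move {vm}: {src} -> {dst}")
```

A production scheduler would also weigh memory, placement rules and the cost of each move, but the principle of continuously nudging workloads towards an even spread is the same.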

Virtualisation 1.0 reduced capital expenditure, he says, while 2.0 is about simplifying operations and automating them.

EMC also believes Virtualisation 2.0 will change processes and procedures as organisations shift towards virtualised storage, for example. But Gold warns people may still need new servers to make it work.

At Microsoft, Windows Server marketing manager Tovia Va’aelua says in the past few years, virtualisation has moved from just servers, to desktops and applications and even to management.

Customers are looking at using datacentres and vendors are expected to offer multiple technologies.

The move to hypervisor-based virtualisation is now the standard, and platforms are expected to carry built-in virtualisation capabilities.

Microsoft’s Dynamic Systems Initiative, he says, provides self-aware and self-healing infrastructure. Coupled with management technologies, it can identify which resources are working sub-optimally and need pre-emptive action, such as moving workloads to another piece of hardware.

A key promise and differentiator of Virtualisation 2.0 from 1.0 is business continuity, by way of high availability and also disaster recovery.

“People are looking for virtualisation strategies that allow the moving of applications quickly and smoothly with little or no impact on end users,” he says.

Technologies like Hyper-V are growing in adoption, while App-V and MED-V are seeing high penetration in large enterprises.

“Anyone who is looking at virtualisation is looking for things like application portability to deliver high availability, disaster recovery and all-up business continuity. This is Virtualisation 2.0 and is the differentiator against 1.0, and is what customers want today,” he says.

Gen-i Service Line Manager for Virtualisation, John Mozessohn, confirms the market has moved from the consolidation of 1.0, towards the performance-based business agility of 2.0.

Virtualisation is now mobile, in that it is flexible enough to allow applications to be moved.

IT can be better aligned with the business, as this flexibility allows high application availability for better business continuity and disaster recovery. There is also scalability, meaning capacity can be scaled up or down quickly to suit the needs of the business.

Such virtualisation, with its need for less hardware, also promises savings in energy use and can be seen as Green IT.

However, what is most significant is the demand for agility and flexibility, leading Gen-i and its partners to look at new product offerings like storage, cloud computing and software-as-a-service. Such offerings might be announced a year from now.

Despite this, issues such as whether a bank or government department will trust someone else with their data still have to be resolved. But services to a small business, paid for on a monthly basis, are also being considered.

The real change in virtualisation, adds Mozessohn, is not just a move away from consolidation, but towards becoming a commodity: something customers need to think about as a way of adding profit to the business.

Indeed, it is the ability of virtualisation to save money that has accelerated its adoption in recent years, argues Derek Leitch, who brands it a “recession technology”, allowing users to achieve more with less.

There is a growing use of virtualised storage, a shift to desktops and applications, cost-effective disaster recovery and increased use of advanced features like automation tools.

Lower power costs are a driver in their own right, not just as Green IT.

“Customers are demanding hosted services for virtualised environments (as part of the journey to cloud computing). Virtualisation is having major impacts on outsourcing services and they can now demand and get cost-effective, managed hosted services for virtualised environments. The competitive landscape is changing and becoming very exciting,” says Leitch.

But he warns: “Systems management is still important. Virtual servers will need to be managed. VM proliferation, sprawl and life-cycles require tighter management in the virtualised, centralised world. Management includes capacity and resource planning, optimisation, service chargeback, service level management, protocol and practice management, etc.”

Revera is one of this country’s largest users of virtualisation for its hosting and storage services, having operated since the early 2000s. MD Roger Cockayne has worked with virtualisation as a discipline for more than 20 years.

“The idea existed with the mainframe, partitioning machines into virtual segments, which people could leverage independently as if they were independent machines themselves. Then network virtualisation emerged — the ability to take a physical network and hardware and split it up into virtual, logical ones.

“Then storage was virtualised, so you could connect a virtual service across a number of different storage appliances, but the application connecting has no idea, presenting capacity from anywhere. It was on these principles we built our datacentres,” he says.
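Cockayne’s description of virtualised storage, in which capacity from several physical appliances is presented to an application as a single logical device, can be sketched in a few lines. The example below is purely illustrative: the class and appliance names are invented, and it is not how Revera’s datacentres are actually built.

```python
# Purely illustrative sketch of storage virtualisation: one logical volume
# striped across several backend appliances, while the application only ever
# sees a single device. Names and sizes are invented; not Revera's design.

class Appliance:
    """A backend storage box, modelled as a simple block store."""
    def __init__(self, name, blocks):
        self.name, self.store = name, [None] * blocks

class VirtualVolume:
    """Presents the combined capacity of many appliances as one device."""
    def __init__(self, appliances):
        self.appliances = appliances
        self.capacity = sum(len(a.store) for a in appliances)

    def _locate(self, block):
        # Round-robin striping: the application never knows which box holds
        # which block, so appliances can be swapped or added underneath.
        appliance = self.appliances[block % len(self.appliances)]
        return appliance, block // len(self.appliances)

    def write(self, block, data):
        appliance, offset = self._locate(block)
        appliance.store[offset] = data

    def read(self, block):
        appliance, offset = self._locate(block)
        return appliance.store[offset]

volume = VirtualVolume([Appliance("sanA", 1000), Appliance("sanB", 1000)])
volume.write(0, b"customer record")
print(volume.capacity, volume.read(0))
```

Because the application only ever addresses the virtual volume, appliances can be added, replaced or rebalanced underneath it without the connecting system knowing.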

Local firms were often too small, and lacked the budget, to provide the hardware themselves, so Revera uses virtualisation to offer them storage they can use.

“Server virtualisation has been the final piece. In the past we had physical servers connecting to virtual infrastructure services such as network, storage, backup, data replications and some security services like firewalls.

“Now that we have virtualised our servers, we can run multiple virtual systems on a single server. But we’ve taken this to a whole new level in terms of availability. We deploy server farms on which virtual servers run, so we can move virtual instances transparently across physical servers without customers noticing a thing.

“This means we can save the customer money and give them cluster-like availability and resiliency for even the smallest machines,” he adds.

Virtualisation 2.0 technologies will allow datacentres to provide more services, Cockayne says, and allow customers to buy servers as small or as large as they need. Furthermore, easier spreading of the workload gives better and more predictable performance.

However, users of outsourced providers will have to look at their DR capability and that of their outsourced provider, who may not have sufficient redundancy or capacity in their systems.

IT managers will also need to model their behaviour to test what might happen in a disaster, should they move to a virtualised and hosted environment.

-----------------------------
By Darren Greenwood, Auckland
Source: Computerworld

© Fairfax Media Business Group
Fairfax New Zealand Limited, 2009.
