Disaster recovery driving virtualization – survey


IT personnel not aware of hidden costs – including I/O bottlenecks

By ECM Plus staff

ECM Plus +++ A new survey conducted by Coleman Parkes and commissioned by CA Technologies has found that downtime costs businesses more than 127 million man-hours per annum.

However, more than 30 percent of both VMware and Microsoft virtualisation users identified backing up VM data as a challenge, and also cited storage management, I/O bottlenecks and server availability monitoring as major virtualisation challenges.

A separate survey of 500 IT personnel, conducted by Diskeeper Corporation Europe, showed that 80 percent were aware of the major problems caused by I/O bottlenecks.

“Ensuring uptime is at the top of every IT administrator’s list. With virtualisation you can automatically move virtual machines from one host to another, and even from one data centre to another, so they’ll stay up and running, with little or no downtime, in case of a failure,” said Mandeep Birdi of Diskeeper Corporation Europe. “Effectively shared resources are of critical importance in a virtual environment, but are severely impacted by three key barriers: I/O bandwidth bottlenecks due to accelerated fragmentation on virtual platforms; virtual machines competing for shared I/O resources that are not effectively prioritised across the platform; and thirdly, virtual disks set to dynamically grow that do not resize when data is deleted – an issue known as bloating. Instead, free space is simply wasted. These problems can end up costing the company unnecessary spend on additional hardware, as well as time spent dealing with the issues.”
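
To make the “bloating” problem concrete, the sketch below (illustrative only, and not a description of Diskeeper’s own tooling) compares a dynamically growing disk image’s host-side allocation with the space the guest actually reports as used. It assumes the image is stored as a sparse file on a Linux/Unix host; the file path and the guest-usage figure passed in are hypothetical.

```python
#!/usr/bin/env python3
"""Rough "bloat" check for a dynamically growing virtual disk image.

Assumes a sparse image file on a Unix host and a guest-reported usage
figure obtained separately (e.g. from df inside the guest).
"""
import os
import sys

def bloat_report(image_path: str, guest_used_bytes: int) -> None:
    st = os.stat(image_path)
    allocated = st.st_blocks * 512   # bytes the host has actually allocated
    maximum = st.st_size             # logical (maximum) size of the image file
    unreclaimed = max(allocated - guest_used_bytes, 0)
    print(f"image:          {image_path}")
    print(f"logical size:   {maximum / 2**30:.1f} GiB")
    print(f"host allocated: {allocated / 2**30:.1f} GiB")
    print(f"guest in use:   {guest_used_bytes / 2**30:.1f} GiB")
    print(f"unreclaimed:    {unreclaimed / 2**30:.1f} GiB")

if __name__ == "__main__":
    # Hypothetical usage: python bloat_check.py /var/lib/images/vm01.img 21474836480
    bloat_report(sys.argv[1], int(sys.argv[2]))
```

A large “unreclaimed” figure indicates space once consumed by deleted guest data that has never been handed back to the host – exactly the wasted free space Birdi describes.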

According to the company, the survey also highlighted that 25 percent of respondents are not dealing with the problem of I/O bottlenecks at all, while 5 percent said they simply purchase more disks or spindles.

Birdi also believes that many IT users who are migrating, or have already migrated, to a virtual environment and have typically deployed a SAN are being told they don’t need to defragment the SAN and that there is no I/O bottleneck issue. He comments: “This is false. The I/O bottleneck issue can have a huge effect on performance, and if the customer is not aware that this is what’s occurring in the SAN environment, it can again end up costing the company a lot of money, because they think they have to purchase more hardware when in fact this might not be the case at all.”
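
By way of illustration, a crude probe like the one below can help an administrator judge whether a datastore or SAN LUN is genuinely latency-bound before buying more hardware. It is a rough sketch rather than proper SAN monitoring: it uses only the Python standard library on a Unix host, the target path is hypothetical, and the operating system’s page cache can flatter the numbers on repeat runs.

```python
#!/usr/bin/env python3
"""Crude random-read latency probe for a volume suspected of being I/O-bound."""
import os
import random
import time

def probe(path: str, reads: int = 200, block: int = 4096) -> None:
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    latencies = []
    try:
        for _ in range(reads):
            offset = random.randrange(0, max(size - block, 1))
            start = time.perf_counter()
            os.pread(fd, block, offset)          # one small random read
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
    avg_ms = sum(latencies) / len(latencies) * 1000
    worst_ms = max(latencies) * 1000
    print(f"{reads} random {block}-byte reads: avg {avg_ms:.2f} ms, worst {worst_ms:.2f} ms")

if __name__ == "__main__":
    # Hypothetical target: any large existing file on the volume under test.
    probe("/mnt/san_datastore/large_test_file.bin")
```

Consistently high or wildly variable latencies on small random reads are the classic signature of an I/O bottleneck; low, steady latencies suggest the real problem lies elsewhere.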



