[Image: Visualisation of the various routes through a portion of the Internet]
How can we expect users to trust the cloud, until it has really been put to the test? Well, it has been, and it works.
Considering the business world’s dedication to efficiency and minimizing expenditure, the personal computer revolution of the 1980s could seem pretty perverse to later generations. Why fill the office with PCs loaded with identical software rather than centralise on one mainframe with simpler, cheaper workstations on the desk?
And yet the PC survived well into the age of the Internet, when it first became possible to deliver all software as a service from a central source. The idea of “Software as a Service” was good, but pioneering attempts failed, simply because broadband access was not good enough to support the service. But with today’s widespread broadband it has become a practical proposition.
It’s now called “Cloud Computing” because the actual processing takes place at some unknown location, or in a dispersed virtual machine, across the Internet cloud. And it only becomes practical when Internet access is fast enough not to frustrate a user accustomed to the speed and responsiveness of on-board software. Similarly, the success of a virtual data centre must depend on network links fast enough to preserve the illusion of a single hardware server.
We now have networks and access technologies fast enough to meet these challenges, but many organisations are held back because they lack the confidence to engage with the Cloud.
Knowing what we do about the determination and skill of cyber-criminals, how can we secure a system as amorphous and connected as the Cloud? And, after decades of experience in which enthusiastic technology-advocates have promoted systems too complex to be reliable, why should the public now put its trust in cloud computing?
The answer would be to find some way to test these shapeless and dynamic virtual systems with the same thoroughness and accountability as testing a single static piece of hardware. That is asking a lot, but it has been achieved – according to a recent report from European test specialists Broadband Testing.
The performance challenge
Cloud computing potentially offers all the benefits of a centralised service – pay for what you actually use, professional maintenance of all software, single contact and contract for any number of applications, processing on state-of-the-art hardware – but it has to match the speed, responsiveness and quality experience of local software if the service is going to be accepted.
So how does the provider ensure that level of service will be maintained under a whole range of real world operating conditions including attempted cyber attacks? The answer must lie in exhaustive testing.
But there is a fundamental problem in testing any virtual system, in that it is not tied to specific hardware. The processing for a virtual switch or virtual server is likely to be allocated dynamically to make optimal use of available resources. Test it now, and it may pass every test, but test it again and the same virtual device may be running in a different server and there could be a different response to unexpected stress conditions.
This is what worries the customer – is it really possible to apply definitive testing to something as formless as a virtual system? Broadband Testing’s report, Secure Virtual Data Center Testing (September 2010), provides the answer.
“Can we trust the cloud? The answer now is ‘yes’” according to Steve Broadhead, founder and director, Broadband Testing. “Virtual Security works in theory but, until there was a way to test it thoroughly under realistic conditions, solution vendors have had a hard time convincing their customers. Without Spirent we could not have done this – the testing proved not only highly rigorous, but also quite simple to operate.”
Maintaining the application library
Whether the central processing runs on a physical, virtual or cloud server, it needs to hold a large amount of application software to satisfy the client base, and that software needs to be maintained with every version upgrade and bug fix as soon as they become available. It’s a complex task, and it is increasingly automated to keep pace with development. There must be a central library keeping the latest versions and patches for each application package, and some mechanism for deploying these across the servers without disrupting service delivery.
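The reconciliation step described above can be sketched very simply. This is a minimal illustration, not the provider's actual mechanism: the package names, versions and the `pending_updates` helper are all hypothetical, standing in for a real deployment system that would also stage and roll back updates without disrupting service.

```python
# Hypothetical sketch: compare each server's installed packages against a
# central library of latest versions and report what needs updating.
# All names and version numbers below are illustrative.
LIBRARY = {"crm": "4.2.1", "billing": "2.0.3", "reports": "1.9.0"}

def pending_updates(server_inventory: dict) -> dict:
    """Map of package -> (installed, latest) for every out-of-date package."""
    return {
        pkg: (ver, LIBRARY[pkg])
        for pkg, ver in server_inventory.items()
        if pkg in LIBRARY and ver != LIBRARY[pkg]
    }

server_a = {"crm": "4.2.1", "billing": "2.0.1"}
print(pending_updates(server_a))  # → {'billing': ('2.0.1', '2.0.3')}
```

A real deployment pipeline would run this comparison continuously across every physical and virtual server, then schedule the actual rollout during low-traffic windows.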
At this stage the service provider is in the hands of the application developer – the service to the end user can only be as good as the latest version on the server. We hope the application developer has done a good job and produced a reliable, bug-free product, but the service provider’s reputation hangs on that hope until the software has been thoroughly tested on the provider’s own system.
In the case of a physical server, we do not expect any problem because the application is likely to have been developed and pre-tested on a similar server. But virtualisation and cloud computing add many layers of complexity to the process. The speed of the storage network becomes a significant factor if the application makes multiple data requests per second, and that is just one of many traffic issues in a virtual server.
Faced with such complexity, predicting performance becomes increasingly difficult and the only answer is to test it thoroughly under realistic conditions.
One cannot expect clients to play the role of guinea pigs, so usage needs to be simulated on the network. It is critical to gauge the total impact of software additions, moves and changes, as well as network or data center changes. Every change must be tested to prevent mission-critical business applications from grinding to a halt.
Application testing in a virtual environment
There are two aspects to testing applications in a virtual environment. Firstly functional testing, to make sure the installed application works and delivers the service it was designed to provide, and then volume testing under load.
The first relates closely to the design of the virtual system – although more complex, the virtual server is designed to model a hardware server and any failures in the design should become apparent early on. Later functional testing of new deployments is just a wise precaution in that case.
Load testing is an altogether different matter, because it concerns the impact of unpredictable traffic conditions on a known system.
To give a crude analogy: one could clear the streets of London of all traffic, pedestrians, traffic controls and road works then invite Michael Schumacher to race from the City of London to Heathrow airport in less than 30 minutes. But put back the everyday traffic, speed restrictions, traffic lights and road works and not only will the journey take much longer, it will also become highly unpredictable – one day it might take less than an hour, another day over two hours to make the same journey.
In a virtual system, and even more so in the cloud, there can be unusual surges of traffic leading to unexpected consequences. Applications that perform faultlessly for ten or a hundred users may not work so well for a hundred thousand users – quite apart from other outside factors and attacks that can heavily impact Internet performance.
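The scaling effect described above is easy to demonstrate in miniature. The following sketch is purely illustrative – `service_request` is a hypothetical stand-in for the application under test, not a real network call – but it shows the basic shape of a volume test: fire many concurrent simulated users and summarise the response-time distribution, rather than trusting a single measurement.

```python
# Minimal load-test sketch. service_request() is a hypothetical stand-in
# for one user transaction against the application under test; a real test
# rig would issue network requests and simulate realistic user behaviour.
import concurrent.futures
import statistics
import time

def service_request(user_id: int) -> float:
    """One simulated user transaction; returns its elapsed time in seconds."""
    start = time.perf_counter()
    sum(i * i for i in range(1000))  # stand-in for real application work
    return time.perf_counter() - start

def run_load(users: int) -> dict:
    """Fire `users` concurrent transactions and summarise response times."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=min(users, 100)) as pool:
        latencies = sorted(pool.map(service_request, range(users)))
    return {
        "users": users,
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies))],  # 95th percentile
    }

# An application that looks fine at 10 users may degrade at 1,000:
for n in (10, 1000):
    print(run_load(n))
```

The point of reporting a high percentile alongside the mean is exactly the unpredictability problem in the analogy above: averages hide the slow journeys that individual users actually experience.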
So the service provider cannot offer any realistic service level agreement to the clients without testing each application under volume loading and simulated realistic traffic conditions.
The Spirent test solution
Network performance and reliability have always mattered, but virtualisation makes these factors critical. Rigorous testing is needed at every stage in deploying a virtual system. During the design and implementation phases it is needed to inform buying decisions and to ensure compliance. Then, during operation, it is equally important to monitor for performance degradation and anticipate bottlenecks, as well as to ensure that applications still work under load, as suggested above.
But large data centers and cloud computing pose particular problems because of their sheer scale. Spirent TestCenter™ is the company’s flagship test platform for testing such complex networks, and it meets the need for scalability in a rack system supporting large numbers of test cards, to scale up to 4.8 terabits in a single rack.
As a modular system, TestCenter can be adapted to any number of test scenarios. In particular, Spirent Virtual is a software module that specifically addresses the challenge of testing in a virtual environment. It was named the 2010 Best of Interop winner in the Performance Optimization category, on the strength of its innovative approach for testing the performance, availability, security and scalability of virtualized network appliances as well as cloud-based applications across public, private and hybrid cloud environments.
Spirent Virtual provides unsurpassed visibility into the entire data center infrastructure. It is designed specifically to meet the needs of a complex environment where as many as 64 virtual servers, including a virtual switch with as many virtual ports, may reside on a single physical server and switch access port. With Spirent Virtual in the TestCenter, it is not only possible to test application performance holistically under realistic loads and stress conditions, but also to determine precisely which component – virtual or physical – is impacting performance.
To create realistic test conditions, Spirent Virtual software is used in conjunction with devices designed to generate massive volumes of realistic simulated traffic. Spirent Avalanche is such a device. It is designed to replicate real-world traffic conditions by simulating error conditions and realistic user behavior, and by maintaining over one million open connections from distinct IP addresses. By challenging the infrastructure’s ability to stand up to the load and complexity of the real world, it puts application testing in a truly realistic working environment.
The latency issue
Even minute levels of latency can become an issue across a virtual server. So how does one measure such low levels of latency, where the very presence of monitoring devices produces delays that must be compensated for?
Manual compensation is time-consuming and in some circumstances impossible, whereas in the TestCenter this compensation is automatic, adjusting according to the interface technology and speed.
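The principle behind such compensation can be illustrated in a few lines. This is only a sketch of the general idea – calibrate the fixed cost of the measurement itself, then subtract it from every reading – and does not represent how the TestCenter implements it; the `calibrate` and `measure_latency` helpers are invented for illustration.

```python
# Illustrative sketch of measurement-overhead compensation: estimate the
# cost of the timing calls themselves, then subtract that from each
# measured latency. Helper names are hypothetical, not a real test API.
import time

def calibrate(samples: int = 1000) -> float:
    """Estimate the fixed per-measurement cost of the timer itself."""
    t0 = time.perf_counter()
    for _ in range(samples):
        time.perf_counter()
    return (time.perf_counter() - t0) / samples

def measure_latency(op, overhead: float) -> float:
    """Time one operation and subtract the calibrated timer overhead."""
    start = time.perf_counter()
    op()
    raw = time.perf_counter() - start
    return max(raw - overhead, 0.0)  # clamp: never report negative latency

overhead = calibrate()
latency = measure_latency(lambda: sum(range(10000)), overhead)
print(f"compensated latency: {latency * 1e6:.1f} µs")
```

At the microsecond scales relevant to a virtual switch, an uncompensated measurement can be dominated by the cost of observing it, which is why automatic per-interface calibration matters.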
The acceptability of cloud computing depends upon delivering a quality of experience as good as local processing, but without all the overheads of licensing and software version management. Quality of experience is a subtle blend of many factors such as latency, jitter and packet loss, and all of these can be precisely monitored on the TestCenter under wide-ranging traffic loads, both running pre-programmed tests automatically and allowing operator intervention via a simple user interface.
And the question of security
As well as delivering good quality of user experience, the cloud computing provider needs to satisfy the clients’ fears about security in the Cloud. The hacker that accesses a soft switch can re-route traffic at will, and so virtualisation leads to potentially severe vulnerability across the whole business – and the social infrastructure in the case of cloud computing. Again, the growth in virtualisation demands a corresponding increase in prior and routine testing.
Here the need is not only to test under unusual load conditions – because those are the times when attacks are most likely to succeed – but also to simulate a whole range of attack scenarios. The application must still work when tested with the network security devices themselves operating under attack and attempted exploitation.
Spirent’s system delivers the most comprehensive, accurate emulation of end user traffic and unexpected attack traffic, even at high load. Simply put, Spirent can model user behavior while scaling to full Internet levels. This “no compromise” approach is important, since measuring the impact on the user and the network while loading the application with real-world traffic patterns helps identify, isolate and resolve problems before the provider commits them to service agreements and puts them on-line.
Putting the test to the test
Broadband Testing set out to determine whether it is possible to secure a virtual environment, knowing that their first problem was to create a rigorous and repeatable test process.
The security system under test would be the TippingPoint IPS-based Secure Virtualization Framework (SVF), and the test bed itself would consist of both the physical and virtual versions of Spirent’s Avalanche traffic generator. These were to be combined with a typical network environment including both physical and virtual elements in order to replicate a truly representative hybrid data center environment.
Using Spirent’s pioneering cloud computing testing solutions with performance, availability, security and scalability (PASS) methodology, Broadband Testing were able to monitor and test internal and external-to-internal traffic under normal operating and extreme conditions plus a wide range of attack scenarios. All the threats in the HP TippingPoint signature base were successfully blocked, and the only ones that passed were those not yet added to the then-current database.
David Hill, Spirent’s vice president for EMEA, commented on the Broadband Testing report: “The key takeaway was that testing with Spirent stressed the capability of the security solution right to its limits. People assume that security is the final objective, when what is really needed is a precise way to quantify and tailor the level of security in a complex system. ‘Tried and tested’ means more than any amount of theoretical argument in this case.”
“The economic benefits of cloud computing are overwhelming, but so are the security concerns of network operators and their customers. This independent report breaks that deadlock, as reliable testing now makes it easy for system vendors to mitigate the risks of migrating to the cloud, while optimizing resource utilization under an exhaustive range of real-world operating and threat scenarios.”
Cloud computing offers many advantages to the user, but the provider must assure the client that the service will consistently deliver on its promises. Fail, and users will vote with their feet.
The only way to ensure success is to offer a tried and tested service. Broadband Testing has now shown that this can be done and it can be proven. Most significantly for practical purposes, they found that “the testing proved not only highly rigorous, but also quite simple to set up and run.”