
Examining how cloud deployment challenges infrastructure

This effort requires the underlying hardware to provide cost-effective, scalable solutions, while placing hooks to drive performance higher than ever before

The more performance and functionality we assign to our mobile technologies, the more complex they become. The average person today carries around processing horsepower many orders of magnitude beyond that of the large computers of 50 years ago.

While many tasks and applications can remain completely under local control, more and more service-oriented applications are migrating to the cloud. This is not a new trend. We saw client applications migrate to servers in the past to keep terminal costs lower and simplify IT maintenance and control.

Ideally, the virtualization of applications should simplify the requirements for handheld devices, the network infrastructure, and the back-end servers. In theory, the reduced amount of transferred data should make operations more efficient. But increased traffic demands have forced data centers to scale linearly by adding more homogeneous racks of identical servers and switches. Coupled with the use of direct-attached storage (DAS) topologies, latencies are additive, and past the point of diminishing returns, end-to-end performance suffers.
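To see why additive latency matters, consider a minimal back-of-the-envelope sketch in Python. The tier names and microsecond figures below are purely illustrative assumptions, not measurements; the point is simply that every hop in a scale-out path contributes to the end-to-end total.

```python
# Back-of-the-envelope model: end-to-end latency in a scale-out design is
# roughly the sum of the per-hop latencies along the request path.
# All tier names and microsecond figures below are illustrative assumptions.

HOPS_US = {
    "client-to-edge network": 500.0,   # WAN/ISP hop
    "load balancer":            50.0,
    "top-of-rack switch":        5.0,
    "application server":      200.0,
    "DAS storage access":      150.0,  # disk/SSD attached directly to the app server
}

def end_to_end_latency_us(hops: dict[str, float]) -> float:
    """Sum the per-hop latencies; queueing and retries would only add to this."""
    return sum(hops.values())

if __name__ == "__main__":
    for name, us in HOPS_US.items():
        print(f"{name:26s} {us:8.1f} us")
    print(f"{'end-to-end (one way)':26s} {end_to_end_latency_us(HOPS_US):8.1f} us")
```

Under these assumed numbers, shaving latency out of any single tier buys little unless the whole path, switches and storage included, is addressed together.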

Another factor to consider is that smartphones are now being called upon to perform at desktop-computer levels. Document creation and editing, graphics creation and rendering, audio and video capture and editing, presentation materials, and so on are being done on tiny screens by people on the go. With limited local resources and processing power, the cloud has become the extension of the handheld. It used to be that the computer was the computer. Then the network was the computer. Now, the cloud is the computer.

Key to this is the management burden as more sockets carry more streams from more sources to more destinations. Efforts to simplify deployments and reduce the IT costs of private and public cloud centers have led to tools that help automate the configuration and use of available resources. Unfortunately, these tools often addressed only subsets of the problem. Only recently have they evolved to the point where a single user interface can control and report the status of compute, memory, storage, and networking resources, allowing new systems to be deployed in minutes, not days.

One such collaborative approach is OpenStack, a global collaboration of developers and cloud operators with roots in HPC and Web 2.0. The resulting open-source platform is usable by public and private clouds serving service providers, VARs, SMBs, researchers, and global data centers.

Using Amazon’s cloud service APIs as a model for compatibility, a key goal is to let IT administrators manage computational, memory, storage, networking, and service controls through a web interface to accelerate deployment, mitigate issues, and save time and cost.
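As a rough illustration of that single point of control, the sketch below uses the Python openstacksdk to boot a compute instance through the OpenStack APIs. The cloud name, image, flavor, and network names are placeholders for whatever a given deployment defines; this is a minimal sketch, not a production deployment script.

```python
# Minimal sketch: booting a compute instance with the Python openstacksdk.
# "mycloud", the image, flavor, and network names are assumptions; substitute
# whatever your deployment defines in clouds.yaml and its service catalogs.
import openstack

# Credentials and endpoints come from a clouds.yaml entry named "mycloud".
conn = openstack.connect(cloud="mycloud")

# Look up the building blocks by name.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# One API call assembles compute, storage, and networking into a running server.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance reaches ACTIVE, then report its state.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same pattern extends to volumes, networks, and load balancers, which is what turns a multi-day provisioning exercise into a few API calls.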

[Figure: Avago_Cloud_Deployment]

Fig. 1: Everything will rest in the hands of cloud services as application processing and resources migrate from devices to the cloud. This puts pressure on infrastructure designers, who demand reliable, low-cost, and scalable storage solutions.

This effort also requires the underlying hardware to provide cost-effective solutions and scalability, while placing hooks to drive performance higher than ever before. Advanced storage and switching hardware will be the backbone of the backbone, and encapsulated expertise in SAS, RAID, and PCI Express will make the next generation of data centers deploying cloud services faster, more nimble, and easier to maintain.
