
RoundTower Blog

Data Center Agility and Disaster Recovery in a Nimble Business Environment


Trying to Adapt through Hardware

For those trying to meet the needs of today’s fast-paced business the traditional way, with physical changes to the infrastructure, there is only one thing to say: good luck!  Any IT director who has tried to keep up with the changing demands of networks, security, and evolving storage knows that making physical change after physical change is neither effective nor sustainable.  At some point the infrastructure itself has to become something more flexible, suited to the dynamic business environment we live in today.  With virtual workers, changing business partnerships, and projects that come and go, networks have to be far more flexible than they were 10 years ago.

Business Continuity

And for those IT departments that do manage to provide network and storage capability in a reasonable amount of time, the next challenge is the ability to recover quickly from an outage.  According to Infrascale, “40% rated their organization’s ability to recover their operations in the event of a disaster as fair or poor.”  And with the cost of downtime rising dramatically, the need to recover quickly is imperative.  (By some estimates, an hour of downtime costs as much as $700,000.)

Not Just Data but State of the Workload

It’s not just about backing up data.  All the recent data in the world is useless if you can’t reinstate it quickly, without disrupting the rest of your operations.  Anyone performing backups in the 90s recalls the painful days when data, OS state and connectivity couldn’t be reinstated easily, rendering the backed-up data virtually inaccessible.  Data centers need to be able to operate while recovering data, necessitating active-active data center scenarios.  And workloads need to be recoverable in their current, relevant state.

Network Virtualization

Enter the software-defined network and products like NSX from VMware.  The concept is similar to what server virtualization did for computing 10 years ago, but in this case it is the network being virtualized.  While the physical network fabric remains as the hardware infrastructure, connectivity, access controls (including firewalls), and services like virtual networks, VLANs, and VPNs are abstracted above it at the software layer.  This lets you treat the physical network as a pool of transport capacity while network and security services are attached to workloads through a policy-driven approach, eliminating bottlenecks by automating network provisioning.
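To make the policy-driven model concrete, here is a minimal Python sketch (with hypothetical names, not NSX's actual API) of the idea: connectivity and security are expressed as software policies attached to the workload itself, rather than configured on physical ports:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FirewallRule:
    """A security policy entry, expressed independently of any physical device."""
    allow_port: int
    source: str

@dataclass
class Workload:
    """A VM whose network identity is defined in software and travels with it."""
    name: str
    virtual_network: str                       # a logical segment, not a physical VLAN
    firewall_rules: list = field(default_factory=list)

def provision(name: str, network: str, rules: list) -> Workload:
    """Attach connectivity and security in software; no physical change needed."""
    return Workload(name=name, virtual_network=network, firewall_rules=list(rules))

# Provisioning a new workload is a software operation, not a cabling job.
web = provision("web-01", "app-tier-net", [FirewallRule(443, "any")])
print(web.virtual_network)  # app-tier-net
```

Because the policy lives with the workload object, provisioning another workload on the same logical segment is one function call, which is the automation the paragraph above describes.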


Rapid Recovery

In the case of disaster recovery, this enables a major shift in process: workloads can move from place to place across the network and carry all of their network and security policies with them.  Applications and data can reside, and be accessible, anywhere.  A workload can be moved from one data center to another, or even deployed into a hybrid cloud environment.  This makes possible the formerly unattainable goal, the holy grail of running active-active data centers, which means better uptime and greatly reduced recovery time.
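The shift described above can be sketched as a workload moving between sites while its policies travel with it. This is a hypothetical Python model, not real NSX calls:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    site: str            # which data center currently hosts the workload
    policies: tuple      # network and security policies attached to the workload

def migrate(workload: Workload, target_site: str) -> Workload:
    """Move a workload to another data center. Its policies are untouched
    because they are defined in software, not tied to physical hardware."""
    workload.site = target_site
    return workload

app = Workload("ledger-01", site="dc-east", policies=("allow-443", "deny-all-else"))
migrate(app, "dc-west")
assert app.policies == ("allow-443", "deny-all-else")  # policies travel with it
```

The key design point is that recovery becomes a state change in software rather than a reconfiguration of firewalls and switches at the target site.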

Mission Critical Networks:  The Financial Sector

This is a huge benefit to customers with mission-critical networks, like those in the financial sector, that need to provide services to clients while new users and customers are being provisioned and, at the same time, records are ported over from another branch.  For example, a brokerage firm needs to execute transactions during market hours but has an important change to make due to a recent merger, and wants to make it during business hours.  This would normally take weeks or months and bring associated network outages with it.  In another scenario, banking customers arrive so rapidly that the bank must provision new accounts quickly, yet it is unacceptable for the bank to shut down in order to add the needed network and storage capacity.  All of these needs can be addressed with network virtualization and the software-defined network model.

In Conclusion

In short, networks need to keep running and providing appropriate connectivity even while repairs are being made.  As an analogy, no one likes their commute to be impeded for months or even years while the street is widened for higher traffic capacity, only to have it happen again a couple of years later as the population grows.  Networks need to remain usable even as virtual networks and firewalls are provisioned and additional capacity is added.  This is the promise NSX brings: the ability to provision services, move workloads, and even move virtual networks around to meet demand and recover from a data center outage, all while continuing to conduct business on the operational workloads.  Five years ago, this technology didn’t even exist.  Now that it’s here, it acts as a differentiator, and those businesses that adopt it quickly will find their ability to respond to a changing and growing business far superior.
