During the recent Data Centers Europe conference in France, I heard several times that the way to make your infrastructure efficient is to “shut it down,” and that the way to protect your applications from disaster is to “move them.” Yep, it’s that easy: if you have a security breach, a water leak, a power failure, whatever, your answer is to just move your applications. That’s right, whole containers of diverse applications and IT gear, in a matter of minutes or less. Man, I’m thinking a cure for all forms of cancer is just around the corner. I’ll save the discussion of “shutting down” for efficiency for another blog. This one will focus on the disaster avoidance strategy of automatically moving whole IT environments over the network.
Clarification: I’m NOT saying that there aren’t great options for designing cloud ready, distributed applications that can support HA/Fail-over across regions. I am saying that most application environments in the enterprise today aren’t ready and won’t be ready for a while for various reasons, which I touch on below. I’m also NOT saying that there is no place for containers. Containers are viable options for solving many data center needs and they will continue to see adoption where appropriate.
IT Environment Portability
There are four (4) major hurdles associated with making the portability dream a reality:
- The network is less of a roadblock than it was in the past, but hurdles remain: capacity and latency limitations haven’t been magically eliminated by science.
- The rise of big data as a business-critical application.
- The diversity of enterprise IT infrastructure means that many applications will be locked to specific hardware, network, and storage. Legacy applications and infrastructure can take years to migrate off of, and the strategy of obsolescence is complicated by a wide range of variables: volume, team skills, age of solutions, business growth projections, business size, regulatory concerns, ROI, and so on. For many enterprises (90%+), these complications mean it will take years to migrate away from their legacy environments.
- Without dedicated, duplicated environments in each location you plan to move to, the orchestration, policy, and governance would be a real burden.
Setting the Scene for a company using the magical “Move IT” strategy
NewCo has four geographically (NA/APAC/EU/SA) distributed data center locations, each comprised of several data center containers. The data centers are also configured with a data center software solution that allows for all of their workloads to be shut down or moved at a moment’s notice.
Can you spell C O M P L E X I T Y?
- To make the above scenario work, you would have to provision appropriate extra capacity in each group of containers. With four sites, the work of any one set of containers gets distributed across the other three, so each surviving site must absorb a third of the evacuated load; that means at least roughly 33% headroom everywhere. Remember, this is extra hardware that must be refreshed and supported, and which, pound for pound, is more costly to own and operate (power and cost) than equivalent data center capacity.
- You would need to ensure that the majority of your applications can work and support customers successfully from any combination of data center locations, despite the added latency.
- All applications would either have to be designed as active in all locations or you would have to target each application to move to specific data centers where the appropriate hardware and people reside.
- If one of your applications is big data, keep in mind that transferring one petabyte over a 10 Gbps link takes more than nine days, even at full line rate.
- If you’re distributing workloads and data across countries, there is the real concern of data sovereignty issues.
- Rules, policy, and governance definition would be a b…..
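Two of the bullets above come down to simple arithmetic. The sketch below works through both: the N-1 failover headroom each site needs, and the best-case time to move a petabyte. The site count and link speed are the scenario’s own numbers; everything else is basic math.

```python
# Back-of-the-envelope math for the "move IT" scenario above.
# Assumptions: 4 sites each running workload W; evacuating one site
# spreads its load evenly across the remaining 3; "10G" means
# 10 gigabits per second at 100% utilization (the best case).

SITES = 4

# N-1 failover: each surviving site absorbs 1/(SITES-1) of the
# evacuated site's load, so required headroom is 1/3, i.e. ~33%.
headroom = 1 / (SITES - 1)
print(f"extra capacity needed per surviving site: {headroom:.0%}")

# Moving one petabyte over a 10 Gbps link:
petabyte_bits = 1e15 * 8          # 1 PB expressed in bits
link_bps = 10e9                   # 10 Gbps
seconds = petabyte_bits / link_bps
print(f"1 PB over 10 Gbps: {seconds / 86400:.1f} days")  # ~9.3 days
```

Note that both figures are best cases: real links rarely sustain full line rate, and real failover rarely balances load perfectly.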
Weather reports indicate the hurricane approaching will impact your site imminently. Your containers signal the threat and automatically trigger a move of your environments from that location to the other three geographically distributed facilities.
- You predict the impact of a disaster and move all your applications, suffering high overhead (and many potential outages as a result), and then the disaster passes without causing a problem. Oops.
- An event disrupts the transfer; your workloads aren’t moved, and your container or poorly designed/protected facility is in no way prepared to handle a hurricane, putting your entire environment at risk.
- As your site is being moved, a failure occurs in one of your other locations. How do the in-transit workloads adjust to the new reality that they no longer all have destinations? Does the stronger application kill the weaker one and take its resources? Or maybe the male applications say “women and children first”? That doesn’t even happen with humans.
And all of this complex work was done because?
The above data center and infrastructure utopia is meant to solve the problem of data centers costing too much. The assumption is that instead of building or using robust facilities with lots of network capacity and ecosystems, you’re better off saving money on the data center and trying to protect the business by distributing workloads when disaster strikes. The problem with the aforementioned assumption is that it’s wrong. The strategy outlined here is an attempt to solve a problem of costly data centers that no longer exists or no longer carries the same weight. In fact, while costs for the best data center capacity are moderating, the value of associated technology ecosystems and network variety are increasing.
The truth is somewhere in between
The truth is that there are still many internal and colocation facilities that are designed and managed poorly. These facilities were too costly to build, are at best only moderately efficient, and by and large don’t live up to any stated resiliency. However, some providers have shed those old assumptions and give customers the best of all worlds: appropriate cost, extreme availability, military-grade security, a broad and diverse technology ecosystem, and incredible network capacity and diversity.
What can be done?
In the above “move IT” model, the best option you have is to identify a few highly critical workloads for protection that can be moved to specific infrastructure in alternate locations. The second option is more traditional and involves an approach that facilitates replication and fast failover rather than full HA. The third option is to design or buy all new applications with the capability to be distributed across multiple regions. With distributed environments (which won’t work for every app) and a healthy data center strategy, you can position your infrastructure to be resilient at every level.
For those applications that aren’t ready
The complexity and cost of attempting to solve the “move IT” question for all your applications will, for most enterprises, be too high. The best and only real option for these legacy or non-distributed applications is to house them in a highly resilient facility, not in a box or container made for companies that can treat each box as just one more computer in a larger cluster.
The deal breaker
The data center isn’t necessarily the most expensive item, and the right data center and technology ecosystem can bring tremendous advantages to your business. A commonly overlooked problem with today’s container solutions is that they attempt to save money on the data center facility at the expense of protecting the much more expensive IT infrastructure housed within. As an example, a building like a Switch SuperNAP costs several hundred million dollars to construct, but the un-weighted cost of the IT infrastructure inside can be 10X that or more. The IT gear also changes every 2-5 years, while the data center can be leveraged for 10-15 years or more. Using a less-than-resilient design because it is theoretically less costly is like refusing to pay $10 a year in insurance to protect a $200 item that also keeps your heart beating.
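The refresh-cycle point is worth making concrete. Using the text’s own round numbers (a facility in the few-hundred-million range, IT gear at 10X that, refreshed every few years), here is a rough annualized comparison. All specific dollar figures and lifetimes below are hypothetical illustrations, not actual Switch pricing.

```python
# Rough annualized-cost sketch using the round numbers from the text.
# All figures are hypothetical illustrations chosen within the
# ranges the post mentions.

facility_cost = 300e6         # "several hundred million" to construct
facility_life_years = 15      # leveraged for 10-15 years or more

it_cost = 10 * facility_cost  # IT infrastructure "10X that or more"
it_refresh_years = 4          # gear refreshed every 2-5 years

facility_per_year = facility_cost / facility_life_years
it_per_year = it_cost / it_refresh_years

share = facility_per_year / (facility_per_year + it_per_year)
print(f"facility share of annualized spend: {share:.1%}")  # ~2.6%
```

On these assumptions the building is a small single-digit percentage of annualized spend, which is the insurance-premium point in numbers: skimping on the facility saves little while exposing the far larger IT investment.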
Keep the BS meter on high alert
There are many out there trying to sell you a solution to a problem you don’t have. When you consider the entire picture of what a data center is, where it fits, and how it’s best utilized, it becomes clear that scattering your critical systems around in some cheap widgets isn’t necessarily the best approach. A hybrid approach to applications, locations, and data centers is likely to provide the best combination of cost, performance, agility, and protection. Take advantage of all the options available to you, without allowing old assumptions or FUD to get in the way of making the best choice.