There’s been considerable uproar in the infrastructure and analyst community of late as we all gnash our teeth and wring our hands while considering the potential impacts of multiple new CPU types in our mix. Recently we’ve had announcements from HP (NYSE: HPQ) on its Moonshot server solution. We’ve also had many announcements from Intel (Nasdaq: INTC), AMD (Nasdaq: AMD), ARM (Nasdaq: ARMH), NVIDIA (Nasdaq: NVDA), and Qualcomm (Nasdaq: QCOM), among others. Each of the announcements has indicated how they can help with any or all of the following: lower your power costs, reduce the heat in your data center, right-size performance to the application, and more.
It’s a good thing
Generally speaking, we IT types like to look for homogeneity in our servers, storage, CPUs, networking devices, etc. Even with that preference, though, many in the IT world are starting to take notice of what these different processors can offer.
As someone who still thinks there are years of innovation left in the infrastructure layer, I personally welcome the notion of having as many options as possible to help me in solving specific workload requirements issues. In some cases I could even see combinations of CPU types used in one application environment as a potential architectural choice.
The following are some of the primary drivers for having multiple options. They are pretty straightforward, but worth mentioning all the same:
Application-level resiliency:
- Designing applications that can utilize a larger number of CPUs has very positive ramifications on your ability to provide a more resilient environment. However, there is no magic-bullet solution to application design as it relates to CPU. You’ll have to do the hard work of matching transactions, I/O, graphics, and networking to appropriately architected or sized CPUs.
Cost and energy:
- Energy costs are rapidly outstripping the cost of the infrastructure (Jonathan G. Koomey, Ph.D., “Assorted Datacenter & Data Storage Power Trends”)
- The cost of power, regardless of any other metric, is a big area of opportunity in any IT environment. The appropriate design elements, in combination with smart utilization of infrastructure, can have a dramatic and positive impact on TCO
- As more companies recognize the need and benefit of making sustainability and efficiency a critical part of doing business, options for more efficient delivery of IT will be in high demand
- We are continuing to build more and more data centers to house an ever-increasing number of server and storage devices. We need to consider the best options for expanding wisely, not just “build more and let the next guy figure it out”
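To make the power-cost point above concrete, here is a back-of-envelope calculation. Every figure (server count, wattage, PUE, electricity rate) is an illustrative assumption, not vendor data:

```python
# Back-of-envelope: annual electricity cost for a fleet of servers.
# All numbers below are illustrative assumptions, not measured figures.

def annual_power_cost(servers, watts_per_server, pue, dollars_per_kwh):
    """Annual electricity cost, including facility overhead (PUE)."""
    kw = servers * watts_per_server / 1000.0
    kwh_per_year = kw * pue * 24 * 365
    return kwh_per_year * dollars_per_kwh

# Hypothetical comparison: traditional vs. low-power servers.
traditional = annual_power_cost(1000, 400, 1.8, 0.10)  # assumed 400 W servers
low_power = annual_power_cost(1000, 100, 1.8, 0.10)    # assumed 100 W servers

print(f"traditional: ${traditional:,.0f}/yr")
print(f"low power:   ${low_power:,.0f}/yr")
print(f"savings:     ${traditional - low_power:,.0f}/yr")
```

Under these assumed numbers, the lower-power fleet cuts the annual power bill by roughly three-quarters; your own savings depend entirely on your actual wattages, PUE, and utility rates.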
Appropriately applied performance:
- While appropriately applied performance aligns with cost and energy use, there is also the fact that performance isn’t as simple as “bigger processor make go fast”. In some cases a larger number of smaller processors applied to the problem can yield better results.
- It’s also possible that, depending on the application you’re supporting, combinations of processor architectures might better serve your needs. You could have a combination of CPUs and GPUs, with the GPUs focused on graphics and the CPUs serving pages, etc.
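The “many smaller processors” idea above can be sketched in miniature: spread small, independent units of work across a pool of workers instead of relying on one big one. The worker function and workload here are hypothetical stand-ins; real gains depend on how parallel your workload actually is:

```python
# Minimal sketch of scale-out: many small workers handling independent tasks.
# handle_request and the request IDs are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def handle_request(req_id):
    # Stand-in for a small, independent unit of work (e.g. serving a page).
    return req_id * req_id

requests = range(100)

# Spread the requests across many workers; losing one worker (or one
# small core) costs a fraction of capacity, not the whole service.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, requests))

print(sum(results))
```

The resiliency point from earlier applies here too: a service designed this way degrades gracefully when a worker disappears, which a single large processor cannot do.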
The bottom line is that I see these options as powerful tools in the IT infrastructure tool chest. The more options or variables I’m provided, the more innovation I can apply to any problem (opportunity in disguise).
Fear of change will be a limiting factor to rapid adoption, though not as much as for other layers of IT infrastructure. However, the unfortunate truth is that we often fail to make changes because we treat “functioning” as the same thing as “working effectively”. As I describe in my “What is a ‘Working’ Data Center” blog, running doesn’t mean efficiently and sustainably delivering appropriate functionality and performance.
Legacy application architecture tie-ins & licensing issues:
There is still a wide range of applications married to the processor when it comes to function and licensing. In other words, the application won’t run on a different CPU architecture, or if it does, it can’t benefit from utilizing a larger number of them. With licensing, there is real potential for cost ramifications from having more or larger processors. Yes, that means your software vendor might charge you more for trying to provide higher resiliency or attempting to reduce your power consumption. There’s also the cost of changing or upgrading your primary but legacy business applications. In most cases no amount of efficiency improvement can overcome the cost impact of making the change, so natural migration is often the best choice.
The future of IT infrastructure is bright, but…
I don’t know how to end the joke I started with the title, so I’m not going to. Every effort I made was lame, even by my standards. All kidding aside, the options that these new classes of processors bring to the market are essential to our ability to efficiently and sustainably meet the continually growing demand for IT services. By many estimates we are likely to quadruple the number of servers in the world by 2020. The difference between building all those servers with traditional CPUs vs. some combination that includes low-power units is potentially gigawatts of power at any moment in time. We can’t stop with the processor either; we need to maintain focus on delivering applications and new services that are extremely efficient in how they use resources. Only by focusing on the entire system of IT, from the design of data centers, CPUs, servers, and applications to usage metrics, can we maintain forward progress without overwhelming our ability to deliver resources like water and power.
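The “gigawatts” claim is easy to sanity-check with rough arithmetic. The fleet size and per-server savings below are illustrative assumptions (the 25% low-power share echoes the prediction further down):

```python
# Rough arithmetic behind the "gigawatts" claim.
# Every input here is an illustrative assumption, not a measured figure.

servers_2020 = 40_000_000 * 4     # hypothetical: today's fleet, quadrupled
low_power_share = 0.25            # assumed fraction built with low-power chips
watts_saved_per_server = 250      # assumed savings vs. a traditional CPU

gigawatts_saved = servers_2020 * low_power_share * watts_saved_per_server / 1e9
print(f"~{gigawatts_saved:.0f} GW saved at any moment")
```

Even if each assumption is off by half, the result stays in the gigawatt range, which is the point: the aggregate stakes are on the scale of power plants, not power strips.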
Thanks to Hans Hoefnagels (@hans_hoef) for the idea to write this blog
Fun and related future prognostications:
- Moonshot is going to do well. Meaning, it will turn into a real shot in the arm for HP server sales by Q3 2014.
- The market for ARM, ATOM, and other low power chips in the server space is likely to reach 25% of all servers sold in 2016
- Moonshot, Scorpio, and other similar solutions will put real stress on the software based hypervisor market in 2015
- GPUs will be much more widely integrated into diverse infrastructure stacks (sorry, can’t pick a number here) in 2016
- We’re likely to see many big data, graphics, and other I/O-intensive applications designed to work with a combination of CPUs and GPUs
I know, I know, I’m attempting to call the future; it’s stupid but fun. I’m not charging you for this, so you get what you pay for.