2017 promises to be an exciting year for servers and the competitiveness of compute offerings. This year the scope of impact will extend beyond enterprise datacenters and the public cloud to the emergence of "edge computing". Edge computing is defined as compute required to deal with data at or near the point of creation. Among other things, edge devices will include the “ocean” of remote, smart sensors commonly included in internet of things (IoT) discussions.
Here is a list of a few things we’ll see concerning specific CPUs.
It should come as no surprise that Intel continues to dominate (>99%) the server market but is under enormous pressure on all fronts. Xeon and its evolution continue to be Intel's compute vanguard. Xeon Phi (and now the addition of Nervana) make up its engines for high-performance computing and machine learning. Phi has seen some success, but it isn’t clear yet how Nervana offerings will materialize.
Advanced Micro Devices (AMD) has its best shot in years at fielding an Intel competitor that just about everyone (except perhaps Intel) is eager to see. If the AMD Zen server CPU is simply good enough (meaning it shows up, works and has at least some performance value), it will take market share simply by being an x86 competitor. AMD is encouraged by early indicators. The company also has its ATI GPGPU technology, which will provide additional opportunities.
ARM Holdings will continue to dominate the mobile and embedded device space, but the fight is hard in these segments. The more likely opportunity for ARM expansion will be at the "edge" and not so much in the server space.
The death of Vulcan at the hands of Avago, the acquisition of Applied Micro Circuits (APM) and the plan to find a place for X-Gene leave the Cavium ThunderX, the yet-to-be-launched Qualcomm Centriq CPU and a few other very focused ARM initiatives still standing. After years of "This is the Year for ARM Servers", the outlook could be better, and if AMD produces a plausible Intel competitor (capable of running x86 software), it will put extreme pressure on the whole ARM server CPU initiative.
OpenPOWER, on the other hand, seems to have a lot of momentum but to date has not significantly impacted the x86 server market. 2017 may end on a different note. OpenPOWER's (IBM's) willingness to embrace NVIDIA (the darling of the machine learning segment) and embed an NVLink interface is going to play well with much of the AI and HPC communities. By the end of the year, we will have seen some interesting OpenPOWER offerings emerge based on advanced silicon process technology from a variety of sources, and 2018 may see a whole different story, especially if an embrace from Google, which has been flirting with OpenPOWER for a while now, materializes and creates a tipping point.
The real challenge to all CPUs is the way they do work. Their philosophy is built on the principle that data must come into the chip, be operated on by the chip, and then have results, or even new data, pushed back out of the chip. This whole process creates a natural bottleneck that we've wrestled with for decades. As the magnitude and scope of data increase, something has to give, and a favorite candidate is more parallelism. So far, this has favored GPGPUs or accelerators.
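To make that round trip concrete, here is a minimal GPGPU sketch in CUDA (my own illustration, not drawn from any vendor product mentioned above): data is copied onto the accelerator, operated on by many threads in parallel, and the results are copied back out, which is exactly the traffic pattern that becomes the bottleneck as data volumes grow.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one element -- the kind of parallelism that favors GPGPUs.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                    // one million elements
    const size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes), *h_b = (float *)malloc(bytes), *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);

    // Step 1: data must come into the chip.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Step 2: operate on it, in parallel, on the device.
    vectorAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    // Step 3: push the results back out -- the return leg of the bottleneck.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);            // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

The compute itself scales across thousands of threads; the copies in and out do not, which is why so much effort goes into keeping data resident on the accelerator, or into moving compute closer to where the data is created.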
At the bigger-picture business level for datacenters and the public cloud, the real question is not so much which CPU (in fact, the business folks probably couldn't care less) but the economics of private, public or hybrid solutions. It is safe to say enterprise computing will not disappear any time soon, and while there is much activity, the implementations and economics of hybrid solutions have proven to be difficult. According to Gartner, by 2020 more compute power will have been sold by IaaS and PaaS cloud providers than sold and deployed into enterprise datacenters. The fact that companies (especially smaller ones) are either being born in or moving to the cloud at a rapid pace is undeniable. However, not all are seeing the expected savings materialize from this move. 2017 will certainly see some careful thinking and maybe even some rethinking of strategy.
The explosion of data at the edge is simply going to change data processing as we know it and will create a variety of computing problems that are difficult to solve in the cloud (even though the results may end up there). However, that compute may not sit in enterprise datacenters as we know them either, and we may find it “stuck” all over the place. For more than sixty years, we have seen compute follow the data: first from the original mainframe datacenter to the desktop, then to departmental servers, into enterprise datacenters, and now, significantly, into the cloud. It is my opinion that if you plan to put just your data into the cloud, economics (the cost of network usage) will drive your compute there sooner or later. You want to consider this carefully based on your actual needs and usage; depending on your size and your ability to operate one, there might be a better overall business outcome in your own datacenter.
The major emerging source of data is at the edge, and it will drive the need for a great deal of compute there. By the way, all the CPUs mentioned here should be able to handle the edge reasonably well, so … game on again!
Disclosure:
My firm, Moor Insights & Strategy, like all research and analyst firms,
provides or has provided research, analysis, advising, and/or consulting to
many high-tech companies in the industry including Advanced Micro Devices,
Applied Micro Circuits, ARM Holdings, IBM, Intel, NVIDIA and Qualcomm. I do not
hold any equity positions with any companies cited in this column.