Eight 14nm Broadwell cores, a shared L3 cache, dual 10 Gigabit Ethernet MACs, a PCIe 3.0 root with 24 lanes and a lot more find a home in Intel's most powerful server SoC ever, the Xeon D-1540. Thanks to Supermicro's 5028D-TN4T SuperServer, we are able to compare the latest Xeon with the Atom C2000 SoC, low power Xeon E5s and the Xeon E3-1200 v3.
The demise of innovator Calxeda and the excellent performance per watt of the new Intel Avoton server were certainly not good omens for the ARM server market. However, there are still quite a few vendors that are aspiring to break into the micro server market.
AMD seems to have the best position, with by far the most building blocks and experience in the server world. Its 64-bit, 8-core ARMv8 based Opteron A1100 should see the light of day in the second half of 2014. Broadcom is also well placed and has announced that it will produce a 3 GHz, 16 nm quad-core ARMv8 server CPU. ARM SoC market leader Qualcomm has shown some interest too, but without any real product yet. Capable outsiders are Cavium with "Project Thunder" and AppliedMicro with the X-Gene family.
But unless any of the players mentioned above can grab an Intel-like share of the micro server market, the fact remains that supporting all ARM servers is currently a very time consuming undertaking. Different interrupt controllers, different implementations of FP units… at this point in time, the ARM server platform simply does not exist. It is a mix of very different hardware, each running its own heavily customized OS kernel.
So the first hurdle to clear is developing a platform standard. And that is exactly what ARM is announcing today: a platform standard for ARMv8-A based (64-bit) servers, known as the ARM ‘Server Base System Architecture’ (SBSA) specification.
The new specification is supported by a very broad range of companies, from software companies such as Canonical, Citrix, Linaro, Microsoft, Red Hat and SUSE to OEMs (Dell and HP) and the most important component vendors active in this field, such as AMD, Cavium, Applied Micro and Texas Instruments. In fact, the Opteron A1100 that was just announced adheres to the new spec.
All those partners of course offered supportive comments, but the best one came from Frank Frankovsky, president and chairman of the Open Compute Project Foundation.
"These standardization efforts will help speed adoption of ARM in the datacenter by providing consumers and software developers with the consistency and predictability they require, and by helping increase the pace of innovation in ARM technologies by eliminating gratuitous differentiation in areas like device enumeration and boot process."
The primary goal is to ensure enough standard system architecture to enable a single OS image to run on all hardware compliant with the specification. That may sound like a fairly simple thing, but in reality it's extremely important for solidifying the ARM ecosystem and making it a viable alternative in the server space.
A few examples of the standard:
- The base server system shall implement a GICv2 interrupt controller; as a result, the maximum number of CPUs in the system is 8.
- All CPUs must have the Advanced SIMD and cryptography extensions.
- The system uses the generic timers specified by ARM.
- CPUs must implement the described power state semantics.
- USB 2.0 controllers must conform to EHCI 1.1, USB 3.0 controllers to XHCI 1.0, and SATA controllers to AHCI v1.3.

We can only applaud these efforts: they will eliminate a lot of useless time investments, lower costs, and help make the ARM partners a real option in servers. With the expected launch of many ARM Cortex A57 based server SoCs this year, 2014 could be a breakthrough year for ARM servers. We look forward to doing another micro server review.
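To make the single-image goal concrete, the spirit of these requirements can be sketched as a toy compliance checklist. Every field name and value below is illustrative only, not taken from the actual SBSA document.

```python
# Toy sketch of an SBSA-style compliance check. The dictionary keys and
# the shape of the platform description are hypothetical; the point is
# that one fixed checklist can validate any vendor's hardware report.
REQUIRED = {
    "interrupt_controller": "GICv2",
    "timers": "generic",          # the generic timers specified by ARM
    "usb2": "EHCI 1.1",
    "usb3": "XHCI 1.0",
    "sata": "AHCI v1.3",
}

def is_sbsa_compliant(platform: dict) -> bool:
    """Return True only if the platform reports every required interface."""
    return all(platform.get(key) == value for key, value in REQUIRED.items())

# A hypothetical SoC that follows the spec:
soc = {"interrupt_controller": "GICv2", "timers": "generic",
       "usb2": "EHCI 1.1", "usb3": "XHCI 1.0", "sata": "AHCI v1.3"}
print(is_sbsa_compliant(soc))  # True
```

A single OS image can then target the checklist instead of each vendor's custom interrupt controller and boot path.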
We reviewed several types of server memory back in August 2012. You still have the same three choices—LRDIMMs, RDIMMs and UDIMMs—but the situation has significantly changed now. The introduction of the Ivy Bridge EP is one of those changes. The latest Intel Xeon has better support for LRDIMMs and supports higher memory speeds (up to 1866 MHz). But the biggest change is that the pricing difference between LRDIMMs and RDIMMs has shrunk a lot, so it is time for an update.
Calxeda has announced its second generation SoC, the ARM Cortex™ A15 based EnergyCore™ ECX-2000. We try to estimate how this new server SoC compares to the latest Intel Atom C2000 server SoCs.
Western Digital's Windows Storage Server based DX4000 has been very well received by SMBs. However, its desktop form factor restricted the target market. After listening to feedback from customers and resellers, WD has decided to release a rackmount form factor version, the RX4100.
The RX4100 is a 1U form factor, four-bay machine. The DX4000 came with either two or four bays populated, while the RX4100 ships with all bays populated and pre-configured in RAID-5. The two network ports are bonded in active-backup mode. The RX4100 is still based on the Intel Atom D525 (WD says the D2700 differs from the D525 only slightly in clock speed and adds an HDMI output port, not very important for the RX4100's target market). The presence of Windows Storage Server 2008 R2 and out-of-the-box readiness indicate that this unit is geared towards small businesses without full-time IT staff.
The bundled enterprise grade hard drives are currently of the WD Re variety, but WD expects to move to the WD Se after production ramps up. WD has also tied up with KeepVault to provide off-site backups for RX4100 customers. 105 MBps reads and 95 MBps writes are the claimed performance numbers. The WD Guardian services provide hardware support and parts replacement for the duration of the plan. The standard warranty is three years, and the Guardian services come in three tiers (Express / Pro / Extended Care) with different features.
The 8 TB, 12 TB and 16 TB variants are being launched with MSRPs of 99, 99 and 49 respectively.
ARM based servers hold the promise of extremely low power consumption and excellent performance per watt. It's possible to pack an incredible number of servers into a single rack—there are already implementations with as many as 1000 ARM servers in one rack (48 server nodes in a 2U chassis). And all of those nodes together consume less than 5 kW, or around 5 W per quad-core ARM node.
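The density figures above are easy to sanity-check. A quick sketch, assuming a standard 42U rack filled with these 2U, 48-node chassis:

```python
# Back-of-the-envelope check of the rack-density claims above.
# The 42U rack height is an assumption; the chassis figures are from the text.
NODES_PER_CHASSIS = 48
CHASSIS_HEIGHT_U = 2
RACK_HEIGHT_U = 42        # assumed standard full-height rack
RACK_POWER_W = 5000       # "less than 5 kW" for the whole rack

chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS
watts_per_node = RACK_POWER_W / nodes_per_rack

print(nodes_per_rack)            # 1008 nodes, in line with "as many as 1000"
print(round(watts_per_node, 1))  # 5.0 W per quad-core node
```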
But whenever a new technology is hyped, it is good to remain skeptical. The media hypes and raves about new trends because people love to read about something new, but at the end of the day, the system administrator has to keep his IT services working and convince his boss to invest in new technologies.
Hundreds of opinion pages have been and will be written about the ARM vs. x86 server war, but nothing beats a test run with real world benchmarks, and that is what we'll look at today. We have put some heavy loads on our Boston Viridis cluster system running 24 websites—among other applications—and measured throughput, response times, and power. We'll be comparing it with the lower power Xeons to see how the current ARM servers compare to the best Intel Xeon offerings. Performance per watt, performance per dollar: whatever your metric is, we have the hard numbers.
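The normalization behind those metrics is simple division; as a sketch, with purely hypothetical throughput, power and price numbers (not our measured results):

```python
# How throughput gets normalized into per-watt and per-dollar metrics.
# All numbers below are made up for illustration.
def perf_per_watt(requests_per_s: float, avg_power_w: float) -> float:
    """Throughput divided by average power draw under load."""
    return requests_per_s / avg_power_w

def perf_per_dollar(requests_per_s: float, system_price: float) -> float:
    """Throughput divided by system purchase price."""
    return requests_per_s / system_price

# Hypothetical example: a single low-power node vs. a single Xeon server.
arm_node = perf_per_watt(800, 6.0)    # 800 req/s at 6 W
xeon_box = perf_per_watt(9000, 80.0)  # 9000 req/s at 80 W
print(arm_node > xeon_box)  # True: higher perf/W despite lower raw throughput
```

The interesting question, which only measurement can answer, is whether the per-node numbers hold up once response times are constrained.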
We've been quietly experimenting with more video content on the site over the past year. I've done a few reviews over at our YouTube channel, and we also host all of our smartphone/tablet camera samples over there as well. Going into 2013 we'll be ramping up the amount of video content on the site to go along with Pipeline and the Podcast, some of the new features we've introduced over the past couple of years. In doing so, we're also going to be hosting videos locally.
When we were looking for the first content to trial our locally served video, I asked Johan de Gelas, the head of our IT/Enterprise testing at AnandTech, if he could put something together. Johan came back with a behind the scenes look at the Sizing Servers Lab in Belgium, the back-end for all of our server reviews and testing.
Johan's video is embedded below, and if this goes well, he's promised to bring us a look at ARM based servers on video in the not too distant future.
MAKER OF EXPENSIVE PRINTER INK HP announced a $6.9bn loss for its fourth quarter as revenues and operating margins fell, while an $8.8bn write-down on its acquisition of Autonomy put paid to any hopes of posting a profit.
HP’s painful 2012 continued with the firm racking up a huge loss as sales in just about every division declined while operating margins fell sharply. This resulted in the firm reporting a $6.9bn loss on revenues of $30bn, seven percent down from the same quarter last year.
Not only is HP’s $6.9bn quarterly loss shocking, but dissecting it reveals a company in trouble. After the firm posted an $8bn write-down on its EDS purchase, it has had to wipe $8.8bn off its purchase of Autonomy due to “accounting improprieties, misrepresentations and disclosure failures”.
HP effectively said that Autonomy’s board had inflated the value of the firm, a firm HP paid around $11bn to buy. HP issued a statement saying, “HP is extremely disappointed to find that some former members of Autonomy’s management team used accounting improprieties, misrepresentations and disclosure failures to inflate the underlying financial metrics of the company, prior to Autonomy’s acquisition by HP. These efforts appear to have been a willful effort to mislead investors and potential buyers, and severely impacted HP management’s ability to fairly value Autonomy at the time of the deal. We remain 100 percent committed to Autonomy and its industry-leading technology.”
Aside from HP’s write-down of Autonomy, things weren’t much better for the firm. HP reported that revenue in its PC division fell by 14 percent from the fourth quarter last year, while its printing division revenue fell by five percent. Revenue in the firm’s enterprise division, which includes server and networking equipment, fell by nine percent, while its services division revenue fell by six percent.
Only HP’s software division, which includes the once over-valued Autonomy, showed any sign of life, posting a 14 percent increase in revenue, while the firm’s financial services arm, which funds HP customers’ investments to buy HP products and services, saw a nominal one percent increase in revenue. All in all, the firm’s hardware divisions performed very badly, and sales at Autonomy could only go so far to turn the red ink to black in its software division.
HP CEO Meg Whitman avoided any mention of the steep loss and declines in revenue almost across the board by saying, “We’re starting to see progress in key areas, such as new product releases and customer wins. We’re particularly pleased that in Q4, we were able to improve our balance sheet, generating $4.1 billion in operating cash flow, and we returned $384 million to shareholders in the form of share repurchases and dividends.”
Whitman has already written off 2013, and HP’s stock price hit a 10-year low on Friday. While Whitman has been given time to clean up the mess left by former CEO Leo Apotheker, she and the board must hope that things won’t get much worse before they start to get better. µ
HIGH PERFORMANCE COMPUTING VENDOR Cray has bought server outfit Appro for $25m.
Back in April Cray sold its interconnect business to Intel, and now the firm has gone on to spend some of that cash by buying server and high performance computing (HPC) vendor Appro. Cray announced that it paid $25m for Appro, with .5m of that as working capital.
As part of Cray’s purchase, Appro CEO Daniel Kim will become head of Cray’s Cluster Solutions business, which will flog Appro kit under Cray’s brand. Cray will also absorb 90 Appro employees.
Cray CEO and president Peter Ungaro said, “Cray has always been a company with a singular focus on the high performance computing market, and with this acquisition, we have strengthened that commitment and will now be positioned to expand our portfolio of highly innovative supercomputing solutions. Appro is one of the market leaders in HPC cluster solutions, and this acquisition is another step forward as we continue to transform Cray into a company that provides world-class offerings to customers across all segments of the supercomputing market, including Big Data. I look forward to welcoming all our new Cray colleagues in this exciting moment for our company – positioning us well for accelerated growth into the future.”
While Appro might not be held in the same high regard as Cray in the HPC market, it is still the third most popular HPC vendor in the soon to be updated Top 500 list. Cray on the other hand has taken the top spot in the latest Top 500 list with the Titan cluster and recently launched its next generation XC30 cluster codenamed Cascade.
Appro’s product range includes HPC and standard servers, and it offers Cray a presence in a server market that is becoming increasingly reliant on interconnects for overall system performance, a crucial performance factor in HPC for decades. That Cray managed to snap up Appro for $25m, including working capital but no debt, seems something of a bargain.
Cray said that it expects the deal to close within a matter of days or weeks. µ
CEO OF ARM Warren East has talked up the firm’s technology as key for solving the rising costs associated with running server farms.
Speaking at the IP Expo event in London on Wednesday attended by The INQUIRER, East used his opening keynote session to discuss the firm’s aims in the server market.
He said ARM believes that improving efficiency and reducing energy consumption of servers is vital to the future technology landscape, as the number of servers in use rises to meet the demands of consumers and enterprises.
“We are seeing server volumes growing significantly. Studies suggest that for every 600 smartphones in use a server is created, and the number of smartphones in use rose by a third last year, so there’s huge amounts of electricity being used by servers,” he said.
“That means huge increases in the amount of emissions produced by the energy needed to drive this. And the costs for datacentre operators are going to increase hugely. In fact already ICT in its entirety uses about 10 percent of the electricity that we generate on this planet.”
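East's figures are easy to turn into a back-of-the-envelope estimate. The installed-base number below is a round assumption for illustration, not a figure from the talk:

```python
# Rough sketch of East's "600 smartphones per server" claim.
# The installed base is an assumed round number, purely illustrative.
SMARTPHONES_PER_SERVER = 600
installed_base = 1_200_000_000  # hypothetical global smartphone count
growth = 1 / 3                  # "rose by a third last year"

new_smartphones = installed_base * growth
new_servers = new_smartphones / SMARTPHONES_PER_SERVER
print(int(new_servers))  # 666666 additional servers implied by that growth
```

Even with a conservative starting point, the claim implies hundreds of thousands of new servers per year, which is where the electricity concern comes from.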
East said that as this demand for data on servers continues to increase, it is not feasible to imagine that companies can just build ever-bigger datacentres and buy more servers. Instead, new servers that are more efficient are required, he added.
“So in 2008 we looked at this opportunity and decided that in theory we could reduce the amount of energy consumed by servers, as a third of the energy used in servers is on the processing side, getting data in and out of the microprocessors,” he explained.
“So by reducing the power of the microprocessor itself you can save a lot of energy use in the CPU, we think we could save two-thirds of the energy consumed.”
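One way to read those two numbers together: saving two-thirds of the processing side's one-third share works out to roughly 22 percent of total server energy. As arithmetic:

```python
# One reading of East's figures, sketched as arithmetic. Whether the
# "two-thirds" applies to the CPU's share or to total energy is not
# spelled out; this assumes the CPU-share interpretation.
cpu_share = 1 / 3    # "a third of the energy used in servers is on the processing side"
cpu_saving = 2 / 3   # "we think we could save two-thirds of the energy consumed"

total_saving = cpu_share * cpu_saving
print(round(total_saving, 3))  # 0.222 -> roughly 22% of total server energy
```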