[illumos-Discuss] CPU/Chipset architecture for high performance ZFS servers
Erik Trimble
erik.trimble at oracle.com
Fri Apr 1 11:42:03 PDT 2011
On 4/1/2011 9:58 AM, Alasdair Lumsden wrote:
> On 01/04/2011 14:16, Richard Elling wrote:
> <snip>
>> You won't run out of CPU, but you can run out of I/O. For most
>> 2-socket Intel boxes, there is
>> only one IOH. The 5520 IOH only provides 36 PCI-e lanes, so you will
>> see different combinations
>> of PCI-e bridges on systems from various vendors. Obviously, fewer
>> bridges is better and that tends
>> to become the biggest architectural difference in the machines.
>> Similar analysis applies to the AMD
>> chipsets. For an interesting case of simplicity built for speed,
>> look at the Sun X4500 design.
>
> *nods*
>
> There are also the memory speed differences: the latest AMD Opteron
> 6xxx series seems able to go all the way up to 256GB of RAM at 1333MHz,
> whereas the Dell Intel Nehalem boxes appear to drop the RAM speed down
> to 800MHz beyond 96GB. I'm not sure how this would impact the maximum
> throughput of the box, though.
>
>
As Garrett and Richard have mentioned, your primary problem is going to
be I/O bandwidth, both to get data in from the disks and to shove it out
the network ports. I don't think you'll really have to worry too much
about RAM size, since I would expect you to be using SSDs for L2ARC
caching (even if you aren't using Dedup), and, until you get to huge
amounts of attached data, 96GB is really more than you need. Then
again, if you want Dedup, stuff it full of RAM, my brother!
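To put rough numbers on the I/O-bandwidth point, here's a back-of-envelope
sketch in Python. The per-device figures (100 MB/s per spinning disk, 500
MB/s per PCIe 2.0 lane per direction, ~1250 MB/s per 10GbE port) are my own
ballpark assumptions, not measurements, so plug in your own:

    # Rough I/O bandwidth budget for a ZFS box (all figures are ballpark assumptions)
    disks = 48                 # attached spindles (example count)
    disk_stream = 100.0        # sequential MB/s per disk (assumed)
    pcie2_lane = 500.0         # MB/s per PCIe 2.0 lane, per direction
    tengbe_port = 1250.0       # ~MB/s line rate per 10GbE port

    aggregate_disk = disks * disk_stream    # what the spindles can stream
    network_out = 2 * tengbe_port           # what two 10GbE ports can carry
    ioh_lanes = 36 * pcie2_lane             # raw capacity of a 36-lane IOH

    print("disks can stream      ~%6.0f MB/s" % aggregate_disk)
    print("two 10GbE ports carry ~%6.0f MB/s" % network_out)
    print("36 PCIe 2.0 lanes top out at ~%6.0f MB/s (before bridges/slots)" % ioh_lanes)

Even in this sketch the spindles can outrun two 10GbE ports, and once those
36 lanes get carved into x8 HBA and NIC slots (plus whatever hangs off
bridges), the slot layout matters more than the headline lane count.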
Personally, due to these kinds of limitations, I tend to go for a larger
number of mid-sized systems rather than a few larger ones. That is,
I'd tend to go for something like a 2-socket AMD 4000-series with the
SR5690 northbridge, which gives you about 8-12 cores, up to 12 DIMMs,
and lower-power options (my favorite is the 4176 HE, 6-core 2.4GHz @ 50W
TDP). The SR5690 chipset will get you 42 PCI-E 2.0 lanes. Look at the
Tyan website for examples of this type of motherboard
(http://www.tyan.com/product_motherboards_dp_amd_opteron_4100.aspx)
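For a sense of how those 42 lanes carve up, here's a hypothetical slot
budget. The card mix below is just an example I made up, not a
recommendation for any particular board:

    # Hypothetical lane budget for an SR5690-based board (example card mix)
    total_lanes = 42          # PCIe 2.0 lanes from the SR5690
    lane_mb_s = 500           # MB/s per PCIe 2.0 lane, per direction

    cards = {
        "SAS HBA #1 (x8)":            8,
        "SAS HBA #2 (x8)":            8,
        "dual-port 10GbE NIC (x8)":   8,
        "slog/L2ARC SSD card (x4)":   4,
    }

    used = sum(cards.values())
    for name, lanes in cards.items():
        print("%-28s %2d lanes, ~%4d MB/s" % (name, lanes, lanes * lane_mb_s))
    print("used %d of %d lanes; %d left over for onboard devices"
          % (used, total_lanes, total_lanes - used))

The takeaway is that a handful of x8 cards eats most of the budget, which
is why the extra lanes on the SR5690 (versus a single Intel IOH) buy you
room for another HBA or NIC without going through a bridge.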
Working for the Java VM group, I tend to be biased *against*
Hyperthreading, so I discount hyperthreads in a CPU and only count real
cores as useful. However, I don't have any ZFS-specific numbers to back
that bias up, so I'd be happy for someone with benchmarks to show whether
Hyperthreading provides a useful performance enhancement on a ZFS data
server.
On another front, I still don't have the warm fuzzies about FCoE. Maybe
for direct attachment (i.e. storage device to host), but I'm really not
convinced that the switch vendors have this down well (and I'm not
really sure that 10Gb Ethernet + FCoE-enhanced infrastructure will cost
any less than an 8Gb pure FC design - 10GbE certainly won't outperform
8Gb FC).
--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA