# Supermicro ZFS Workstation

## Spargeltarzan

Hello dear Community,

I am planning a high-end workstation with a lot of RAM (128 GB, maybe 256 GB later) to serve as my ZFS workstation for engineering and virtualization purposes (with Gentoo of course, what else?).

So far I have found the Supermicro X11DAi-N (TDP 205 W, 7.1 HD audio, Thunderbolt AOC support, up to 2 TB RAM, 4 PCIe x16 and 2 PCIe x8 slots, 2 UPI at 10.4 GT/s) - € 532

Intel Xeon Silver 4110, 8x 2.10GHz, tray (CD8067303561400) - € 510

2x KINGSTON 64GB 2666MHz DDR4 ECC Reg CL19 DIMM 2Rx8 Hynix A IDT

VGA Passthrough to Windows 

Rest of the components (hard disks, SSDs, case, etc.) I have already.

The price should not get much higher, ideally even lower, but I could not find anything cheaper that supports at least 128 GB RAM. What do you think? Any good alternatives?

ADD: Other manufacturers than Supermicro are also welcome.

----------

## 1clue

Responding to your request from another thread:

Observations that may be pertinent:

Your board uses the Aspeed 2500 VGA controller. Like the Aspeed 2400 in my system, it is not a high-performance graphics card; it's designed to facilitate remote-desktop-style console access. You'll want another video card on top of this.

I strongly recommend that you get at least one PCIe-based SSD. They can outperform any SATA-based device.

You'll want to research this yourself, but you may want to avoid hardware RAID. A Google search for 'hardware vs software raid' turns up several well-written arguments, both with respect to performance (GOOD hardware RAID is faster, but mediocre hardware may not be) and, more importantly to me, portability across different hardware, which you won't get with hardware RAID but which is easy with software RAID.
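
If you do go the software-RAID route on Linux, mdadm is the usual tool. A minimal sketch, assuming two placeholder disks /dev/sda and /dev/sdb (must run as root, and it destroys existing data on the member disks):

```shell
# Create a two-disk RAID1 (mirror) array from whole disks.
# Device names are placeholders; adjust for your system.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Watch the initial sync progress.
cat /proc/mdstat

# Record the array layout so it assembles automatically at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

The portability argument is visible here: the array metadata lives on the disks themselves, so the same pair can be assembled on any Linux box with mdadm, no matching RAID controller required.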

I'm not sure how much help I can be here. I love supermicro hardware, they don't take shortcuts for the sake of price.

However I barely know what ZFS is, and I have zero experience with GPU passthrough. My supermicro hardware uses an aspeed video chip (similar to the one on your board) which is designed to facilitate console over the LAN, which is pretty much the opposite direction you're taking.

My virtualization experience is limited to two or three scenarios:

Mostly headless VMs for enterprise servers

Desktop VMs to facilitate software testing on Windows/Linux systems similar to those used by customers.

I also made a few Windows Server VMs for my employer, for things like Microsoft SQL Server and emulating Windows-based app servers used by customers.

In none of those cases was rapid GUI performance a priority. Mostly I'm using straightforward minimal installations, mainstream filesystems and no frills. I have a few special-purpose setups, but nothing even close to what you're describing.

----------

## Jaglover

According to this, hardware RAID has an advantage when more than 10 disks are attached.

----------

## Spargeltarzan

For ZFS neither of them is necessary; the best approach is to use an HBA or the motherboard's internal ports and simply build a ZFS pool from the whole raw devices. ZFS does all the work. I did some intense academic research on ZFS, so I know a lot about it, but I know very little about Supermicro.
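
For what it's worth, building a pool from whole raw devices is a one-liner. A minimal sketch, assuming a mirrored pair and placeholder /dev/disk/by-id names (run as root):

```shell
# Create a mirrored pool named "tank" from two whole disks.
# The by-id paths are placeholders; they are preferred over /dev/sdX
# because they survive device renumbering across reboots.
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 \
    /dev/disk/by-id/ata-DISK2

# Verify pool health and layout.
zpool status tank
```

Because ZFS does its own checksumming, self-healing and resilvering, no hardware RAID controller should sit between it and the disks; an HBA in plain passthrough mode is exactly what it wants.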

I will use a dedicated graphics card for sure and will also consider a PCIe SSD. Thanks for your input :) What I am unsure about is which motherboard to buy, because Supermicro's product range is very opaque and there are so many different versions available.

----------

## mike155

 *Quote:*   

> 8x 2.10GHz

 

My personal experience is that a CPU with fewer cores but a higher clock (e.g. 4 cores at 4 GHz) nearly always outperforms a CPU with more cores but a lower clock. Of course, it depends on the programs you want to run on your system... But if you want a high-end workstation, you should definitely buy a CPU with more than 2.1 GHz.

----------

## 1clue

I almost never use RAID, and when I do it's usually raid1 and there are never 10 disks attached. My use cases tend to favor speed over "hot" redundancy, and a good backup plan keeps me fairly safe with respect to device failures and more.

ZFS, the more I hear about it, the more it seems like black magic. At some point I'll read up on it, but not today. :)

Supermicro, here's what I know:

I have direct experience with 2 boxes. One from my employer which runs ESXi and also the atom board here, for my home office: http://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-LN7F-2758.cfm

The bigger, older e3 box from work has been the most reliable box I've ever administered.

The Aspeed controller and IPMI 2.0 are incredible if you want to manage a server remotely. My Atom box has never had a keyboard or monitor. I can watch the boot process, get into the BIOS, whatever I need.

That same Aspeed controller is a closed-source lump of code that can't really be inspected by your CPU, and it can give somebody 1000 miles away hands-on control of your box. Some people have a problem with that; I call it a feature. I also take care that the IPMI-attached NIC does not have any access to the outside world: it's attached directly to my workstation by a separate NIC on a network which has no default route.
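
A sketch of that kind of isolation with iproute2 (interface name and addresses are placeholders): give the workstation-side NIC an address on the BMC's private subnet and simply never add a default route through it.

```shell
# Address the NIC that faces the IPMI port. This subnet exists only
# between the workstation and the BMC (names/addresses are placeholders).
ip addr add 192.168.100.2/24 dev enp3s0
ip link set enp3s0 up

# Deliberately no "ip route add default ..." on this interface:
# the BMC subnet is reachable locally, but nothing routes it outward.
ip route show dev enp3s0
```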

Supermicro makes hardware for data centers.

They sell no junk, but their choices might not be yours if you're making a desktop box.

Their lists of tested operating systems, RAM, etc. reflect the sort of testing you'd want to know about if you run a data center. That means Linux probably runs fine on it, but you'll have to ask around to be sure.

I've fooled around on their site; it's very easy to spend a six-digit number of dollars building a single box there.

Personal observations about boards in general:

PCIe-v3 is mature.

USB3 is mature.

With hardware like you're describing, consider 10GBase-T networking for at least one NIC, preferably something that will also sync down to gigabit.

M2 is perhaps not yet mature, but I think it's likely the devices that need work, not the interfaces on the motherboards.

M2.pcie seems a better arrangement than M2.sata.

For a small office or home office, 1U is noisier, more expensive and harder to cool than 4U, and 4U is the same compared to a desktop, and a desktop likewise compared to a tower.

Server hardware tends to assume a controlled environment for cooling and humidity and is often in a cleaner room than you might find in a small site. It's best to go a little overboard on cooling in the box for home/small office use.

Based on that, I'd personally go for hardware that supports PCIe-v3, USB3 and M2.pcie. I'd get a roomy box with external, washable air filters, put a supersized cooler on the CPU(s) and take extra care for nice big quiet fans and use internal cables that improve air flow. I'd also plan to open the box and suck out the dust bunnies on a regular basis.

----------

## Ant P.

I'd also add that Intel scalps customers who want to buy a CPU and motherboard with the ECC-disabling bit turned off. You should research both sides, especially now that Threadripper is on sale. All AMD CPUs support ECC (but not all motherboards, though you should be safe with SuperMicro).

----------

## Akkara

 *Quote:*   

> X11DAi-N

 

I don't know about this motherboard specifically.  But I have some experience with Supermicro stuff.  Generally excellent hardware.  However there's a few things to watch for that caught me by surprise:

Do you need suspend-to-RAM?  Apparently, not all of their motherboards support it, and it isn't clear from their site which ones do and which don't.  I had set up an X10DRi, which doesn't.  No idea why.  I tried "everything": updated the BIOS, played with settings, tried several different flavors of Linux, and no luck.  The kernel goes through all the right motions.  All the right things appear in dmesg.  The screen goes blank.  But the hardware never powers down: fans keep spinning, the CPUs seem to be idle but are still drawing power, 90-100 watts for the whole system.  Push a key on the keyboard to "wake" it and all the right things happen again: the screen comes on, dmesg shows the devices being re-initialized, and so on.  Repeated attempts to contact them have been met with silence.

Regarding M2.pcie: The X10DRi cannot boot it.  It is a trivial driver that's missing from the EFI.  It is easy to install via USB using the EFI command-prompt, and then suddenly the M2 storage is "seen" and can be booted.  Unfortunately, there doesn't seem to be a way of permanently adding it to the EFI-BIOS.  I emailed them and was told my query was being forwarded to their bios specialist.  And that's the last I heard from them.  Followup "pings" went unanswered.
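
The USB workaround from the EFI command prompt looks roughly like this (the driver file name is a placeholder; NvmExpressDxe.efi is the name commonly used for the open-source NVMe DXE driver, but it depends on where you obtained it):

```text
Shell> fs0:                    # switch to the USB stick's filesystem
FS0:\> load NvmExpressDxe.efi  # load the missing NVMe driver into the running EFI
FS0:\> map -r                  # rescan mappings; the M2 storage should now be "seen"
```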

There's a USB socket on the motherboard itself where you can plug a small memory-stick, and boot from that, to get around the M2 problem.

And, finally, it takes an inordinately long time for the BIOS to do its thing before it starts to boot.  This normally would not be an issue, but when the machine can't suspend, one ends up booting a lot more than usual.

Other than all this, it works great.

----------

## 1clue

Interesting.

I can't say whether either of the boards I've touched supports suspend. They're both server hardware used as servers. My Gentoo Atom box has suspend disabled in the kernel; however, it does handle low-power states.

Also I never had to open a trouble ticket with supermicro so I can't say if your experience is unique or typical.

----------

## 1clue

Experimented with my atom box with respect to Akkara's comments.

My system does not support sleep states.

It DOES support a ton of power management, c-states, etc for pretty much every system.

It DOES support wake-on-LAN, controllable for each of the 7 nics.
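
(On Linux this is usually inspected and toggled per NIC with ethtool; the interface name below is a placeholder:)

```shell
# Check whether the NIC supports and currently has Wake-on-LAN enabled.
ethtool eth0 | grep -i wake

# Enable wake on magic packet ("g"). The setting does not persist across
# reboots by itself, so distributions typically re-apply it from a hook.
ethtool -s eth0 wol g
```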

My system boots in about 2 minutes from power on to gentoo login screen.

My system supports fast boot which skips some steps. Mine is using "slow" boot.

It has always been my observation that server hardware takes a long time to boot compared to a workstation or laptop. I'm accustomed to having server hardware take anywhere from 5 minutes to 45 minutes to boot, depending on what kind of hardware and how much junk is packed into the box. The 45 minutes is not an exaggeration. It was an AS/400 I worked with some 15-20 years ago. I was told that the IBM mainframes took even longer.

My bios is far more complicated than any desktop motherboard's bios, based on what I've owned. I have never sat down and gone through it step by step like I have with desktop systems, because I could never dedicate enough time to understand all the options. That said, mine is configured for UEFI-only, which of course means UEFI boot and the first option is the boot loader.

I'm using UEFI+grub2, so that adds a bit of time to the process.

----------

## szatox

 *Quote:*   

> The 45 minutes is not an exaggeration. It was an AS/400 I worked with some 15-20 years ago. I was told that the IBM mainframes took even longer. 

I've been working with Cisco blades fairly recently: half a TB of RAM, roughly 30 minutes spent in the BIOS. I have seen blades with 1 TB of RAM too; I imagine those would take more time to test all the hardware, though perhaps not twice as long, since part of that time was spent on firmware compatibility tests (so the manager could force an upgrade/downgrade if it found a difference).

----------

