
Re: General-Purpose Server for Debian Stable



David Christensen writes:

On 2020-10-02 04:18, Linux-Fan wrote:
David Christensen writes:

On 2020-10-01 14:37, Linux-Fan wrote:

>    2x4T SSD for fast storage (VMs, OS)

I suggest identifying your workloads, how much CPU, memory, disk I/O, etc., each requires, and then dividing them across your several computers.

Division across multiple machines... I am already doing this for data that exceeds my current 4T storage (2x2T HDD, 2x2T "slow" SSD local and 4x1T outsourced to the other machine).

Are the SSD's 2 TB or 4 TB?

I currently have:

* 1x Samsung SSD 850 EVO 2TB
* 1x Crucial_CT2050MX300SSD1

together in an mdadm RAID 1.
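
(For reference, roughly how such a two-device mirror is set up -- the partition names below are placeholders, not the actual layout:)

  # create a two-device RAID 1 (partition names are placeholders)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  # watch the initial resync
  cat /proc/mdstat
  # persist the array so it assembles at boot (Debian paths)
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  update-initramfs -u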

For the new server, I will need more storage, so I envisaged getting two NVMe U.2 SSDs for 2x4T -- mainly motivated by the opportunity to upgrade performance and by the fact that such drives are not actually that expensive anymore:
https://www.conrad.de/de/p/intel-dc-p4510-4-tb-interne-u-2-pcie-nvme-ssd-6-35-cm-2-5-zoll-u-2-nvme-pcie-3-1-x4-ssdpe2kx040t801-1834315.html

Of course, given that server manufacturers have entirely different views on prices (a factor of 7 in the Dell webshop, for instance :) ), I might need to change plans a little...

I currently do this only for data I need rather rarely, so that the common tasks can still run on a single machine. Doing it for all (or large amounts of) data would require running at least two machines at the same time, which may increase the idle power draw and the possibilities for failure?

More devices are going to use more power and have a higher probability of failure than a single device of the same size and type, but it's hard to predict for devices of different sizes and/or types. I use HDD's for file server data and backups, and I use SSD's for system disks, caches, and/or fast local working storage. I expect drives will break, so I have invested in redundancy and disaster planning/preparedness.
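
To put a number on the first claim: assuming each drive fails independently within some period with probability p, the chance that at least one of n drives fails is 1 - (1-p)^n. A quick check (p = 0.05 is just an illustrative value):

  # chance that at least one of n drives fails, assuming independent
  # failures with per-drive probability p (p = 0.05 is illustrative)
  awk 'BEGIN { p = 0.05; for (n = 1; n <= 4; n++)
               printf "n=%d: %.3f\n", n, 1 - (1 - p)^n }'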

Yes. It is close to the same here with the additional SSD usage for VMs and containers.

Understand that a 4 core 5 GHz CPU and a 16 core 2.5 GHz CPU have similar prices and power consumption, but the former will run sequential tasks twice as fast and the latter will run concurrent tasks twice as fast.

Is this still true today? AFAIK all modern CPUs "boost" their frequency if they are lightly loaded. Also, the larger CPUs tend to come with more cache which may speed up single-core applications, too.

Yes, frequency scaling blurs the line.  But, the principle remains.
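
One way to see those limits on a running system is via the cpufreq sysfs interface (paths assume the cpufreq driver is loaded; base_frequency is only exposed by some drivers, e.g. intel_pstate):

  # maximum (boost) frequency core 0 may reach, in kHz
  cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
  # base frequency, if the driver exposes it (e.g. intel_pstate)
  cat /sys/devices/system/cpu/cpu0/cpufreq/base_frequency
  # instantaneous per-core frequencies, useful while a load is running
  grep "cpu MHz" /proc/cpuinfo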


I am not familiar with AMD products, but Intel does offer Xeon processors with fewer cores and higher frequencies specifically for workstations:

https://www.intel.com/content/www/us/en/products/docs/processors/xeon/ultimate-workstation-performance.html

AMD does it too, but their variants are more targeted at saving license costs by reducing the number of cores. As I am mostly using free software, I can stick to the regular CPUs.

If I go for a workstation, I will end up with Intel anyways, because Dell, HP and Fujitsu seem to agree that Intels are the only true workstation CPUs.

I would think that you should convert one of your existing machines into a file server.  Splitting 4 TB across 2 @ 2 TB HDD's and 2 @ 4 TB SSD's can work, but 4 @ 4 TB SSD's with a 10 Gbps Ethernet connection should be impressive.  If you choose ZFS, it will need memory.  The rule of thumb is 5 GB of memory per 1 TB of storage.  So, pick a machine that has at least 20 GB of memory.
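
For illustration, such a pool could be laid out as striped mirrors; the pool name "tank" and the device names are placeholders, and capping the ARC is optional:

  # striped mirrors across four disks (names are placeholders)
  zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
  # optionally cap the ARC, here at 16 GiB (value in bytes)
  echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf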

4x4T is surely nice and future-proof but currently above budget :)

Yes, $2,000+ for 4 @ 4 TB SATA III SSD's is a lot of money. But, U.2 PCIe/NVMe 4X drives are even more money.

Noted. Actually, 4x4T SATA is affordable, as is 2x4T U.2 if not bought from the server vendor [prices from HPE are still pending, but browsing for them on the Internet already scares me...] :)

[...]

down to their speed -- the current "fastest" system here has a Xeon E3-1231 v3, and while it runs at 3.4 GHz, it is surely slower (even single-threaded) than current 16-core server CPUs...

That would make a good file server; even better with 10 Gbps networking.

10GE is in place already, but there are other hardware limitations (see next).

Thinking of it, a possible distribution across multiple machines may be

* (Existing) Storage server (1U, the existing Fujitsu RX 1330 M1)
   [It does not do NVMe SSDs, though -- alternatively put the disks
    in the VM server?]
* (New) VM server (2U, lots of RAM)
* (New) Workstation (4U, GPU)

For interactive use and experimentation with VMs, I would need to power on all three systems. For non-VM use, it would have to be two... It is an interesting solution that stays within what the systems were designed to do, but I think it is currently too much for my uses.

The Fujitsu might do PCIe/NVMe 4X M.2 or U.2 SSD's with the right adapter card.

Been there, failed at that:

The backplane is a SAS/SATA one which exposes the four drive slots as four SATA connectors. They are collected by some Fujitsu adapter cable and brought to a matching port on the motherboard. The datasheet does not indicate in any way that I could add U.2 drives there.

Actually, before considering buying a new machine, I had thought about using M.2 SSDs in the (only) remaining free PCIe slot of the currently used Fujitsu RX 1330 M1. I could not find any indication of whether the motherboard supports PCIe bifurcation (the BIOS setup does not offer any option to turn it on, for instance...), so I tried a PCIe switch chip card instead, this one:
https://www.reichelt.de/pcie-x8-karte-zu-2x-nvme-m-2-key-m-lp-delock-90305-p256917.html?&trstct=pos_3&nbc=1

I added two SSDs, a Crucial P5 2TB M.2 NVMe and a Seagate FireCuda 510 2TB M.2 PCIe (all ordered together), and started the server. Nothing was recognized at the OS level, and opening up the 1U case showed a fault indicator LED at the PCIe slot where I had added the new card. I reseated the card and tried swapping the SSDs, but the fault would not resolve. Finally, I removed the card and put it away for later testing.

Meanwhile, upon rebooting the server, the network interfaces were no longer recognized properly (!). They appeared as "unprogrammed" in `lspci` and did not have any MAC addresses set... For a while it looked as if I had bricked my motherboard's NICs. Fortunately, another task required attention, so I shut down the server (useless w/o NICs anyways :) ), and after applying power again a few hours later, the NICs were back to normal.

To not put the server at further risk, I continued testing the PCIe switch card inside another computer, only to find out that that computer would not boot (it hung at a BIOS screen without an error message...) while the card was in place and populated with both SSDs. Whenever I used only one of the M.2 SSDs for testing purposes, it worked fine (the system booted, the SSD was recognized and accessible), but that was not helpful for building a RAID 1 out of the M.2 SSDs.

In the meantime, I had to replace the server's CMOS battery, and it again showed that scary issue wrt. the NICs not being recognized "temporarily" (i.e. until it had been shut down for a few hours).

I conclude that while I can be glad everything works again, the space for expansion is really used up here and new hardware is needed.
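
(For anyone debugging something similar: device enumeration and the negotiated link width/speed can be inspected with lspci -- the bus address below is a placeholder, and the full capability dump needs root:)

  # list NVMe controllers the kernel has enumerated
  lspci -nn | grep -i nvme
  # advertised vs. negotiated link width/speed for one device
  lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"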

Depending upon what your VM's are doing, a SATA III SSD might be enough or you might want something faster. Similar comment for the workstation.

Rather than a new VM server and a new workstation, perhaps a new workstation with enough memory and fast local working storage would be adequate for both purposes.

Maybe; I will get some prices for comparison... In terms of the base model price, I do not expect much difference between a server and a workstation of the same computational power, but if the workstation allows custom HDDs while staying under warranty, it might be much cheaper.

SSD performance for the VMs is already acceptable with SATA (as it is now, with concurrent usage limited by the RAM). Faster is always welcome, though :)
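
If concrete numbers are wanted, a short fio run gives a rough comparison between the SATA and NVMe paths; the file name, size and runtime below are arbitrary illustrative choices:

  # 4k random read/write test against a scratch file
  fio --name=vmtest --filename=/mnt/ssd/fio.tmp --size=2G \
      --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
      --direct=1 --runtime=60 --time_based --group_reporting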

Thanks again
Linux-Fan

[...]

--
── ö§ö ──


