
Re: mdadm usage



Linux-Fan wrote:

> deloptes writes:
> 
>> Andrei POPESCU wrote:
>>
>> >> Each LSI card has a 6 bay cage attached and I have raided 6x2TB WD RED
>> >> spinning discs (for data) and 2x1TB WD RED spinning discs (for OS)
>> >
>> > 1TB for OS (assuming RAID1) seems... excessive to me. All my current
>> > installations are in 10 GiB partitions with only a separate /home.
>> >
>> > Even if I'd go "wild" with several desktops installed (I'm only using
>> > LXDE), LibreOffice, etc. I'd probably get away with 50 GiB or so. Check
>> > the output of:
>> >
>> > du -hx --max-depth=1 /
>> >
>> This is true. The root partition is not big - the rest of the space I'll
>> use for data, but I do not want to use smaller disks, because I would
>> lose two bays and have the power consumption anyway. I think 1TB is a
>> good compromise. I leave some disk space as spare for LVM and LVM
>> snapshots. I put the OS there and, for example, the NFS root/boot stuff
>> or some QEMU machines.
> 
> Sounds OK to me :) From my point of view, I would work towards reducing
> the total number of disks, given that spinning disks of 8 TB capacity and
> SSDs of 4 TB capacity are readily available today. YMMV
> 

Yes, but the prices of these SSDs are very high compared to spinning disks.
When looking at the bigger WD RED disks (3-4TB) a few years ago, I found
that many people complained those disks do not have the same quality as the
2TB WD RED, so I stuck with those even though I could have saved at least
two bays. I still have 4 unused bays anyway. The only downside is the power
consumption ... it would be at least 10W less ... but that is negligible.
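
On the LVM spare space mentioned above: the idea is to leave free extents
in the volume group so a snapshot can be created when needed. A minimal
sketch, assuming a volume group named vg0 with a root LV (the names are
hypothetical):

  # create a 10G snapshot of the root LV
  lvcreate --snapshot --size 10G --name root_snap /dev/vg0/root
  # remove the snapshot once the backup is done
  lvremove /dev/vg0/root_snap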

>> >> I somehow cannot convince myself that I need to replace any of these
>> >> with SSDs.
>> >> I don't want the cheapest but also not unnecessarily expensive drives;
>> >> I just find it hard to evaluate which drives are reliable.
>> >
>> > The reliability matters much less with RAID1. By the way, the "I" in
>> > RAID stands for "inexpensive" ;)
> 
> [...]
> 
>> I am too old for blind experimenting. This is why I'm asking if someone
>> has experience with SSDs in RAID with consumer-grade disks. The ones I
>> see installed in servers are not available on the consumer market.
> 
> If I understood it correctly, you initially asked about NAS-grade SSDs. I
> believe that is quite a "special" purpose, because I tend to think of a
> NAS as a slow but large storage space where SSDs are indeed rare.
> 

Yes, maybe I asked incorrectly - I use WD RED NAS spinning disks and I am
looking for a replacement for them. Most of the data is NAS-grade data
(movies/music/documents) - a lot of reads, but almost no writes.
Virtual machines and development are a different type of data - I keep them
on a different pair of disks.
For these I am also looking for a replacement, but as mentioned it costs (a
2TB SSD is about €250,-), and while I think I would gain something, given
the LSI SATA II controllers I was not sure it would indeed pay off.
After this discussion, I understand it is worth considering and it would
pay off. This is why I am asking for recommendations - maybe the right move
is a replacement for both types. I must admit I am now considering either
dedicated disks (SSDs) for development, VMs and the OS - perhaps buying two
3TB SSDs to replace the 2x1TB and 2x2TB WD RED NAS - or replacing just the
2x1TB with SSDs for the OS and VMs, since for development I do not care if
compiling takes 20% more time.
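
To check what speed the controller actually negotiates per drive, something
like this should work (assuming smartmontools is installed; /dev/sda is a
placeholder for the drive in question):

  smartctl -i /dev/sda | grep -i 'SATA Version'

On a SATA II link it should report a current speed of 3.0 Gb/s.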

> I have had good experience with the following two "consumer-grade" SSDs in
> an mdadm RAID 1 (taking the I for inexpensive literally :) ). Both have
> about 8000 hours of operation according to SMART and when in use they ran
> about 12h/day (i.e. normally not 24/7):
> 
> * Samsung 850 EVO 2TB
> * Crucial MX300 2TB
> 
> At the time, these were the cheapest SSDs I could get with 2TB. Despite
> their performance being "mediocre" (for SSDs, that is), there were no
> problems with RAID operation whatsoever.
> 

Thank you, this is what I am looking for - personal experience. I have been
looking at the Samsung 850 EVO 2TB. Can you share the exact model number,
please?
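
For reference, this is what smartctl -i reports for my current drives
(assuming smartmontools; /dev/sda stands for the first drive):

  smartctl -i /dev/sda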

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD20EFRX-68EUZN0

As mentioned before, I had issues with Seagate drives - they failed
constantly, making me nervous, and for years I always had to keep a spare
around, until I decommissioned the last one. Amazingly, that last one
worked for 12 years - it was a 500GB Seagate drive.

> For my new system, I got two Intel DC P4510. These were actually available
> to me as a "consumer" despite them being (old) "datacenter" SSDs. They run
> much faster, but most of the time one does not notice the difference. My
> typical workload that benefits from the "faster" SSDs is installing
> packages and updates in multiple VMs and containers (e.g. 4-6 VMs and two
> containers) at once. Apart from the potential difficulty of purchasing
> such SSDs, they are also more difficult to put into systems due to
> different connectors: server SSDs use either SAS or (in the case of the
> DC P4510) U.2.
> 

Well - you say new system - it is a nice story, but I have the limitations
I already listed above.

>> >> I saw there are 1TB WD RED SSDs targeting NAS for about €120,-
>> >> WESTERN DIGITAL WD RED SA500 NAS 1TB SATA (WDS100T1R0A)
>> >
>> > The speed gain of SSD vs. spinning discs for the OS is hard to
>> > describe. Think jet aircraft vs. car.
> 
> [...]
> 
>> Yes, but as mentioned the LSI controllers I use in the server are SATA
>> II, so throughput will be capped at 300MB/s - does it make sense to
>> replace the good WD RED spinning disks with SSDs?
>> I already heard one good argument.
> 
> How much do you rely on random access to the actual data? As others have
> already posted, putting an OS onto the SSD is an exceptional performance
> gain for all OS-startup related tasks including "running program x for the
> first time" or OS upgrades (apt-get operations in general). If, however,
> you are considering using the SSD mostly for "data", it highly depends on
> what type of data you have:
> 
> * If it is OS-style data like VMs, containers, compiler toolchains,
> chroots etc. then there will be a significant performance improvement,
> because these all benefit from reduced latency of access.
> 
> * If it is media like music, pictures etc. served over a typical network
> protocol, the performance of HDDs may be entirely sufficient. Some
> media-related tasks like "downscale 10000 images from 700x700 to 500x500"
> may also benefit from the SSD if files are small enough that the access
> time becomes relevant.
> 
> * Additionally, if you have a small set of data that you are accessing all
> of the time and the OS manages to cache this into RAM, you will only
> benefit from the SSD performance upon first access. On systems that run
> 24/7, the benefit of SSDs is greater in database-style, continuously
> random-access-intensive applications rather than in typical file access
> patterns.
> 
> As others have noted, the performance gain of SSDs is largely independent
> of connector. You can get an improvement even on old connectors and to
> some extent also on old systems. Unless you are thinking of using IDE SSDs
> (special-purpose devices which are mostly _not_ used for performance),
> everything should be fine in that regard :)

OK - thank you - this is the most complete answer and I accept it. I must
once again admit that the discussion helped me put some order in my
thoughts.
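
For anyone who wants to quantify the random-access difference before
buying, a quick fio run on the existing array gives a baseline. A minimal
sketch, assuming the fio package is installed and /mnt/data is a mount
point on the array (the path is hypothetical):

  # 4k random reads, bypassing the page cache, for 30 seconds
  fio --name=randread --directory=/mnt/data --rw=randread --bs=4k \
      --size=1G --ioengine=libaio --direct=1 --iodepth=16 \
      --runtime=30 --time_based

The same run with --rw=read gives the sequential figure for comparison.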

As mentioned, I am also thinking of splitting up the disks by use. The
multimedia would stay on the WD RED, but I will look to replace the OS, VM
and development disks.
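
If I go that way, the replacement pair would go into a new mdadm RAID1 just
like the current ones. A minimal sketch, assuming the new SSDs show up as
/dev/sdX and /dev/sdY (device names are hypothetical):

  # create a RAID1 array from one partition on each SSD
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
  # watch the initial sync
  cat /proc/mdstat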

If someone has good experience with SSDs in RAID, please share the device
model, family and manufacturer.

thank you once again
regards

