Christoph!! Well said! It is one of the shortest and cleanest explanations of this topic ;).

Christoph Ulrich Scholler wrote:

Hi,

On 24.10. 16:46, Jason Lim wrote:

I have been investigating cheap "software" or "host-based" RAID cards. They tend to be magnitudes cheaper than real hardware RAID cards like 3ware/AMCC. For some purposes you don't care about the CPU overhead but still want something hardware-based (not md software RAID), especially when your on-board controller is crap.

Sorry for not directly answering your question, but I would like to bring another point to your attention. Cheap RAID cards usually don't have their own RAID logic on board but rely on the driver (i.e. a piece of software) to perform their duties, so nothing is gained there. Nevertheless, they usually have their own on-disk format for the RAID metadata. Now what do you do if your card dies a year after purchase? In my experience it is characteristic of cheap hardware that it gets changed over time by the manufacturer (a new chip revision, firmware, or some other slight "improvement"), and the exact same card is not available even a few months later. It is quite possible that the metadata format has changed by then, and a card from a different manufacturer will almost certainly use a different metadata format. Your disks become unreadable to you, and your data is lost.

You can still use cheap "RAID" cards as plain disk controllers and implement the RAID functions via the Linux md drivers.

Sincerely, uLI
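That last suggestion (cheap card as a plain controller, mirroring done by Linux md) can be sketched roughly as below. The device names /dev/sdb and /dev/sdc, the mirror level, and the mdadm.conf path are assumptions for illustration; adapt them to your own disks and distribution. The script is guarded so it is a harmless no-op unless run as root with mdadm installed.

```shell
# Sketch: use the cheap card only as a disk controller and let Linux md
# do the mirroring. /dev/sdb and /dev/sdc are placeholder device names;
# running this for real DESTROYS any data on them.
if [ "$(id -u)" -eq 0 ] && command -v mdadm >/dev/null 2>&1; then
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext3 /dev/md0                    # the filesystem lives on md0, not on the card
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # remember the array at boot (Debian path)
fi
echo "md RAID1 sketch finished"
```

The point is that the RAID metadata is now in the well-documented md format, readable on any Linux box with any controller, instead of in some vendor's private on-disk layout.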
A few words to add: these semi-hardware cards often suck as "RAID" (due to their software nature :) but are quite good as plain disk controllers. Naturally, if you buy a mainboard with a "RAID controller" on board, this should in most cases be translated as "has additional (faster) disk controller(s) so you can attach more than 2 or 4 drives". The rest you can safely skip ;) - just ask whether it is software RAID.
-----
And more than a few words about "real hardware" ATA/SATA RAID controllers: usually they DO perform all of the RAID work by themselves, and the OS sees them as one (or several) plain disks.
This means two things. On the one hand, no main CPU time is spent on the RAID work. On the other hand, the array does not go stale if you starve the CPU with other processes: the RAID stays clean even if you overload the machine to the "needs a hardware reset" point. I have had to resync software RAID after exactly that kind of accident - md reported one member as failed after such a restart, though it resynced without problems afterwards. The hardware itself was fine; I used it without problems before (without RAID) and still use it now with a real hardware RAID controller, a lowest-class 3ware 2-port PCI board.
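For an md accident like the one described above, the recovery is usually straightforward. A minimal sketch, assuming the array is /dev/md0 and the kicked member is /dev/sdb1 (both placeholders for your actual devices):

```shell
# After a hard reset, see whether md kicked a member out.
# A [U_] in /proc/mdstat means one mirror half is missing.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
fi
# Re-add the failed member and watch the resync (needs root; the device
# names are placeholders):
#   mdadm --manage /dev/md0 --re-add /dev/sdb1
#   watch -n 5 cat /proc/mdstat
echo "mdstat check finished"
```

If the disk is genuinely healthy, as in the case above, the resync completes and the array returns to a clean [UU] state.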
Furthermore, the more expensive "real hardware" cards have a large amount of cache, and performance is really boosted, but this varies and is related to the price of the card. Check with bonnie++, not hdparm.
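The reason for that advice: hdparm -t only times raw sequential reads from the device, while bonnie++ exercises seeks, rewrites, and file creation on a real filesystem, which is where a card's cache actually shows up. A sketch of both, where /dev/md0, /mnt/raid, and the user name are assumptions:

```shell
# hdparm -t: raw sequential read speed of the underlying device only.
if [ "$(id -u)" -eq 0 ] && command -v hdparm >/dev/null 2>&1; then
    hdparm -t /dev/md0              # placeholder device name
fi
# bonnie++: real filesystem workload (seeks, rewrites, creates) on the array.
# -d names a test directory on the RAID, -u an unprivileged user to run as.
if command -v bonnie++ >/dev/null 2>&1; then
    bonnie++ -d /mnt/raid -u nobody
fi
echo "benchmark sketch finished"
```

Compare the bonnie++ numbers with and without the card's write cache enabled; the sequential hdparm figure will often barely move while the random/rewrite figures change a lot.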
So, consider what "cheap" means. Also consider a good backup system for the workstations: if it is not a problem to stop a machine and restore it from backup, and the user data lives on a server, you may not need software RAID at all. I use it where I want to control the exact moment a disk is shut down and replaced; if workstation downtime is not an issue, backups plus a server are better, because RAID will not save you (or the user) after an rm -rf *.

If you have more than 10 workstations on one LAN, it is better to buy something to use as a server, with hardware RAID 5 for user data and backups. Compare the prices: 10 new, say, 80GB HDDs versus one PC, one hardware RAID board, and two or three bigger HDDs (say, 250GB) for the "server".