
Re: Homebuilt NAS Advice



>> Extra space taken, extra power used 24/7 (which in turn requires an
>> extra plug because the poor BananaPi can't provide all that power),
> 	Now it is my turn to ask, "Seriously?"

[ See, our use cases *are* very different.  ]
Yes, in my experience Banana Pis quickly become unreliable if you push
their power system even a bit: I had no end of problems with mine until
I finally found a USB cable good enough to deliver stable power.

>> Extra failures (more hardware => more failures), ...
> 	More failures on average, but far less serious and costly ones.

In terms of monetary cost, the RAID solution is definitely on the "more
expensive" side (at least overall; and I can't see why it would be
cheaper "on the spot" when you need to replace a failing drive).

As for less serious, as explained I have enough machine-level redundancy
that any hardware failure isn't really serious.

>>>> RAID is basically an insurance.
>>> 	Not entirely.  A RAID 5 or RAID 6 array is far, far faster than
>>> 	a single hard drive.
>> Right, RAID-over-USB is of course going to blow my SSD-over-SATA out of
>> the water by a wide margin (not!).
> 	On the Banana Pi it can.

I'm not sure what makes you think so.  In terms of bandwidth, USB2
limits you to about 30MB/s, so even against the old 50MB/s write-speed
ceiling of the A20's SATA driver you're still going to lose.  In terms
of latency, I have a hard time imagining how your RAID could be enough
faster than a single SSD to overcome USB2's slowness.
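As a back-of-the-envelope check of that 30MB/s figure (the ~55%
protocol-efficiency factor below is my own rough assumption, not a
measurement):

```shell
# USB2 signals at 480 Mbit/s, but bulk-transfer protocol overhead
# leaves only a fraction of that for actual payload data.
awk 'BEGIN {
  theoretical = 480 / 8            # 60 MB/s raw signalling rate
  practical   = theoretical * 0.55 # assumed ~55% efficiency after overhead
  printf "theoretical: %d MB/s, practical: ~%d MB/s\n", theoretical, practical
}'
# prints: theoretical: 60 MB/s, practical: ~33 MB/s
```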

>>> 	It is also much larger than a single hard drive, sometimes at
>>> 	less expense than a single large hard drive.
>> That's great if you happen to be in that spot, but that's not my case.
> 	What spot?  You mentioned cost.  For many configurations, 4 small
> 	drives are cheaper than 1 large one.

My space needs are sufficiently low that I don't need to buy an
expensive large drive.  The drive with the lowest cost in terms of "GB
per $" is plenty for my needs.
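To make that metric concrete, here is the kind of comparison I mean; the
prices are purely hypothetical placeholders, the point is the metric:

```shell
# Hypothetical drive prices, for illustration only: compute cost per GB
# and pick whichever is cheapest while still covering your space needs.
awk 'BEGIN {
  printf "2TB @ $60  -> $%.3f/GB\n", 60 / 2000
  printf "8TB @ $160 -> $%.3f/GB\n", 160 / 8000
}'
# prints: 2TB @ $60  -> $0.030/GB
#         8TB @ $160 -> $0.020/GB
```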

>>> 	It is also portable from one system to another.  Unplug the array
>>> 	from the laptop, plug it in to the Banana Pi, and presto! The array
>>> 	is now attached to the Pi.
>> Wonderful.  But then it's not a RAID shared with the internal drive
>> any more.  So it won't protect my root partition.
> 	It can, but I probably would not.

I don't think it can, because you would then have to sync the external
drive with two different internal drives, and that seems to be asking
for trouble: the two systems would need to stay 100% in sync.

>> And if I keep my home partition in it, the "presto" comes with the
>> footnote "after you logout and log back in" (fun!)
> 	You lost me.  Why log out?

In my experience, unplugging a USB disk while it's mounted is a recipe
for hangs, and replugging it does not always bring the partition back to life.

>> but if I don't keep my home partition on it, then my home partition
>> is again not protected by RAID, at which point I'm starting to wonder
>> what I would put on that RAID.
> 	Data, but then I definitely recommend putting /home on the external
> 	array, so the question is a bit moot.

I don't have other data than /home on my laptop.

>>> 	It's really not any different logically than an external drive,
>>> 	except it is faster, larger, and more robust.
>> It's no different, indeed, except a bit more expensive and bigger.
> 	Well, OK.  How much is down-time worth?

What down time?
You mean the time to walk over and grab my hot spare laptop?

> 	If you consider the cost of the downtime associated with
> 	a failed system to be trivial, then that aspect is
> 	not important.

That's exactly my point.

Even more so when that downtime only happens once every 10 years or so
(my rate so far is a bit lower than that, but let's assume that a drive
of mine will fail tomorrow).

>> But more importantly: there's a reason why I'm not using an external
>> drive at all in the first place.
> 	I am sure there is.  Do you admit that external drives are extremely
> 	popular?  Literally millions of them are sold every year.

Yes, and so were floppies and optical drives.  I stopped using floppies
when I got access to the internet, and I never started using optical
drives because... well, I already had access to the internet at that
point ;-)

> 	Yes, of course.  I mistook what you were saying.  Are you
> 	suggesting that is not the case for a non-RAID system, however?

No, I'm just saying that RAID will save me from this trouble once every
10 years, but other things will still cause me to lose some of my work
several times a year, so the gain of RAID is a drop in the ocean.

>>> 	That is another matter.  Indeed, it is probably the most likely
>>> 	reason a need for a backup solution exists.
>> For my use-case, RAID would cost a fair bit of money and inconvenience,
>> and the benefit would be rare and minor.
> 	How much does downtime and time to rebuild a system cost you?

As I said, downtime is minimal because I have other machines I can use
"on the spot".

As for time to rebuild a system, it's a matter of connecting the new
drive to my running system (via an external enclosure) to clone the
100GB or so of my root+home and then put the drive into its
final destination.

Not fundamentally very different from plugging it into the machine and
waiting for the RAID system to do the clone.  The main difference is
that the machine is temporarily unusable, but since I have other
machines that's not a significant issue.
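For what it's worth, the clone step amounts to something like the
following.  The paths are placeholder scratch directories so the sketch
is safe to run as-is; on a real rebuild the source and destination would
be mount points like / and /mnt/newdrive:

```shell
# Stand-ins for the old system and the new drive in its USB enclosure.
mkdir -p /tmp/olddrive/home /tmp/newdrive
echo "some work in progress" > /tmp/olddrive/home/notes.txt

# -a preserves permissions, timestamps, symlinks, etc.; the trailing
# slash on the source copies its contents rather than the directory itself.
rsync -a /tmp/olddrive/ /tmp/newdrive/
```

A bootable clone would additionally need the bootloader reinstalled and
/etc/fstab adjusted; the sketch only covers the data copy.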

> 	For many of us, even an hour of downtime is very expensive, and
> 	rebuilding a machine during peak hours can be hideously costly.
> 	It is always a pain in the ass.

Yes, I agree that RAID can be handy in some contexts.

> 	Perhaps so.  How many people - especially Debian users - consider
> 	a catastrophic failure to be cost-less?  Debian's paradigm is based
> 	upon being rock solid and stable.  Ubuntu gives up stability for the
> 	sake of new bells and whistles.  I expect the typical Debian user is
> 	concerned about things like downtime and reliability.  You seem not
> 	to be.  If you are aware of and willing to take the risk, then that
> 	is fine.  I certainly am not, and as a responsible professional
> 	I certainly would not ever recommend it.  If anyone loses something
> 	based on my recommendation, they are very likely to come back and
> 	scream at me.  I really don't care for that to happen.

I think part of the difference is also where we get the "reliability via
redundancy".  I use my machines in such a way that none of them
are indispensable.

Another part is different yet: I spend a fair bit of my time hacking on
my own editor (in which I do all my work), so it tends to crash more
often than it should because of bugs I've just introduced.  I've gotten
used to working under the assumption that "failure is normal".


        Stefan

