
Re: Single root filesystem evilness decreasing in 2010? (on workstations) [LONG]



Robert Brockway wrote:
[...]
Possibly. I didn't mean to suggest that dd was a good way to backup. I think it is a terrible way to backup[1]. I was talking about dump utilities. I started using dump on Solaris in the mid 90s and really like the approach to backing up that dump utilities offer. On Linux I use xfs a lot and backup with xfsdump in many cases.

OK, now we're on the same wavelength.
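
For reference, a minimal level-0 xfsdump round trip might look roughly
like this (the paths and labels are made up, and I haven't tested this
exact invocation):

  # full (level-0) dump of /home to a file on another volume
  xfsdump -l 0 -L "home-full" -M "backup01" -f /backup/home.level0 /home

  # restore it elsewhere to check that the dump is usable
  xfsrestore -f /backup/home.level0 /mnt/restore-test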

[...]
Sure. GPFS (a commercial filesystem available for Linux) allows for the addition of i-nodes dynamically. We can expect more and more dynamic changes to filesystems as the science advances.

I once nearly ran out of i-nodes on a 20TB GPFS filesystem on a SAN. Being able to dynamically add i-nodes was a huge relief. I didn't even need to unmount the filesystem.

Oh, that.  I was reading too much into it again - that's neat indeed.
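
(Incidentally, a quick way to keep an eye on that kind of problem is
df -i, which reports i-node usage per filesystem; /srv/data below is
just a made-up example mount point.)

  # how close is the filesystem to running out of i-nodes?
  df -i /srv/data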

I was actually referring to the performance problem in my original post,
which of course depends heavily on the filesystem type.  I remember
reading stuff about NTFS (old stuff, probably not relevant anymore)
saying that a big MFT would impact performance by 10-20% (whatever that
means) depending on its size.  I was wondering whether that could be
true for other filesystems as well, but I suspect not, since I've never
seen anyone actually consider this.

OTOH - I haven't studied XFS - but from the little overviews I've read
about it, I suppose its allocation groups are a way to keep this problem
from scaling with filesystem size (along with other unrelated advantages
like parallelism in multithreaded environments).  What happens if a
filesystem doesn't have anything like them?

BTW, that's a way to dynamically (and automatically) add i-nodes, too
(unless I've missed the point).
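
For what it's worth, and assuming an XFS filesystem mounted at
/mnt/data (a made-up path), the allocation group layout is visible with
xfs_info, and the count can be chosen at mkfs time:

  # agcount/agsize in the output are the allocation groups
  xfs_info /mnt/data

  # pick the number of allocation groups explicitly when creating the fs
  mkfs.xfs -d agcount=32 /dev/sdb1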

Maybe no-one cares because we currently don't have filesystems big enough to
actually see the problem?

[...]
The core of any DR plan is the KISS principle. There's a good chance that the poor guy doing the DR is doing it at 3am so the instructions need to be simple to reduce the chance of errors.

If the backup solution requires me to have a working DB just to extract data or wants me to install an OS and the app before I can get rolling then I view it with extreme suspicion.

I agree with that, but I know it's because I, personally, *need* to know
what's going on, all the time.  Some people are OK with letting a program
(even such a critical one) do some magic; and without having tested any
"complex" one, I suspect their authors try to keep things simple for the
user.

The problem is, if there's a problem with the backup system itself, then
it's going to be a long night.  If there's no need for such software,
then I agree, once again: there's no point in taking risks, even minimal
ones.

Considering your experience, I have to believe you; even very large
systems can always be backed up very simply.  It's just strange to
picture: would all these complex backup systems really be useless?
(I know, it's not a binary answer, but you know what I mean.)

And for those people who think that off-site/off-line backups aren't needed anymore because you can just replicate data across the network, I'll give you 5 minutes to find the flaw in that plan :)

I guess I'm perfectly OK with that, but are we still talking about
workstations?  :-)

[...]
Free is telling you the total memory in disk cache. Any given page in the cache may be 'dirty' or 'clean'. A dirty page has not yet been written to disk. New pages start out dirty. Within about 30 seconds (varies by filesystem and other factors) the page is written to disk. The page in the cache is now clean.

Unless your system is writing heavily most pages in the cache are likely to be clean.

Yup, I think I had that right.
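
(The ~30 seconds you mention seems to correspond to the
vm.dirty_expire_centisecs knob - 3000 centiseconds by default, if I'm
not mistaken - and the behaviour is easy to watch in /proc/meminfo:)

  # maximum age of a dirty page before the flusher writes it out
  sysctl vm.dirty_expire_centisecs

  # how often the flusher threads wake up (500 = 5 seconds by default)
  sysctl vm.dirty_writeback_centisecs

  # watch dirty pages drain to disk after a big write
  watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'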

The difference is that clean pages can be dumped instantly to reclaim the memory. Dirty pages must be flushed to disk before they can be reclaimed. Using clean pages allows fast read access from the cache without the risk of not having committed the data. I describe this as 'having your cake and eating it too'[2].

My understanding is that the "cached" column of the output of free(1) is
the sum of all pages, clean and dirty.  The "buffers" column would be
kernel-space buffers (as implied by the manpage), and
"used"-"buffers"-"cached" would be userspace memory.

Since there's no "cached" column for the swapspace, I guess no clean page
gets pushed there, although it could be useful if that space is on a
significantly faster volume.  Anyway, the "used" column should be the total,
actual swapspace used, so your comment kind of confuses me.  Am I really
wrong here?

I can't find any documentation in the procps package, and I think I need some.
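
In the meantime, free(1) takes its numbers from /proc/meminfo, so the
raw fields can at least be checked directly (field names vary a little
between kernel versions):

  # the fields free(1) aggregates into its "buffers" and "cached" columns
  grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapCached|Dirty):' /proc/meminfo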

[...]

Thanks.

-thib

