
Re: Aranym installation, was Re: Centris 650 Debian 10 SID Installation



Can we move systemd vs. sysvinit discussions to a more general forum? I'm sure it would be welcome there.

Cheers,

	Michael

(who didn't realize how much he missed the good old Usenet and mailing list flamewars of the past ...)

On 22.06.2019 at 03:02, John Paul Adrian Glaubitz wrote:
On 6/21/19 4:08 PM, userm57@yahoo.com wrote:
Yes, I think there are valid reasons, so we can disagree.  On a
production system, it makes sense to separate user files from system
files, including system logs.

On production systems, you don't collect system logs locally but forward
them to a loghost. At least that's what I know to be common.
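
A minimal sketch of what that forwarding can look like at the application
level, using only Python's standard logging module; the host name
"loghost.example.org" and port 514/UDP are placeholders for whatever the
site actually runs:

    # Forward application logs to a central loghost over UDP syslog (port 514)
    # instead of a local file, so a full local filesystem can't swallow them.
    # "loghost.example.org" is a placeholder host name.
    import logging
    import logging.handlers

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(address=("loghost.example.org", 514))
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("this record goes to the loghost, not to /var/log")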

So if user directories are in, for
example, /usr/people, but there's only a single filesystem for
everything, then users can fill up the filesystem,

User directories haven't been in /usr for at least two decades now,
so that argument doesn't hold. User directories are in /home, which
is always a separate filesystem in large production environments; the
rest of the system is more or less static.

and then if there's a
crash it can be difficult to figure out what happened because the
filesystem filled up.

Sure. But user directories are not stored below /usr, so that argument
doesn't hold. On the contrary, I have seen a lot of cases where a separate
/var filled up.

In the old days, it was common in a security
attack for a local non-privileged user to fill up the root filesystem
and then proceed with escalation attacks, which then would not be logged
(which is why /tmp should also not be on the root filesystem).

As you say, "in the old days".

This
case can be addressed by having a separate filesystem for home
directories, such as /home.

A separate /home is not the same as a separate /usr. A separate /home
is still fully supported.

A second case I can think of is if an administrator wants slightly
different executables with the same names that behave differently
whether called by a non-privileged user or root.  In that case, these
different executables can exist, for example, in /sbin and /usr/sbin,
where /sbin is not readable by users but exists before /usr/sbin in
root's path.

That's a constructed use case and nothing that would exist in the real
world, as the administrator can just use PATH variables and ACLs to
cover everything possible in this regard.
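
For what it's worth, the PATH half of that can be shown in a few lines;
a sketch only, and the two orderings below are made-up examples, not
anyone's recommended configuration:

    # Which binary a name resolves to depends purely on PATH ordering, so
    # root and unprivileged users can get different executables for the
    # same name just by listing /sbin and /usr/sbin in a different order
    # (or not at all). The two PATH values here are illustrative.
    import shutil

    root_path = "/sbin:/usr/sbin:/usr/bin:/bin"
    user_path = "/usr/bin:/bin"

    for name in ("ip", "ifconfig"):
        print(name,
              "as root ->", shutil.which(name, path=root_path),
              "| as user ->", shutil.which(name, path=user_path))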

A third case involves corrupted filesystems.  If root is 2 GB and usr is
500 GB, and root becomes hopelessly corrupted, then only 2 GB needs to
be restored instead of 502 GB.  If usr becomes corrupted, then (ideally)
single user mode with critical files such as dump and restore in /sbin
can be used to restore usr without resorting to booting from an
alternate partition or installation media.

How does that help if your root partition gets corrupted? This is, again,
a very constructed use case. Also, if your system doesn't boot, just use
a rescue boot medium, which is the best thing to do anyway to prevent any
further filesystem damage.

But it's all good.  There's an active community (and Linux
distributions) that are still working to maintain sysvinit.

This is not related to systemd vs. sysvinit.

Of course it is.  Entire distributions have forked over the disagreement
in philosophy between systemd and sysvinit.

No, it's absolutely not. A lot of applications nowadays assume that /usr
is no longer a separate filesystem. As explained in the linked Freedesktop
article, a separate /usr is simply no longer tested by most userspace
applications and hence broken.

I won't rehash the systemd / sysvinit arguments here; it's clear which
direction most distributions are taking.  People who want the
older, simpler way, especially for older systems, or who like to
maintain some compatibility with the BSD universe, will be able to find
a way.

Forking hundreds of shell instances for doing simple things like string
substitution isn't efficient. It's a brain-dead design. Anyone who thinks
that sysvinit is the original Unix design has never used an original
Unix. sysvinit has always been a hack.
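
The forking overhead is easy to measure; a rough sketch comparing one
external process per substitution (roughly what a shell script does when
it pipes through sed or tr in a loop) with doing the same work in-process.
The numbers obviously depend on the machine:

    import subprocess
    import time

    N = 200
    text = "eth0 is up"

    # One fork/exec of sed per substitution, as a shell loop would do it.
    start = time.perf_counter()
    for _ in range(N):
        subprocess.run(["sed", "s/eth0/eth1/"],
                       input=text, capture_output=True, text=True)
    forked = time.perf_counter() - start

    # The same substitution done in-process.
    start = time.perf_counter()
    for _ in range(N):
        text.replace("eth0", "eth1")
    in_process = time.perf_counter() - start

    print(f"{N} forked sed calls: {forked:.3f}s, "
          f"in-process replace: {in_process:.6f}s")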

And, FWIW, I recommend reading the "UNIX-Haters Handbook" [1] for
anyone who is still convinced the "old traditional Unix way" (TM)
is the way to go. It isn't. Original Unix sucks. I have used HP-UX,
OSF/1 and old versions of Solaris and they are all horrible to use.

Well, I've used IRIX, SunOS (Solaris) 4-7, ConvexOS and others, and they
all had something similar to sysvinit (startup scripts based on run
levels).

Modern Solaris has SMF; macOS (which officially is a UNIX(TM)) has launchd.


All operating systems have their pros and cons.  Regardless of
how inefficient the startup scripts are, they don't run very often, so
efficiency isn't that important.

Yes, but you can't bring up the efficiency argument against systemd and now
say it's not an argument. Furthermore, classical sysvinit is static. It doesn't
understand what to do when you insert a USB drive or when a wireless
network suddenly becomes available.
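
As a rough, Linux-only illustration of the event source this is about (not
how systemd is implemented internally, just the same kernel interface that
udev listens on), a few lines of Python can watch hotplug events arrive,
e.g. when a USB stick is plugged in; depending on the system this may need
root:

    import os
    import socket

    NETLINK_KOBJECT_UEVENT = 15   # from <linux/netlink.h>

    sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW,
                         NETLINK_KOBJECT_UEVENT)
    sock.bind((os.getpid(), 1))   # multicast group 1 = kernel uevents

    while True:
        data = sock.recv(8192)
        # Messages look like b"add@/devices/...\0ACTION=add\0SUBSYSTEM=usb\0..."
        print(data.split(b"\x00", 1)[0].decode(errors="replace"))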

Seriously, I absolutely don't understand why people think that they have to
stick to a 30+-year-old solution when the requirements have changed a lot. I
would think that no one would argue that a modern laptop, smartphone or
smart TV is the same as a DEC3000 running OSF/1 in a local network. The latter
doesn't know anything about dynamic networks, hardware and power management,
but the former do. systemd supports dynamic changes, sysvinit does not. It's
an absolute no-brainer what to use in 2019.

When I see systemd taking several
minutes to do something relatively simple like adding swap, that doesn't
strike me as particularly efficient, especially on older, slower systems.

It runs perfectly fine on my Amiga 4000/060/50 MHz.

Adrian


