
Bug#727708: Init should be simple, secure, and get out of the way. It should not take over the system. We should not be forced to use one that does.



On Sat, 1 Feb 2014 19:11:52 -0500 (EST)
Thilos Rich <thilos.rich@aol.com> wrote:

> Init should be simple, secure, and get out of the way. It should not take over the system. We should not be forced to use an init that does.
> 
> This man said it best:
> wizardofbits.tumblr.com/post/45232318557/systemd-more-like-shit-stemd
> 
> "
> Init has one other job, which is to keep the process tables clean. See, any process can create a copy of itself (called “forking” (don’t laugh) in Unix terminology); this is usually a precursor to loading some other program. Any process that runs to completion can deliver a status code to the process that created it; the creating (or parent) process is said to “wait” on the status code of the created (or child) process. But what happens if the parent process dies before the child does? What happens is that init is designated to be the adoptive parent of the “orphaned” process, and it waits for, and discards, any status code that may be returned by the orphan on exit. This is to prevent “zombie processes” – process table slots that are filled with status codes but have no running programs attached to them. They are undesirable because they take up process table space that could be used by actual, running programs.
> 
> 
> So it is important that init run well and not crash.
> 
> 
> Now, in Unix system design, it is a generally understood principle that a big task not be handled by a big program, but rather a collection of small programs, each tackling one specific, well-defined component of the larger task. You often hear the phrase “do one thing, and do it well” as a guiding principle for writing a Unix program. One major reason for this is that a small program has fewer places for bugs to hide than a big program does.
> "

Real power is in communicability, not in monolithic software, not
even in modular software. It's like 2^N: 2 is a small number, but
with enough bits you can represent an enormous range of numbers:
0, 1, 2, ..., 2^N-1.
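
To make the 2^N point concrete, here is a trivial C illustration (my
own sketch, not from the thread) of how many distinct values N
two-state bits can encode:

    #include <stdio.h>

    int main(void) {
        /* N independent two-state parts combine into 2^N distinct
           configurations: exponential reach from trivial pieces. */
        for (unsigned n = 1; n <= 16; n++)
            printf("%2u bits -> %6lu values (0 .. %lu)\n",
                   n, 1UL << n, (1UL << n) - 1);
        return 0;
    }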

Another example comes from the theory of approximation: a function f
is represented by a function g. If you approximate f with g on an
interval (a,b), then g will start to diverge very rapidly as soon as
x gets outside (a,b).
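
As a concrete illustration of that divergence (my example, not the
poster's): a degree-4 Taylor polynomial of exp(x) around 0 tracks the
function closely near 0, but the error explodes once x leaves that
neighbourhood:

    #include <stdio.h>
    #include <math.h>

    /* Degree-4 Taylor polynomial of exp(x) around x = 0. */
    static double taylor_exp4(double x) {
        return 1 + x + x*x/2 + x*x*x/6 + x*x*x*x/24;
    }

    int main(void) {
        const double xs[] = { 0.5, 1.0, 2.0, 4.0, 8.0 };
        for (int i = 0; i < 5; i++) {
            double x = xs[i];
            printf("x = %4.1f  exp(x) = %10.3f  approx = %9.3f  "
                   "error = %10.3f\n",
                   x, exp(x), taylor_exp4(x), exp(x) - taylor_exp4(x));
        }
        return 0;      /* compile with -lm */
    }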

When a single piece of software tries to cover all cases, it can
never achieve that goal. It can be made to work perfectly for 99% of
users, but the remaining 1% will have big problems.

Such an application not only fails to provide infrastructure for the
remaining cases, it can even become useless, and the typical solution
is to replace it with other software that is incomparably less
powerful, yet much more usable for the particular purpose. Moreover,
such applications are bad psychologically, because the user is
frustrated: he has all that power in his hands, and in spite of that
he cannot achieve the thing he has in mind.

--
https://en.wikipedia.org/wiki/Machiavellianism
http://markorandjelovic.hopto.org

