Is this the kind of thing ... ?
Sorry, this might not be very realistic or well thought-out, it's just an idea that came to me. Since then, I've read up on Hurd, so I want to know three things:
1) Is this the right place for my posting?
2) Has this been done before?
3) Does this mean I'm mad?!
Anyway, here goes. I was going to call it 'Beyond', but maybe it's just Hurd spelt differently and with different letters?
The base assumption of all operating systems which boast any
features is the 'privileged account'. What happens if we remove it?
*** What might we end up with, were such a system implemented?
Something where no single person has power or responsibility for
controlling the entire system (short of physically destroying it).
*** How could that work?
** It sounds like chaos. You'd better have a good Security model!
In order to maintain security in such a system, a completely open
security model would be adopted. Log files would be
accessible to all.
This reduces the opportunities for cover-up, and increases the number
of eyes watching. Since many people `could' be watching, it reduces the
chances that a hacker can count on a single observer being absent.
*** So everybody can see everything. Doesn't that mean everybody can do anything?
This is where `security' comes in. So far, what has been described is
multi-user DOS, ie NO SECURITY.
In order for the system to be useful, users should not be able
to arbitrarily delete or move other users' (or the system's) files.
Passwords would be used for individual account property. System
property would require a 'many keys' approach, as used by 'Good
Guy' Batman to guard against Improper Use Of National Defense
Systems (TM) - inasmuch as the system is not capable of
automatically regulating itself,
eg for upgrading system files.
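The 'many keys' idea can be sketched as a simple k-of-n approval check. This is only an illustration (the function and user names are hypothetical, not part of any existing system): a privileged operation proceeds only when enough distinct key-holders have turned their keys.

```python
# Hypothetical sketch of a 'many keys' rule: a privileged system
# operation proceeds only when at least k distinct key-holders approve.

def many_keys_approved(approvals, holders, k):
    """approvals: set of user names who turned their 'key';
    holders: set of users entitled to hold a key;
    k: minimum number of distinct valid approvals required."""
    valid = approvals & holders          # ignore approvals from non-holders
    return len(valid) >= k

holders = {"alice", "bob", "carol", "dave"}
print(many_keys_approved({"alice", "bob"}, holders, 3))           # not enough keys
print(many_keys_approved({"alice", "bob", "carol"}, holders, 3))  # quorum reached
```

Note that approvals from users who do not hold a key are simply discarded, so compromising one non-holder account gains an attacker nothing.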
**** If users cannot delete other users' files, what happens when the system runs out of resources?
Presumably, system resource issues would generate alerts, which
users could optionally screen. The operating system would need to
know how to cope with resource starvation - for example by logging
off random users, or decreasing processing time allotted for
non-critical tasks.
It would then be for the users to cooperate in fixing the resource
issue, by deleting files, closing processes, or pooling their quids for
a callout to a service engineer to upgrade the system.
Unused accounts should be automatically scrapped after a certain period,
optionally being archived to some sort of non-local storage.
Older files could also be 'timed out' when not used for a certain period.
There are many options.
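One of those options: the escalation from alerts to forced log-offs could be driven by simple thresholds. A minimal sketch, in which every threshold and action string is illustrative rather than prescriptive:

```python
# Hypothetical sketch: escalate automatic responses as free disk space
# shrinks. Thresholds and action names are illustrative only.

def starvation_actions(free_fraction):
    """Return the automatic measures the system would take, mildest
    first, for a given fraction of free space remaining."""
    actions = []
    if free_fraction < 0.20:
        actions.append("raise alert to subscribed users")
    if free_fraction < 0.10:
        actions.append("time out files unused beyond the grace period")
    if free_fraction < 0.05:
        actions.append("archive and scrap unused accounts")
    if free_fraction < 0.02:
        actions.append("log off random users")
    return actions

print(starvation_actions(0.15))  # alert only
print(starvation_actions(0.01))  # all four measures, mildest first
```

The point of the ordering is that destructive measures only ever fire after the milder ones have already been triggered and users have had a chance to cooperate.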
**** You didn't say users couldn't `read' each other's files ...
No, they would be able to read each other's files. The system needs
to be as easy to maintain as possible, and I think the more
file-attributes are used, the more confusing maintenance becomes.
Pick one or two really dandy file-attributes, and stick to them.
Instead, encryption should be used when storing 'private' files. This
allows users infinite variety in their personal choice of an encryption
method for their personal files. This further reduces the impact of any
single successful account attack.
The encryption used should have an implementation which fits in
with the 'pipes/translators' concept, to make things flexible.
EG, if you want to store a script file which you want to be able to
run, but don't want others to be able to read, you can. To run it,
pipe it through the decryption algorithm to the interpreter.
This avoids having to recompile every program to embed the decryption
algorithm in it.
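The pipe idea can be sketched in a few lines. This is a toy illustration: XOR stands in for a real cipher, and the whole thing models the decrypt-then-interpret pipeline in one process rather than as actual Hurd translators.

```python
# Hypothetical sketch of the pipe idea: the stored script stays
# encrypted on disk; running it means decrypting the stream and feeding
# the plaintext to the interpreter, so no program needs the cipher
# compiled into it. XOR is a stand-in for a real cipher.

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Symmetric: applying it twice with the same key restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
script = b"result = 6 * 7\n"
stored = xor_stream(script, key)        # what sits on disk: unreadable

# 'Pipe it through the decryption algorithm to the interpreter':
namespace = {}
exec(xor_stream(stored, key).decode(), namespace)
print(namespace["result"])              # prints 42
```

In a translator-based system the decryption step would sit between the storage and the interpreter as a separate filter process, but the shape is the same.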
The encryption tools should provide the ability to store a
'file system' entirely in a file (optionally with padding
to disguise the actual size of the file(s)).
System files, by being readable to anyone, benefit from the same
'potential scrutiny' as log files (see above).
*** Weaknesses exposed thus far
So far, we have two points of weakness: we have to prevent others from
deleting our files, and we have to be able to trust the system not to
reveal how our files are encrypted when we decrypt them.
The second one is outwardly easiest to deal with. Offload the work to the
user's own PC/local terminal/anonymously redirected login (made
difficult to trace), where another physically remote system does the
decryption.
This works up to a point, but could get slow for large file-sizes.
For example, if the idea of a 'mountable' cryptographic 'file-system'
is used, then to speed things up, we might change a few bytes and write
them back. A large number of these small-scale writes might reveal, to
close analysis, a pattern of weakness in the encryption.
To counteract this, the cryptographic file-system would need to use
'sloppy' book-keeping. We would perhaps write out a few bytes, but, like
the ancient FAT file-system, we would leave the old bytes there for a
while, simply linking them out of the chain of the file. This way, it
would not be clear to a watcher whether new bytes were additional, or
replacing existing bytes, and if they knew they replaced existing ones,
they would not know which ones.
For this to work, the accounting records maintained in the 'file-system'
encrypted file need to be randomly distributed, and of random length. We
can't use an identifying string or character to find them. It might get
a bit tricky.
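The 'link old bytes out of the chain' idea can be sketched with a block list standing in for the disk. This is a hypothetical toy model, not a real file-system: overwriting part of a file appends a new block and relinks the chain, while the superseded bytes stay on disk.

```python
# Hypothetical sketch of 'sloppy' book-keeping: a replacement write
# appends new blocks and relinks the chain, but the superseded bytes
# remain on the 'disk', so a watcher cannot distinguish replacement
# writes from appends.

disk = []          # flat list of data blocks; never shrunk or erased
chain = []         # indices into `disk`, in file order

def append_block(data):
    disk.append(data)
    chain.append(len(disk) - 1)

def replace_block(pos, data):
    disk.append(data)                  # new bytes always go at the end
    chain[pos] = len(disk) - 1         # old block is linked out, not erased

def read_file():
    return b"".join(disk[i] for i in chain)

append_block(b"AAAA")
append_block(b"BBBB")
replace_block(0, b"CCCC")
print(read_file())                     # logical file: b'CCCCBBBB'
print(len(disk))                       # 3 blocks; stale b'AAAA' still present
```

The chain itself is the sensitive 'accounting record' the text mentions: anyone who can read it learns which blocks are live, which is exactly why it would need to be hidden among the random-looking data.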
(* Having now read about Hurd architecture, perhaps it is possible
under this system to build a trustworthy encryption/decryption
service which would run locally on the server, but not reveal its
workings? But you would need to be very confident in its security!)
Preventing others from deleting our files ... well, this has pretty much
been catered for, reasonably successfully, by many operating systems,
without much 'hang', so to speak (Thank you Jimi!). All we are doing is
making the system account autonomous to the max.
Effectively, what we want to do is set up a system admin account with
a daemon which runs scripts, then throw away the password. If we code
the daemon into the 'kernel', then, as with most operating systems
today, the only time this core kernel can be subverted is when the PC
is offline, by booting with a floppy.
** Physical integrity
The physical integrity of the system could be ensured, eg, by use of
CCTV (web) cameras; physical redundancy of the system-box itself
(ie two of them) would cater for physical repairs. UPS (battery
backup) systems would cater for power-outages. Other sources of
power could be found which are reliable for long power outages, or
the system could have a 'minimal power' core setup which preserves
these self-monitoring facilities in an emergency.
*** We may be able to do better.
1) Incorporate self-validation routines into the boot-phase of the
kernel. I know this is standard at least in NT (to a minimal degree),
but I can see no reason why it can't be improved upon.
2) Did I say 'throw away the password'? OK, well then, let's encrypt the
system area also, and make sure only the system knows its own
password. (Um .. this might be tricky).
3) Yeah! Now it self-assembles out of slime, checking that it is
its own slime and no-one else's, so all we have to add on is an
interface for submitting 'self-modification' jobs to it.
*** So far we are still assuming the PC which the user has logged
in from is 'privileged'
Hmmm ... I wonder if this is avoidable? And if so, does it matter?
Perhaps the user's PC is 'safe'.
No. If the multi-user system has 'open' security, then anyone can see
who is logging in, so to get their info, they just trace back to where a
person is logging in from, and hack them there, where it's probably a
lot easier! (We hope ;-)
So many open systems are needed, employing anonymous redirection. The
complexity of the interconnections needs to be great, so that compromising
even a percentage of the systems would not glean enough information to
hack a user in real-time.
This looks REAL TRICKY, and depends on multiple autonomous
systems being set up - we haven't even got one yet, so let's
think about that later.
** Setting up the system
During the initial installation of the system, single-user mode is
necessary. Privileged access is assumed.
Until successful prototypes have been deployed, and a way found to spawn
from system to system, I see no way around this.
** Social organisational parameters
In most countries it seems you can have a legally valid body which is a
conglomeration of individuals, either for profit ("Company") or
non-profit ("Committee").
The scheme outlined in this paper would be useful to either Company or
Committee, but most Companies have a hierarchical structure: expertise
for IT is delegated to one minority, and responsibility for
resource-allocation to another, both of whom are seen as requiring
executive power in their divisions, so the employment of this admittedly
anarchic configuration would be impractical in all but the most unusual
of them.
However, it may be easier to maintain. Since responsibility is
demonstrably shared by all users, it may in fact be easier to get
the users to 'clean up' after themselves - there is no way for
them to shift blame onto the IT department, if there is none.
But it is with collectives of non-aligned individuals that this
scheme may flourish best: people who are aggregated for no other reason
than to use the IT resources provided, for their own individual (or the
group's) needs.
Hence, I will concentrate on denoting a workable social framework in
which I believe the system can be successfully operated and maintained.
*** Who pays the rent?
I envisage that the system-setup should be entirely stand-alone and
internally self-supporting. No situation should arise which is not
catered for either automatically by the system, or by some user vote.
If it does, it will be because the users have failed or lost their
enthusiasm for the system itself, or because some outside force greater
than the users has intervened.
Concept basis: Users who are subscribed to receive system alerts of one
type or another are given voting rights on any system task
which needs user intervention.
The number of these tasks should be minimised, as the first priority of
any design or implementation of the system. But it is unavoidable in any
long-term project that occasions should arise, such as upgrades and
repairs, as well as ongoing funding issues - power and telecommunication
bills, for example.
Different levels of log-file detail to which a user subscribes could
carry different weightings in a vote; perhaps these weightings
accumulate with time, and deplete gradually when unused. Perhaps
log-on time should carry voting power, or a credit
system be employed where financial contribution to the system
increases voting power.
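One way such a weighting might combine: detail level and log-on time build the weight, and idleness depletes it. Every coefficient below is an arbitrary illustration, since the text deliberately leaves the exact scheme open to experiment.

```python
# Hypothetical sketch: a user's voting weight grows with the log-file
# detail level they subscribe to and with hours logged on, and decays
# exponentially while they stay away. All coefficients are illustrative.

def voting_weight(detail_level, hours_on, days_idle,
                  detail_coeff=2.0, hour_coeff=0.1, decay=0.9):
    base = detail_level * detail_coeff + hours_on * hour_coeff
    return base * (decay ** days_idle)   # gradual depletion when unused

active = voting_weight(detail_level=3, hours_on=100, days_idle=0)
idle = voting_weight(detail_level=3, hours_on=100, days_idle=7)
print(active)   # full weight for an active subscriber
print(idle)     # same history, but depleted by a week of absence
```

A financial-contribution term could be added in the same way, as another summand in `base`.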
This only seems fair. The trick is to experiment to find an arrangement
which:
1) encourages no more system slack than is needed to keep the system
running at highest efficiency and reliability,
2) encourages enough internal expertise, or funding for external
expertise, to keep the system at minimum downtime,
3) encourages growth of the system to cope with more users, or
alternatively discourages growth beyond the system's boundaries, while
still maintaining 1 & 2 above.
In order that external forces not be brought against the system to its
detriment, it seems prudent that social contracts bind users'
powers to legal use of the system; however, this aspect could be
arbitrated by fellow-users.
Guidelines could certainly be established based on legal facts as they
exist from country to country, and incorporated into a Handbook of Best
Use. My personal vision would be that while this process leaves vast
room for error and injustice, enough communities would be inaugurated
that if a user did not fit into one, they could move on to another.
Having strayed somewhat from the point, I will attempt to clarify
myself. The Operating System must automatically retrieve and tally
votes from the user community. It must also grant the user community the
option to vote away an individual user's voting rights.
Situations such as forced reclaim of storage space would be topics for
vote; thus, it could be a topic of vote, firstly to rescind a user's
voting rights, and secondly to reclaim that user's currently allocated
file-store (or vice versa!).
Because all users cannot always be expected to be actively paying
attention at any one time, votes would need to have a sliding threshold
for success, which also slides DOWN with time, for time-critical issues.
The sliding threshold would firstly be necessary to cater for differing
numbers of users. On a two-user system, it is not unreasonable that an
issue require a 100% vote to pass (degrading with time eventually to 50%,
and finally 0% if indicated by default behaviour settings), whereas any
statistician will tell you that out of 1000 users, at least one will
certainly be absent at any one time, so no vote from 1000 people is
likely to achieve 100% in any reasonable amount of time.
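The sliding threshold described above can be sketched as a function of community size and vote age. The starting formula, floor, and half-life are all hypothetical choices made for illustration (a real system might let the floor decay to the default-behaviour 0% the text mentions).

```python
# Hypothetical sketch of the sliding vote threshold: start near
# unanimity, scaled down for larger communities, and decay towards a
# floor as the vote ages, so absentees cannot stall it forever.

def pass_threshold(n_users, hours_open, floor=0.5, half_life=48.0):
    """Fraction of eligible votes required for the issue to pass."""
    # Tiny communities start at unanimity; large ones start lower,
    # since 100% of 1000 users is never realistically reachable.
    start = 1.0 if n_users <= 2 else max(floor, 1.0 - n_users / 1000.0)
    # Exponential decay from `start` towards `floor` as the vote ages.
    return floor + (start - floor) * 0.5 ** (hours_open / half_life)

print(pass_threshold(2, 0))      # two users, fresh vote: unanimity (1.0)
print(pass_threshold(1000, 0))   # large community starts at the floor
print(pass_threshold(2, 48))     # after one half-life the bar has dropped
```

Tuning `floor` and `half_life` per issue type would let time-critical votes relax faster than, say, a vote to rescind someone's voting rights.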
*** Administration sounds more complex on this system
Yes, it does. But we will endeavour to make our system as easy to
administer in the first place as possible, so that votes are only
necessary in emergencies. Depending on who sets up a particular
installation, voting may also take place after installation, in order to
establish certain ground-rules (default behaviours for the system to use
in an emergency if it receives no response from a vote).
These should be 'Sensible Choices', because the system should not, for
example, assume outside attack, scrub all hard-drives and then detonate
a stack of TNT, just because the patch-cable to the outside world has
been chewed through by rats. (See 'Big Al' of UK's 'Viz' comic, for more
examples of what NOT to do in perceived emergency situations).
For this reason, I do not believe that these types of systems could be
justifiably called 'a haven for criminals and outcasts, which by
providing anonymous hiding-places, encourage peddling in porn and
piracy' - any more than a home PC, video camera, tape-recorder, or paper
and pencil do.
Assuming anyone is free to join, if someone feels a strong need to
exercise their God-given right to tell others what it is they should be
doing, there is nothing to stop them from getting in, requesting maximum
log-file detail, and bloody well making sure that no-one `is' using it
as a gateway to some of the lower planes of hell 24 hours a day (God
forbid).
But this does raise a point: even a single user should be able to raise
an issue for vote. Let's face it, anyone who tries to misuse this
ability simply in order to create a nuisance, or use up users'/system
time, will quickly find themselves the topic of the next vote.
*** Should any users be prevented from raising a vote?
I don't think so. I think if other users don't want particular users to
raise issues for vote, then they might as well vote to remove their
voting rights.
** So that's how it works?