
Is this the kind of thing ... ?



Sorry, this might not be very realistic or well thought-out; it's just an idea that came to me. Since then, I've read up on the Hurd, so I want to know three things:

1) Is this the right place for my posting?

2) Has this been done before?

3) Does this mean I'm mad?!

Anyway, here goes. I was going to call it 'Beyond', but maybe it's just Hurd spelt differently and with different letters?

_The base assumption of all operating systems which boast any security features is the 'privileged account'. What happens if we take away this assumption?_


*** What might we end up with, were such a system implemented?

Something where no single person has power or responsibility for
controlling the entire system (short of physically destroying it
on-site!).

*** How could that work?

See below

** It sounds like chaos. You'd better have a good Security model!

In order to maintain security in such a system, a completely open-plan security model would be adopted. Log files would be accessible to all.

This reduces the opportunities for cover-up, and increases the number of eyes watching. Since many people `could' be watching, it reduces the chances that a hacker can count on a single administrator's habits.

*** So everybody can see everything. Doesn't that mean everybody can delete anything?

This is where `security' comes in. So far, what has been described is multi-user DOS, ie NO SECURITY.

In order for the system to be useful, users should not be allowed to arbitrarily delete or move other users' (or the system's) files. Passwords would be used for individual account property. System property would require a 'many keys' approach, as used by 'Good Guys' in Batman to guard against Improper Use Of National Defense Missile Systems (TM), inasmuch as the system is not capable of automatically regulating itself, eg for upgrading system files.
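
To make the 'many keys' idea concrete, here is a minimal sketch in Python; the quorum size, names, and interface are all my own invention, not a description of any existing mechanism:

    # Hypothetical sketch: an operation on system property proceeds only
    # when at least QUORUM distinct, recognised key-holders approve it.
    QUORUM = 3

    def may_modify_system_property(approvals, key_holders):
        """True if enough distinct recognised key-holders have approved."""
        return len(set(approvals) & key_holders) >= QUORUM

    # Example: two approvals are not enough; three are.
    holders = {"alice", "bob", "carol", "dave"}
    assert not may_modify_system_property({"alice", "bob"}, holders)
    assert may_modify_system_property({"alice", "bob", "carol"}, holders)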

**** If users cannot delete other users' files, what happens when space runs out?

Presumably, system resource issues would generate alerts, which users could optionally screen. The operating system would need to know how to cope with resource starvation - for example by logging off random users, or decreasing processing time allotted for non-essential tasks.

It would then be for the users to cooperate in fixing the resource issue, by deleting files, closing processes, or pooling their quids for a callout to a service engineer to upgrade the system.

Unused accounts should be automatically scrapped after a certain period, optionally being archived to some sort of non-local storage first.

Older files could also be 'timed out' when not used for a certain period.

There are many options.
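
As a rough sketch of how such default coping behaviour might be expressed (the thresholds and the actions, passed in as callables, are purely assumed examples, not a real policy):

    import random

    # Illustrative sketch of graduated responses to resource starvation.
    def respond_to_disk_pressure(free_fraction, users, alert, throttle, log_off):
        if free_fraction < 0.10:
            alert("disk space below 10% - user intervention requested")
        if free_fraction < 0.05:
            throttle("non-essential")      # cut time for non-essential tasks
        if free_fraction < 0.01 and users:
            log_off(random.choice(users))  # last resort: log off a random user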

**** You didn't say users couldn't `read' each other's files ...

No, they would be able to read each other's files. The system needs to be as easy to maintain as possible, and I think the more file-attributes used, the more confusing maintenance becomes. Pick one or two really dandy file-attributes, and stick to them.

Instead, encryption should be used when storing 'private' files. This allows users infinite variety in their personal choice of a security method for their personal files. This further reduces the impact of any single successful account attack.

The encryption should use an implementation which fits nicely with the 'pipes/translators' concept, to keep things simple.

For example, if you want to store a script file which you want to be able to run, but don't want others to be able to read, you can. To run it, you pipe it through the decryption algorithm to the interpreter/compiler of your choice.

This avoids having to recompile every program to embed the decryption algorithm in it.
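
In shell terms this is just 'decrypt | interpret'. A minimal Python sketch of the same pipeline, assuming gpg as the decryption tool and a hypothetical file name, so the plaintext never touches the disk:

    import subprocess

    # Decrypt a stored script and feed the plaintext straight into an
    # interpreter. 'gpg --decrypt' stands in for whatever decryption
    # translator is chosen; 'myscript.py.gpg' and python3 are examples.
    with subprocess.Popen(["gpg", "--decrypt", "myscript.py.gpg"],
                          stdout=subprocess.PIPE) as decrypt:
        subprocess.run(["python3", "-"], stdin=decrypt.stdout, check=True)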

The encryption tools should provide the ability to store a miniature 'file system' entirely in a file (optionally with embedded 'free-space' to disguise the actual size of the file(s)).

System files, by being readable to anyone, benefit from the same random 'potential scrutiny' as log files (see above).

*** Weaknesses exposed thus far

So far, we have two points of weakness: we have to prevent others from deleting our files, and we have to be able to trust the system not to reveal how our files are encrypted when we decrypt them.

The second one is outwardly easiest to deal with. Offload the work to the user's own PC/local terminal/anonymously redirected login which is difficult to trace, where another physically remote system does the number crunching.

This works up to a point, but could get slow for large file-sizes. For example, if the idea of a 'mountable' cryptographic 'file-system' file is used, then to speed things up, we might change a few bytes and write them back. A large number of these small-scale writes might reveal to close analysis a pattern of weakness in the encryption used.

To counteract this, the cryptographic file-system would need to use 'sloppy' book-keeping. We would perhaps write out a few bytes, but like the ancient FAT file-system, we would leave the bytes there for a while, simply linking them out of the chain of the file. This way, it would not be clear to a watcher whether new bytes were additional, or replacing existing bytes, and if they knew they replaced existing ones, they would not know which ones.
 
For this to work, the accounting records maintained in the 'file-system' encrypted file need to be randomly distributed, and of random length. We can't use an identifying string or character to find them. It might get a bit tricky.
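
A toy sketch of the 'sloppy' book-keeping idea (the structure and names are mine, invented for illustration, not any real file-system's):

    # New data is always appended to the container; stale extents are
    # left in place and merely unlinked from the file's chain, so a
    # watcher cannot tell replacements from additions.
    class SloppyFile:
        def __init__(self):
            self.store = bytearray()  # raw bytes of the encrypted container
            self.chain = []           # ordered list of (offset, length) extents

        def append(self, data):
            self.chain.append((len(self.store), len(data)))
            self.store += data

        def replace_extent(self, index, data):
            # Write the replacement at the end of the store; the old
            # bytes stay where they are, only the chain entry is relinked.
            self.chain[index] = (len(self.store), len(data))
            self.store += data

        def read(self):
            return b"".join(self.store[o:o + n] for o, n in self.chain)

    f = SloppyFile()
    f.append(b"hello world")
    f.replace_extent(0, b"HELLO WORLD")  # stale 'hello world' bytes remain
    assert f.read() == b"HELLO WORLD"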

(* Having now read about the Hurd architecture, perhaps it is possible under this system to build a trustworthy encryption/decryption service which would run locally on the server, but not reveal its workings? But you would need to be very confident in its security!)

Preventing others from deleting our files ... well, this has pretty much been catered for reasonably successfully by many operating systems, without much 'hang', so to speak (Thank you, Jimi!). All we are doing is making the system account autonomous to the max.

Effectively, what we want to do is set up a system admin account running a daemon which runs scripts, then throw away the password. If we code the daemon into the 'kernel', then as with most successful 'secure' operating systems today, the only time this core kernel can be subverted is when the PC is offline, by booting with a floppy.

** Physical integrity

The physical integrity of the system could be ensured, eg by use of CCTV (web) cameras. Physical redundancy of the system-box itself (ie two of them) would cater for physical repairs. UPS (battery backup) systems would cater for power-outages. Other sources of power could be found which are reliable for long power outages, or the system could have a 'minimal power' core setup which preserves these self-monitoring facilities in an emergency.

*** We may be able to do better.

1) Incorporate self-validation routines into the boot-phase of the kernel (a sketch follows this list). I know this is standard at least in NT (to a minimal extent), so I can see no reason why it can't be improved upon.

2) Did I say 'throw away the password'? OK, well then, let's encrypt the system area also, and make sure only it knows its own password. (Um .. this might be tricky).

3) Yeah! Now it self-assembles out of slime, checking that it recognises its own slime and no-one else's, so all we have to add on is an interface for submitting 'self-modification' jobs, to enable upgrades.
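
Here is a minimal sketch of the self-validation idea from point 1, assuming a manifest of known-good hashes recorded at install time; the use of SHA-256 and this interface are assumptions for illustration only:

    import hashlib

    # Hash each core file and compare against the recorded manifest.
    def fingerprint(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def validate_core_files(manifest):
        """manifest: dict mapping file path -> expected hex digest."""
        bad = [p for p, h in manifest.items() if fingerprint(p) != h]
        if bad:
            raise SystemExit("self-validation failed: " + ", ".join(bad))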

*** So far we are still assuming the PC which the user has logged in from is 'privileged'

Hmmm ... I wonder if this is avoidable? And if so, does it matter?
Perhaps the user's PC is 'safe'.

No. If the multi-user system has 'open' security, then anyone can see who is logging in, so to get their info, they just trace back to where a person is logging in from, and hack them there, where it's probably a lot easier! (We hope ;-)

So, many open systems are needed, employing anonymous redirection. The complexity of the interconnections needs to be great, so that capturing even a percentage of the systems would not glean enough information to hack a user in real-time.

This looks REAL TRICKY, and depends on multiple autonomous anonymous systems being set up - we haven't even got one yet, so let's think about it later.

** Setting up the system

During the initial installation of the system, single-user mode would be necessary. Privileged access is assumed.

Until successful prototypes have been deployed, and a way to spawn from system to system found, I see no way around this.

** Social organisational parameters

In most countries it seems you can have a legally valid body which is a conglomeration of individuals, either for profit ("Company") or non-profit ("Committee").

The scheme outlined in this paper would be useful to either Company or Committee. But most Companies have a hierarchical structure, where expertise for IT is delegated to a minority and, likewise, responsibility for resource-allocation to another minority, both of whom are seen as requiring executive power in their divisions; so the employment of this admittedly anarchic configuration would be impractical in all but the most unusual of circumstances.

However, it may be easier to maintain. Since responsibility is demonstrably shared by all users, it may in fact be easier to induce the users to 'clean up' after themselves - there is no opportunity for them to shift blame onto the IT department, if there is none.

But it is with collectives of non-aligned individuals that this scheme may flourish best: people who are aggregated for no other reason than to use the IT resources provided, for their own individual (or individual group's) needs.

Hence, I will concentrate on outlining a workable social framework within which I believe the system can be successfully operated and maintained.

*** Who pays the rent?

I envisage that the system-setup should be entirely stand-alone and internally self-supporting. No situation should arise which is not catered for either automatically by the system, or by some inaugurated decision-making process.

If it does, it will be because the users have failed or lost their need for the system itself, or because some outside force greater than the users has intervened.

Concept basis: Users who are subscribed to receive system alerts of one type or another are given voting rights on any system maintenance task which needs user intervention.

The number of these tasks should be minimised, as the first priority in any design or implementation of the system. But it is unavoidable in any long-term project that occasions will arise, such as upgrades and repairs, as well as ongoing funding issues - power and telecommunication costs.

Different levels of log-file detail to which a user subscribes could carry different weightings in a vote; perhaps these weightings would accumulate with time, and deplete gradually when unsubscribed. Perhaps log-on time should carry voting power, or a system be employed where financial contribution to the system increases voting power.
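
One hypothetical shape such a weighting formula could take (the rates, detail levels, and linear form are invented purely to show the idea):

    # Weight accumulates with subscription time (more log detail, faster
    # gain) and depletes gradually after unsubscribing.
    GAIN_PER_DAY = {1: 0.1, 2: 0.3, 3: 1.0}   # detail level -> daily gain

    def vote_weight(detail_level, days_subscribed, days_unsubscribed=0):
        gained = GAIN_PER_DAY.get(detail_level, 0.0) * days_subscribed
        decayed = 0.5 * days_unsubscribed      # gradual depletion
        return max(gained - decayed, 0.0)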

This only seems fair. The trick is to experiment to find an appropriate formula which:
1) encourages no more system slack than is needed to keep the system running at highest efficiency and reliability,
2) encourages enough internal expertise, or funding for external expertise, to keep the system at minimum downtime,
3) encourages growth of the system to cope with more users, or alternatively discourages growth beyond the system's boundaries, while still maintaining 1 & 2 above.

In order that external forces not be brought against the system to its detriment, it seems prudent that social contracts bind users' voting powers to legal use of the system; however, this aspect could only be arbitrated by fellow-users.

Guidelines could certainly be established based on legal facts as they exist from country to country, and incorporated into a Handbook of Best Use. My personal vision would be that while this internal arbitration process leaves vast room for error and injustice, enough communities would be inaugurated that if a user did not fit into one, they could move on to another.

Having strayed somewhat from the point, I will attempt to clarify myself. The Operating System must automatically retrieve and evaluate votes from the user community. It must also grant the user community the option to vote away an individual user's voting rights.

Situations such as forced reclaim of storage space would be topics of vote. Thus, it could be a topic of vote firstly to rescind a user's voting rights, and secondly to reclaim that user's currently allocated file-store (or vice versa!).

Because not all users can be expected to be actively paying attention at any one time, votes would need to have a sliding threshold for success, which also slides DOWN with time for time-critical issues.

The sliding threshold would firstly be necessary to cater for differing numbers of users. On a two-user system, it is not unreasonable that an issue require a 100% vote to pass (degrading with time eventually to 50%, and finally 0% if indicated by default behaviour settings), whereas any statistician will tell you that out of 1000 users, at least one will certainly be absent at any one time, so no vote from 1000 people is likely to achieve 100% in any reasonable amount of time.
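
A minimal sketch of such a sliding threshold, matching the 100% / 50% / 0% degradation described above (the 48-hour window is an assumed example):

    # The fraction of votes needed to pass decays linearly from 'start'
    # to 'floor' over the allowed time, 'floor' being whatever the
    # default behaviour settings indicate.
    def pass_threshold(hours_elapsed, hours_allowed, start=1.0, floor=0.0):
        if hours_elapsed >= hours_allowed:
            return floor
        remaining = 1.0 - hours_elapsed / hours_allowed
        return floor + (start - floor) * remaining

    # 100% at the start, 50% halfway, and the default (0%) once time is up.
    assert pass_threshold(0, 48) == 1.0
    assert pass_threshold(24, 48) == 0.5
    assert pass_threshold(48, 48) == 0.0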

*** Administration sounds more complex on this system

Yes, it does. But we will endeavour to make our system as easy to administer as possible in the first place, so that votes are only necessary in emergencies. Depending on who sets up a particular system, voting may also take place after installation, in order to establish certain ground-rules (default behaviours for the system to use in an emergency if it receives no response from a vote).

These should be 'Sensible Choices', because the system should not, for example, assume outside attack and scrub all hard-drives then detonate a stack of TNT, just because the patch-cable to the outside world has been chewed through by rats. (See 'Big Al' of UK's 'Viz' comic, for more examples of what NOT to do in perceived emergency situations).

For this reason, I do not believe that these types of systems could be justifiably called 'a haven for criminals and outcasts, which by providing anonymous hiding-places, encourage peddling in porn and piracy'. Any more than a home PC, video camera, tape-recorder, or paper and pencil do.

Assuming anyone is free to join, if someone feels a strong need to exert their God-given right to tell others what it is they should be doing, there is nothing to stop them from getting in, requesting maximum log-file detail, and bloody well making sure that no-one `is' using it as a gateway to some of the lower planes of hell 24 hours a day (God forbid!).

But this does raise a point: even a single user should be able to raise an issue for vote. Let's face it, anyone who tries to misuse this power simply in order to create a nuisance or use up users'/system time will quickly find themselves the topic of the next vote.

*** Should any users be prevented from raising a vote?

I don't think so. I think if other users don't want particular users to raise issues for vote, then they might as well remove those users altogether.

** So that's how it works?

Yes.


