
Linux Kernel Security - Can it ever be 100%



Recent discussions in this group have raised as many questions for me as they have answered, and raised doubts in my mind that the security of the Linux kernel will ever be as good as I would like it to be. Is there a fundamentally simple and 100% effective way to stop kernel exploits, or will we be forever locked in a race between white hats and black hats? I am still hoping kernel security will eventually be as good as the security of a typical microprocessor instruction set. Throw a trillion bad inputs at an x86 microprocessor, and you will never take over the "supervisor" mode.

Here are some recent discussions, with question numbers in the right margin.

On Thu, 04 Dec 2003 20:50:17 +0100, Joey Hess <joeyh@debian.org> wrote:
>Dave wrote:
>> User: CallService DestroyFileSystem <victim's partition>
>> OS:   Sorry, no such service.
>> User: CallService 227
>> OS:   Sorry, no such service.
>> User: CallService 226
>> 226>  OpenForWrite <victim's filename>
>> Sorry, you don't have permission to write to someone else's files.
>> 226>  PokeMemory <some address>
>> Sorry, service 226 has no such command.
>> 226>  SaveThisData <very long string>
>> Sorry, your data exceeds the size of my buffer.
>> 226>
>
>You've just described the essence of the unix system call API. The only
>difference is that since using a syscall each time to access memory
>would be very slow, syscalls are instead used to set up memory regions, [1]
>which are protected by the processor's MMU and which processes cannot
>write outside of. cat /proc/self/maps
>
>Any API of this sort is still vulnerable to bugs in the validation of
>the data and commands though, such as the lack of bounds checking in the
>brk() hole. It's also vulnerable to bugs in the processor, such as the [2]
>old Intel f00f bug.

On Fri, 05 Dec 2003 03:00:15 +0100, Isaac To <kkto@csis.hku.hk> wrote:
>>>>>> "Dave" == Dave  <dmq@gci-net.com> writes:
>
>    Dave> So how many daemons and kernel routines need both root access and
>    Dave> input from a user process?
>
>Remember that *all* kernel routines are running in kernel-mode of the
>processor, i.e., having even higher permission than a normal root process.
>And most of the inputs taken by system calls are tainted with user inputs. [3]
>Even worse, the kernel is performance critical. Adding all of these, you'll [1]
>understand why it is so hard to make sure everything is correct.  That's why
>some people advocate micro-kernels, to reduce the "source of power" to a [4]
>very small code base that can be monitored in an easier way.  But we are not
>at that point yet, so the race between white-hat and black-hat hackers
>*will* continue.  In any case, even if we are in a micro-kernel like Hurd, a
>bug in the core servers (e.g., the authentication server, the filesystem
>server or the Unix API server) can easily give out arbitrary power to the
>user, so it is important to make sure core servers are bug-free in any case.
>The only question is "how much code is in the core servers".

Questions:

I'm raising these questions not to push my non-expert solutions to the problems of OS security, but to stimulate discussion and gain some understanding of the fundamental problems the kernel developers are facing. I know from my own projects in electronic design that what seems simple to the non-expert can be devilishly difficult.

[1] Are the parts of the kernel that need to be fast the same parts that read input from a user process, or can we say that 95% of user inputs could be passed through robust, standardized validation routines with no performance degradation noticeable to the user? E.g., when a user process says "here's a pointer to my data", should a kernel routine just start stuffing a buffer, or should it call a common routine that at least checks the length of the data to be stuffed? If "blind buffer stuffing" were limited to just those routines where the transaction is time-critical, then those few routines could get a lot of scrutiny for buffer overflows.
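A minimal sketch of the kind of common validation routine I have in mind. The name and interface here are hypothetical, invented for illustration; they are not real kernel API. The point is only that the claimed length is checked before any copying happens:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical common helper: copy user-supplied data into a
 * fixed-size kernel buffer only after checking the claimed length.
 * Returns 0 on success, -1 if the copy would overflow the buffer. */
static int checked_copy(char *dst, size_t dst_size,
                        const char *src, size_t src_len)
{
    if (src == NULL || src_len > dst_size)
        return -1;              /* reject before touching the buffer */
    memcpy(dst, src, src_len);  /* safe: length already validated */
    return 0;
}
```

Only the few time-critical routines that bypass a helper like this would then need line-by-line scrutiny for overflows.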

[2] It seems to me that a bounds check on any number from a user process could be a fast "getnumber( num, min, max )" routine. Again, any kernel routine that really couldn't live with this small delay, and had to process unvalidated user data, would get special scrutiny. I'm guessing two or three routines, but I really don't know.
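For illustration, such a getnumber() check might look like the sketch below (the name is mine, taken from the paragraph above; nothing like it is implied to exist in the kernel). It is a single comparison pair, so the cost is a few cycles:

```c
/* Hypothetical fast range check: accept num only if it lies
 * within [min, max] inclusive. Returns 1 if valid, 0 otherwise. */
static int getnumber(long num, long min, long max)
{
    return (num >= min && num <= max);
}
```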

[3] I'm not sure what is meant by "tainted inputs" here. My guess is we are talking about inputs that would pass simple validation checks (string lengths, number ranges, membership in an allowed set) but still cause a subtle problem later. It seems like that kind of problem would be very unlikely, and almost impossible to exploit for anything other than causing a crash. I'm comparing this situation to the comparable vulnerability with microprocessor instruction sets, and we don't see that happening.
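One concrete way such a problem can arise, sketched here under my own assumptions (this is a generic illustration, not a claim about any actual kernel routine): a user-supplied count can pass a simple range check yet still overflow a size computation downstream. A helper that also checks for the multiplication wrapping closes that hole:

```c
#include <limits.h>

/* Hypothetical helper: compute count * elem_size, rejecting the
 * case where the unsigned multiplication would wrap around.
 * Returns 0 and stores the product in *out on success, -1 on wrap. */
static int checked_mul(unsigned int count, unsigned int elem_size,
                       unsigned int *out)
{
    if (elem_size != 0 && count > UINT_MAX / elem_size)
        return -1;              /* product would wrap modulo 2^32 */
    *out = count * elem_size;
    return 0;
}
```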

[4] Linus rejected the micro-kernel architecture in 1991, and it seems *very* unlikely that the whole thing will be re-built at this late stage (although if MS can do it with "foghorn", I suppose nothing is impossible :>). What I'm thinking of is a much smaller overhaul, in which maybe 100 routines are modified to use the approved validation routines. Of those 100, there might be 10 high-priority ones, and the others could be done later. For routines which already do their own validation, the rework would be easy.

-- Dave



