
Re: Heartbleed (was ... Re: My fellow (Debian) Linux users ...)



On 4/17/2014 5:40 AM, Curt wrote:
On 2014-04-17, ken <gebser@mousecar.com> wrote:

Steve brings up a very good point, one often overlooked in our zeal for
getting so much FOSS for absolutely no cost.  Since we're all given the
source code, we're all in part responsible for it and for improving it.

I don't think the point is very good for the reasons outlined below (by
others).

http://www.datamation.com/open-source/does-heartbleed-disprove-open-source-is-safer-1.html

  Robin Seggelmann, the OpenSSL developer who claims responsibility for
  Heartbleed, says that both he and a reviewer missed the bug. He concludes that
  more reviewers are needed to avoid a repetition of the incident -- that there
  were not enough eyes in this case.

  Another conclusion that might be drawn from Seggelmann's account is that
  depending on developers to review their own work is not a good idea. Unless
  considerable time passes between the writing of the code and the review, the
  developers are probably too close to the code to be likely to observe the flaws
  in it.

  However, the weakness of Seggelmann's perspective is that the argument is
  circular: if Heartbleed was undiscovered, then there must not have been enough
  eyes on the code. The proof is in the discovery or the failure to discover,
  which is not exactly a useful argument.

  A more useful analysis has been offered by Theo de Raadt, the founder of
  OpenBSD and OpenSSH...

http://article.gmane.org/gmane.os.openbsd.misc/211963

(I'll quote most of de Raadt's usenet article--hope nobody minds).

  So years ago we added exploit mitigation countermeasures to libc
  malloc and mmap, so that a variety of bugs can be exposed.  Such
  memory accesses will cause an immediate crash, or even a core dump,
  then the bug can be analyzed, and fixed forever.

  Some other debugging toolkits get them too.  To a large extent these
  come with almost no performance cost.

  But around that time OpenSSL added a wrapper around malloc & free so
  that the library will cache memory on its own, and not free it to the
  protective malloc.

  You can find the comment in their sources ...

  #ifndef OPENSSL_NO_BUF_FREELISTS
  /* On some platforms, malloc() performance is bad enough that you can't just

  OH, because SOME platforms have slow malloc() performance, it means
  that even if you build protective technology into malloc() and
  free(), it will be ineffective.  And it is ineffective on ALL
  PLATFORMS, because that option is the default, and Ted's tests show
  you can't turn it off: they haven't tested without it in ages.

  So then a bug shows up which leaks the content of memory mishandled by
  that layer.  If the memory had been properly returned via free, it
  would likely have been handed to munmap, and triggered a daemon crash
  instead of leaking your keys.

  OpenSSL is not developed by a responsible team.



(Sorry, a bit long here).

This is a totally irresponsible post, showing the OP knows very little about programming.

It doesn't matter whether the malloc() wrappers were replaced or not. The application gets memory from the OS in one or more pages (the exact number depends on several parameters), and malloc() then subdivides that allocation for application use.
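To make that concrete, here is a minimal sketch of the idea, assuming POSIX mmap(); the names are made up, and a real malloc is far more sophisticated, but the shape is the same:

    /* Hypothetical sketch: the allocator grabs one big block from the
     * OS with mmap() and subdivides it with a simple bump pointer.
     * Real malloc implementations are far more sophisticated. */
    #include <stdio.h>
    #include <sys/mman.h>

    #define POOL_SIZE (256 * 1024)      /* one 256K request to the OS */

    static char  *pool;                 /* start of the OS-provided block */
    static size_t used;                 /* bytes handed out so far */

    static void *toy_malloc(size_t n)
    {
        if (pool == NULL) {             /* first call: get pages from the OS */
            pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (pool == MAP_FAILED)
                return NULL;
        }
        n = (n + 15) & ~(size_t)15;     /* keep 16-byte alignment */
        if (used + n > POOL_SIZE)
            return NULL;                /* a real allocator would get more pages */
        void *p = pool + used;
        used += n;
        return p;
    }

    int main(void)
    {
        char *p = toy_malloc(4);        /* the application asks for 4 bytes... */
        printf("4-byte request satisfied at %p inside a %d-byte OS block\n",
               (void *)p, POOL_SIZE);
        return 0;
    }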

Now this is key - it really doesn't matter whether the memory has been subdivided or not: if the application has a pointer into the memory, it can access that memory (this has been the source of MANY bugs in C and C++). That access goes directly to memory; it does not call malloc() or any other library code.

As an example: the application calls malloc() with a request for 4 bytes of memory. malloc(), seeing there is currently no free space available for the application, requests 256K of memory from the OS (so it has extra for the next request). malloc() then returns a pointer to (very near) the start of that memory, with room for 4 bytes of application data.

But the application now has a pointer and can directly access any memory in that 256K block, and since no library code is involved, nothing will catch the problem. Only if the program tries to access memory beyond the 256K block will there be a problem: the CPU will fault on the invalid address and notify the OS, which will typically terminate the application.
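You can demonstrate this with a few lines of C. The snippet below is deliberately undefined behavior, shown only to illustrate the point; on a typical glibc system it runs to completion, though exactly where such a read would finally fault depends on the allocator:

    /* Illustration only: this is deliberate undefined behavior.  Reading
     * past the end of a 4-byte allocation typically does NOT fault as
     * long as the access stays inside pages the allocator already owns. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *secret = malloc(64);
        char *buf    = malloc(4);
        if (secret == NULL || buf == NULL)
            return 1;
        strcpy(secret, "private key material");

        /* The process holds a valid pointer into its own heap, so nothing
         * stops it from reading far beyond the 4 bytes it asked for.  No
         * library call is involved, so no library check can fire; the CPU
         * only faults if the read crosses into an unmapped page, and
         * whether that happens depends entirely on the allocator's layout. */
        for (int i = 0; i < 4096; i++) {
            volatile char c = buf[i];   /* out of bounds once i >= 4 */
            (void)c;
        }
        printf("read 4096 bytes through a 4-byte allocation, no crash\n");

        free(buf);
        free(secret);
        return 0;
    }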

So the whole premise on which his "not responsible team" conclusion rests is complete crap. He is the irresponsible one here.
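For reference, the wrapper de Raadt is complaining about amounts to nothing more than a freelist cache. Here is a hypothetical sketch of the pattern (not OpenSSL's actual code):

    /* Hypothetical sketch of a freelist-style malloc wrapper (NOT
     * OpenSSL's actual code): released buffers are cached and recycled
     * instead of being returned to free(), so the system allocator
     * never sees them again. */
    #include <stdlib.h>

    #define BUF_SIZE 1024               /* fixed-size buffers for simplicity */

    struct freebuf {
        struct freebuf *next;           /* link stored in the buffer itself */
    };

    static struct freebuf *freelist;    /* cache of released buffers */

    static void *wrap_malloc(void)
    {
        if (freelist != NULL) {         /* reuse a cached buffer... */
            struct freebuf *b = freelist;
            freelist = b->next;
            return b;                   /* ...with its old contents intact */
        }
        return malloc(BUF_SIZE);
    }

    static void wrap_free(void *p)
    {
        /* Never handed back to the system: the real free()/munmap() path,
         * and any protective checks living there, are bypassed entirely. */
        struct freebuf *b = p;
        b->next = freelist;
        freelist = b;
    }

    int main(void)
    {
        char *a = wrap_malloc();
        if (a == NULL)
            return 1;
        a[100] = 'S';                   /* pretend this held sensitive data */
        wrap_free(a);                   /* cached, not freed */
        char *b = wrap_malloc();        /* the same buffer comes right back */
        return b[100] == 'S' ? 0 : 1;   /* old contents are still there */
    }

Buffers recycled this way do keep their old contents, but as explained above, an overread inside a live block never reaches free() or munmap() in the first place.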

As a side note - I've been programming for about 47 years now (including several years at IBM) and have managed many projects, both small and large. One thing I've found is that people aren't perfect. There are ALWAYS bugs in the code. Good eyes and good QC measures (including code test suites) will catch a lot of bugs. But it doesn't matter how many eyes you have on the code, or how many test suites you run the code through: ANY non-trivial program is likely to have bugs (the last figure I heard was around 1 *serious* bug for every 1K LOC in released code). Could this bug have been caught? Definitely. Should this bug have been caught? Maybe - it's fairly subtle, and probably only someone who has been burned by buffer overruns in the past would have caught it.

It's unfortunate that this happened, and definitely not good. But it does not indicate any developers were irresponsible.

Jerry

