
Re: MIT discovered issue with gcc



The researchers' point was that an attacker might be able to map the memory page at address zero so that dereferencing a null pointer would NOT segfault. (I don't actually know how feasible this is; I'm just paraphrasing their argument. They footnote this claim, but I didn't bother to read the cited sources.)
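
For what it's worth, the technique they cite was real: on older Linux kernels an unprivileged process could map the page at address zero, so a kernel NULL dereference would read attacker-controlled memory instead of faulting. A minimal sketch of the idea, assuming such an old kernel -- on any modern system the vm.mmap_min_addr sysctl makes the mmap call fail:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask for the page at address 0. MAP_FIXED makes the address
     * non-negotiable; modern kernels refuse low mappings because of
     * vm.mmap_min_addr, so expect failure there. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");          /* the usual outcome today */
        return 1;
    }

    int *null_ptr = NULL;        /* address 0 is now backed by memory */
    *null_ptr = 42;              /* still undefined behavior in C, but
                                  * compiled at -O0 this is a plain
                                  * store and no longer faults */
    printf("*NULL = %d\n", *null_ptr);
    return 0;
}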

Checking whether tun is null is [apparently] a valid precautionary measure -- not useless -- except that an optimizer may remove it. The order of these statements is definitely wrong, but the authors' claim is that this optimization turns an otherwise innocuous bug into an exploitable vulnerability.
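
To make the optimizer's role concrete, here is a minimal sketch of the pattern, with hypothetical struct and function names (the kernel's real code differs):

/* Reduction of the pattern the paper discusses. */
struct sock { int refcnt; };
struct tun_struct { struct sock *sk; };

int tun_poll(struct tun_struct *tun)
{
    struct sock *sk = tun->sk;   /* dereference: the compiler may now
                                  * assume tun != NULL */
    if (!tun)                    /* provably dead under that assumption;
                                  * an optimizer may delete it */
        return -1;
    return sk->refcnt;
}

GCC's -fdelete-null-pointer-checks pass is what licenses the deletion; the Linux kernel now builds with -fno-delete-null-pointer-checks for exactly this reason.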

Anyway, I don't see what this has to do with Debian. It's an interesting paper, but Debian can't find and fix all upstream bugs, nor do I think most users would be happy if suddenly everything was compiled without any optimizations. 

--
Mark E. Haase

On Nov 23, 2013, at 10:09 AM, Robert Baron <robertbartlettbaron@gmail.com> wrote:

Aren't many of the constructs used as examples in the paper commonly used in C programming? For example, it is very common to see a function that takes a pointer parameter written as:

int func(void *ptr)
{
    if (!ptr)
        return SOME_ERROR;
    /* rest of function */
    return 1;
}

Isn't it interesting that their one example potentially dereferences the null pointer even before any compiler optimization (from the paper):

struct tun_struct *tun = ...;
struct sock *sk = tun->sk;   /* dereferences tun before any check */
if (!tun) return POLLERR;

The check that tun is non-null should occur before the dereference, as below. Quite frankly, checking afterwards is useless: by the time the check runs, tun cannot be the null pointer (or the program would already have crashed) -- which is precisely the reasoning the optimizer uses to delete it:

struct tun_struct *tun = ...;
if (!tun) return POLLERR;    /* check first... */
struct sock *sk = tun->sk;   /* ...then dereference */

I am under the impression that these problems are rather widely known among C programmers (though perhaps not among the kids fresh out of college). But this is why teams need to have experienced people.

Furthermore, it is very common to find code that works before optimization and fails at certain optimization levels. Recently, I was compiling a library that failed its own tests at the optimization level set in the makefile, but passed them at a lower optimization level.
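
A classic instance of that phenomenon, as a sketch (hypothetical function, not the library in question): signed overflow is undefined in C, so an optimizer may assume it never happens and fold an overflow test to a constant.

#include <limits.h>
#include <stdio.h>

/* At -O0 this typically returns 1 for x == INT_MAX (the addition wraps
 * on two's-complement hardware). At -O2 GCC may assume signed overflow
 * cannot happen, fold the comparison to 0, and the "working" check
 * silently disappears. */
static int add_would_overflow(int x)
{
    return x + 100 < x;   /* undefined if x + 100 overflows */
}

int main(void)
{
    printf("%d\n", add_would_overflow(INT_MAX));
    return 0;
}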

PS: I liked their first example, as it appears to be problematic.



On Sat, Nov 23, 2013 at 8:17 AM, Joel Rees <joel.rees@gmail.com> wrote:
Deja gnu?

On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
<andrew.mcglashan@affinityvision.com.au> wrote:
> Hi,
>
> The following link shows the issue in a nutshell:
>
> http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c
>
> [it refers to the PDF that I mentioned]
>
> --
> Kind Regards
> AndrewM

I seem to remember discussing the strange optimizations that optimized
away range checks because the code that was being firewalled "had to
be correct".
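
A sketch of the kind of range check Joel means (hypothetical function, not any particular project's code): pointer arithmetic that wraps past the end of an object is undefined, so the compiler may conclude the test can never be true and drop it.

#include <stddef.h>

/* Intended bounds check: detect 'buf + len' wrapping around the
 * address space. Because out-of-bounds pointer arithmetic is
 * undefined, an optimizer may assume the condition is always false
 * and remove the check entirely. */
int range_ok(char *buf, size_t len)
{
    if (buf + len < buf)
        return 0;          /* may be optimized away */
    return 1;
}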

Ten years ago, it was engineers that understood pointers but didn't
understand logic. This time around, maybe it's a new generation of
sophomoric programmers, or maybe we have moles in our ranks.

The sky is not falling, but it sounds like I don't want to waste my
time with Clang yet. And I probably need to go make myself persona
non grata again in some C language forums.

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.




