
Re: MIT discovered issue with gcc



[Not sure this really needs to be cc-ed to security@]

On Sun, Nov 24, 2013 at 12:09 AM, Robert Baron
<robertbartlettbaron@gmail.com> wrote:
> Aren't many of the constructs used as examples in the paper commonly
> used in C programming?  For example, it is very common to see a function
> that has a pointer parameter defined as:
>
> int func(void *ptr)
>     {
>     if(!ptr) return SOME_ERROR;
>     /* rest of function */
>     return 1;
>     }
>
> Isn't it interesting that one of their examples will potentially dereference
> a null pointer even before any compiler optimizations (from the paper):
>
> struct tun_struct *tun=....;
> struct sock *sk = tun->sk;
> if (!tun) return POLLERR;
>
> The check that tun is non-null should occur before use; quite frankly,
> it is useless to check afterwards, since if the program hasn't crashed
> then tun cannot have been a null pointer:

This one has been thrashed to death.

Yes, the standard (after considerable reworking overseen by certain
groups with an axe to grind) says that not only is dereferencing a
pointer before testing it evil (i.e., undefined behavior), but even
adding an offset to a pointer before testing it is evil.
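
To be concrete, here is a hedged sketch of my own (not code from the
paper) of that second case: the arithmetic on the first line is already
undefined if the pointer is NULL, so the compiler is entitled to assume
the pointer is non-NULL and drop the test that follows.

#include <stddef.h>

int sum(const int *a, size_t n)
{
    const int *end = a + n;   /* pointer arithmetic before the test:    */
                              /* undefined if a is NULL, so a compiler  */
    if (!a)                   /* may assume a != NULL and delete this   */
        return 0;

    int s = 0;
    for (const int *p = a; p != end; ++p)
        s += *p;
    return s;
}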

Committees really should not be allowed to define language semantics.
Make suggestions, sure, but actually define them, no.


> struct tun_struct *tun=....;
> if (!tun) return POLLERR;
> struct sock *sk = tun->sk;

Yes, this arrangement is less liable to induce error on the part of
the programmer.

The compiler should be immune to such issues of induced error,
especially if it is able to reliably optimize out theoretically
undefined code (which is seriously, seriously evil).
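
Reduced to a minimal sketch (my own illustration, with simplified
struct names, not the kernel's actual code), the pattern quoted above
goes wrong like this: because tun is dereferenced on the first line,
the optimizer may treat tun as provably non-NULL and silently remove
the later test.

struct sock       { int refcnt; };
struct tun_struct { struct sock *sk; };

int tun_poll(struct tun_struct *tun)
{
    struct sock *sk = tun->sk;   /* dereference: undefined if tun is NULL */
    if (!tun)                    /* gcc -O2 may delete this test as       */
        return -1;               /* unreachable, given the line above     */
    return sk->refcnt;
}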

> I am under the impression that these problems are rather widely known among
> C programmers (perhaps not the kids fresh out of college).  But this is why
> teams need to have experienced people.
>
> Furthermore, it is very common to find code that works before optimization,
> and fails at certain optimization levels.  Recently, I was compiling a
> library that failed its own tests under the optimization level set in the
> makefile but passed its own test at a lower level of optimization.

Completely separate issue.

> PS: I liked their first example, as it appears to be problematic.

As I noted (too obliquely, perhaps?) in the comments you top-posted
over, this is nothing at all new. The holy grail of optimization has
been known to induce undefined behavior in compiler writers since way
before B or even Algol.

The guys responsible for optimization sometimes forget that falsifying
an argument is not falsifying the conclusion, among other things.

> On Sat, Nov 23, 2013 at 8:17 AM, Joel Rees <joel.rees@gmail.com> wrote:
>>
>> Deja gnu?
>>
>> On Sat, Nov 23, 2013 at 10:34 AM, Andrew McGlashan
>> <andrew.mcglashan@affinityvision.com.au> wrote:
>> > Hi,
>> >
>> > The following link shows the issue in a nutshell:
>> >
>> >
>> > http://www.securitycurrent.com/en/research/ac_research/mot-researchers-uncover-security-flaws-in-c
>> >
>> > [it refers to the PDF that I mentioned]
>> >
>> > --
>> > Kind Regards
>> > AndrewM
>>
>> I seem to remember discussing the strange optimizations that optimized
>> away range checks because the code that was being firewalled "had to
>> be correct".
>>
>> Ten years ago, it was engineers that understood pointers but didn't
>> understand logic. This time around, maybe it's a new generation of
>> sophomoric programmers, or maybe we have moles in our ranks.
>>
>> The sky is not falling, but it sounds like I don't want to waste my
>> time with Clang yet. And I probably need to go make myself persona
>> non grata again in some C language forums.
>>
>> --
>> Joel Rees
>>
>> Be careful where you see conspiracy.
>> Look first in your own heart.
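
For anyone who missed that earlier round: the range checks in question
looked roughly like the sketch below (my own hedged illustration, not a
quote from any particular project). Because signed overflow is
undefined, the compiler is free to conclude that the wraparound test
can never be true and remove it.

int process(char *buf, int len)
{
    if (len + 100 < len)   /* intended wraparound check; an optimizer  */
        return -1;         /* may drop it as always false, since       */
                           /* signed overflow is undefined behavior    */
    /* ... use buf[0 .. len + 99] ... */
    (void)buf;
    return 0;
}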

-- 
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.

