Re: Plan B for fixing 5.8.2 binary API
On Mon, 13 Oct 2003 16:01:49 -0400, Chip Salzenberg <chip@debian.org>
wrote:
>According to Jan Dubois:
>> I think the security implications of hash seeding are totally blown
>> out of proportion.
>
>Possible. But your comments don't persuade me.
>
>> It only helps against one specific DoS attack.
>
>One is enough.
Well, that doesn't persuade *me*. :) How much would *you* be willing to
"pay" for hash seeding? Assume for a moment that it would slow down each
hash access by a factor of 100 or more. Would you still think that hash
seeding would be worth that "cost"? I mean, yes, your application would
run a lot slower, but now it would be "safe" against this specific attack.
In the end, engineering is always about compromise, providing the best
functionality for a reasonable cost (all terms not restricted to their
financial meaning). The problem of course is to figure out what
"reasonable" means once you talk about more than a single person. :)
>> If security trumps everything else, then I wonder why we[*] don't bother
>> to fix the known buffer overrun problems in Perl that result from ignoring
>> integer overflow in New(), SvGROW() etc. Examples for 32 bit machines:
>>
>> perl -e "q(xxxx) x 0x40000000"
>> perl -e "$a[0x40000000] = 1"
>
>I'd never heard of this bug before. IMO, that's something *else* that
>has to get into 5.8.2.
I agree.
>Nevertheless, hash seeding is of high (if not higher) priority because
>of the behavior we can reasonably expect from Perl programmers and the
>expectations they can reasonably place on Perl. Using an unchecked
>input string as a hash key has *always* been safe in ways that using
>an unchecked number as an array index clearly is not. After all, no
>matter what string you get, it's only *one* hash key; nothing will
>blow up if you use it. If you use a big number as an array index,
>*kaboom*. (Even if the overflow bug is fixed, you'll still get an
>out-of-memory exception.)
Note that an out-of-memory exception at the Perl level is "safe". With
the overflow errors, you can address arbitrary memory and/or produce
access violations. Those are two very different things.
I guess we just disagree: I find buffer overruns much more serious than
application slowdown. The former impacts application and data integrity,
the latter only availability.
>In short, what makes the hash seeding so important is not the nature
>of the hashing DOS, but the prevailing and justifiable expectation of
>hash key safety.
That is just FUD. Hash keys are safe even without random hash seeding.
By the same argument you could say an O(n**2) sort algorithm is not
"safe" because it gets very slow on big datasets. But in the end it is
just slow, not unsafe.
Anyway, I don't really want to argue (any more) about this. I agree that
it would be good to improve resilience to such attacks. If that cannot
be done without breaking binary compatibility in the maintenance track,
then it should be done in a way that lets people opt out of these changes
at configure time.
Cheers,
-Jan