
Re: Bug#765512: general: distrust old crypto algos and protocols per default

On Wed, 2014-10-15 at 13:58 -0700, Russ Allbery wrote: 
> The approach that you are taking to this discussion is destroying my
> desire and willingness to explain to you all of the nuance that you're
> ignoring.
Well, I respect that you have a different opinion on security, but I
didn't ask you to explain it to me (if, as you say, you don't like my
approach); admittedly, I don't understand your security philosophy.

So what's wrong with my approach, apart from it clashing with the
paradigm that "security is always a tradeoff"?

> your view of RC4 is simplistic, and is
> ignoring the fact that Kerberos uses RC4 considerably differently than how
> SSL does.  Many of the SSL attacks on RC4 rely on the properties of HTTPS
> and the nature of the data being encrypted, whereas Kerberos uses RC4 in a
> much different mode.  There's a lot of discussion in the crypto community
> about to what extent the same techniques carry over.
Sure, and the same is true for other algorithms and modes... e.g. CBC
isn't unsafe per se.
But I guess you wouldn't call RC4 rock solid, would you? At least all
the crypto folks and papers I know say something in the range from
"don't use it unless you really must" to "stop using it. Now."

Of course it depends on how something is used, but I remember some time
ago, when the first SSL/TLS attacks against CBC and the padding issues
were found, people said "well, we all know it has its issues, but for
SSL/TLS it's okay for now". How long did that hold?

And I don't think that you really believe this won't happen to RC4 in
other contexts, do you? And can you assure that the publicly known
cryptanalysis (which, as you say, may tell us that RC4 is still okay
for krb) is the last word? Who guarantees that there aren't
organisations which can already easily break RC4 in the context of krb?

Of course one can always say "the NSA might already know how to break
it", but in the case of RC4 we know about enough cracks that one can
really see that it's broken, or that this point is at least knocking
on the door.
> Disabling RC4 enctypes breaks interoperability with all Windows XP
> systems.
Which has been known to be approaching its end of life for how many
years now... and in the meantime it's fully out of maintenance.

To be honest, I see no reason to provide interoperability with an
insecure system whose users have known for years that they should act.
I rather feel that all other users, who did their homework or who
aren't using Windows or other incompatible clients at all, potentially
have to suffer from questionable defaults.

I understand you see this differently, but I guess I may also have my
own opinion on it.

> That's clearly going to be the right thing to do soon
I just think it's a big problem if we always wait until the last
minute. That's what I said in the initial post: we should rather try to
move to new algorithms proactively, even if it is not yet strictly
necessary.

Moving on earlier is definitely better for security, even if you have
systems that use PFS. Because, as practice shows, some people always
take very long to update their stuff, so not starting the migration at
the last minute is surely not the worst idea ever.

> and both
> MIT and Heimdal have future plans to do this, which they're coordinating
> with support cycles and feedback from large sites.
Well, especially large sites should probably have had some people
dealing with security, who should have educated themselves about what
they should rather try to phase out.
So again, I don't see why other users should potentially suffer because
some group is slow.
It's the same as with SSLv3, which was basically kept alive for some
years now just because of a minority still using a long-dead browser -
and even if it had been a large fraction, I still think it wasn't fair
for the others to live with it just because of those who didn't move
on.
And as you mention large sites... at least sometimes I experienced
exactly the contrary.
I work for the LHC Computing Grid, which is probably one of (if not
the) largest distributed computing networks in the world... hundreds of
computing centres in a three-tier hierarchy spread all over the world,
with many thousands of scientists using its services and depending on
them for their work, theses, etc.

As you said, often things go extremely slowly, but this changes
completely if there is enough pressure, or if someone in charge simply
says "this is going to be done now, and all sites that don't comply are
going to be blacklisted".
Then new versions and even bigger changes are rolled out or activated
within a few weeks or even days.

> It's unlikely that you're going to be able to make better cost/benefit
> decisions about these things than well-informed upstreams for general use
> cases.  Debian is targeted for general use cases.
Well, as I've said several times before: I never said we should make it
impossible for people to use their older algorithms if they need to.
So what's wrong with the approach of disabling unsafe or soon-to-be
unsafe algorithms per default, adding a NEWS entry, and letting people
re-enable them manually if they need to?
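For the Kerberos case, such a default could look roughly like this (a
sketch using MIT krb5's krb5.conf settings and its enctype names):

```ini
[libdefaults]
    # Refuse single-DES and other weak enctypes outright.
    allow_weak_crypto = false
    # Restrict the default to AES; sites that still need RC4
    # (arcfour-hmac) for old Windows clients can re-add it here.
    permitted_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
    default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
```

Nothing is removed from the library; the XP shop edits one file and
documents to itself that it is running on borrowed time.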

> If we were making a
> security-hardened distribution that chooses security over interoperability
> across the board, we may well want to make other decisions.
So do you advise against efforts to secure / harden Debian? What about
the introduction of hardened compiler flags, AppArmor, SELinux, etc.?

I personally don't think that hardening contradicts being a universal
operating system.

> People want to use their
> computers to do things, not to just be secure.  Security is always a
> tradeoff.  This is inherent in the nature of the work.
Sure, security is a tradeoff... but a) I don't think the typical
Debian user is indifferent about security, and even if so, why
"punish" those who are not?
And b) to use the example of SSLv3 and your point of the user saying
"but I want my website to work and don't care about SSL or TLS or
security" - one could probably suggest those people use plain http
instead, since they wouldn't lose much.

> Debian can
> help push the tradeoffs towards more secure software, but if we go too far
> and just break things, people are going to re-enable insecure stuff and
> not trust our defaults in the future.
Mhh, well, I agree that one shouldn't go too far, but OTOH, not doing
more than we do right now is basically again doing nothing. E.g.
waiting until even the major browser vendors decide, five minutes after
twelve, that they really need to react now.
And if people re-enable insecure stuff, well, then at least they know,
and it's their responsibility, not ours.

Also, I didn't demand that we deactivate everything and only allow
Ed25519. But for those cases and algorithms I mentioned in the
beginning, at least in my opinion, vendors, admins and users could and
should have known for years that they really should phase them out, and
I don't think one can justify putting all users at risk (*per
default*) just because some didn't move on.

>   That, in turn, can easily mean that
> the deployed systems in the wild end up being *less* secure than if we
> maintain users' trust in the defaults.
Okay, but a user can always come along and shoot himself in the foot,
can't he?!

E.g. some time ago we finally got rid of some even older algorithms (at
least in most places), things like DES or IDEA. Some software still
allows enabling them, and if the user does - for whatever reason - it's
his own responsibility.


Attachment: smime.p7s
Description: S/MIME cryptographic signature
