[Freedombox-discuss] FBX Privacy Enabled UX
Fifty Four, thanks for continuing the conversation.
I disagree with a couple of assumptions you make, but also want to point
you to a few things you might like to think about.
I think the underlying difference between us is in the threat-model. I
assume none of the folks I trust are adversaries, while you assume
some of them are. This actually ties into your other message about
trust-models_. I'll get to that at the end.
.. _trust-models: http://lists.alioth.debian.org/pipermail/freedombox-discuss/2012-April/003680.html
On Wed, 4 Apr 2012 15:49:57 +1000, "Fifty Four" <fiftyfour at waldevin.com> wrote:
> On Tue, 03 Apr 2012 09:48:14 -0400, Daniel Kahn Gillmor <dkg at fifthhorseman.net> wrote:
>> It's also worth remembering that (a) screenshots are not iron-clad
>> proof because they're trivially forgeable, and (b) an informant doesn't
>> even need a screenshot to snitch at all.
> (a) False Accusations can be easily dismissed. (b) The accused can
> deny it if there is no proof.
I heartily disagree with (a). False accusations are incredibly
dangerous. Whether or not the evidence is true, that it exists is
enough for it to be presented as true. It's then up to the accused to
disprove it. That screenshots are unreliable evidence matters only when
the accuser is untrustworthy. Accusing your accuser of lying is rarely
a winning strategy.
If the point of this strategy is to remove evidence, then any accusation
that gets pieced together is that much harder to counter, because the
exculpatory evidence doesn't exist either.
>> Even if we could enforce this layer of identity obscurity, and limit
>> ourselves to attackers who inform by taking screenshots, it would mean
>> producing a tool that takes more cognitive effort to use safely and
>> securely. Is "Blue" my sister, or is it that colleague from work who
>> i'm currently frustrated by? This is a high cost to pay, especially if
>> the goal is to make a tool that "just works" for regular humans.
> I agree it takes more cognitive effort and that's the reason I posed
> the email as a UX question. Is the extra assurance of privacy worth
> the cognitive effort? Is there much cognitive effort anyway?... Do you
> know/care who made the inline comment 3 levels deep?
I'm probably not the target audience, but I do. Taking this mailing
list, for example, I don't really care who you are outside of this list,
because I assume all names here are pseudonymous. However, within this
list, I want to know who the comment was said by, within the context of
this list. If James says it, I'll give it much more weight than if
tad-the-guy-who-got-drunk-down-the-street-that-one-time says it. Same
goes for my conversations with family: I'll say different things to my
sister than I would to a colleague or a stranger.
If we're trying to make the software as easy to use as possible, this is
running the other way.
> In my original email, I said the contact page does not include the
> pseudonymous name. The accused could easily deny that was them because
> there is no proof on the contact page.
In the best-case privacy scenario, where the hosting client is
responsible for sending each recipient a copy of the message, that's
certainly possible. I might recommend doing this by signing each
recipient's message with a different PGP key, which the client could
sort out on its end. Were you thinking of something like identicons_?
.. _identicons: https://en.wikipedia.org/wiki/Identicon
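For what it's worth, here's a rough sketch in Python of the kind of thing
an identicon does (all the details here are my own invention, not any
particular implementation): hash the name and turn the bits into a small
symmetric pattern, so the same contact always renders the same picture and
a changed identifier is visually obvious:

```python
import hashlib

def identicon_grid(name: str, size: int = 5) -> list[str]:
    """Derive a small symmetric pixel grid from a name's hash.

    The same name always yields the same pattern, so a recipient can
    spot at a glance when a contact's identifier changes.
    """
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    half = (size + 1) // 2
    rows = []
    for r in range(size):
        cells = []
        for c in range(half):
            bit = digest[(r * half + c) % len(digest)] & 1
            cells.append("#" if bit else ".")
        # Mirror the left half so the grid is left-right symmetric.
        row = cells + cells[-2::-1] if size % 2 else cells + cells[::-1]
        rows.append("".join(row))
    return rows

for line in identicon_grid("Blue"):
    print(line)
```

Real identicon schemes add color and nicer shapes, but the core idea is
just this: a deterministic, recognizable fingerprint of an identifier.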
If it's the client's responsibility to change the received message, then
an adversary can simply modify their client to show the original names. Requiring
the server to deliver the messages does make multi-hop routing
impossible, meaning that both you and your recipient need to be online
at the same time.
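To make the server-delivery variant concrete, here's a toy sketch (my own
illustration, not anything FreedomBox does; the names and the mapping
scheme are all hypothetical) of a server rewriting contact names per
recipient before handing each copy over:

```python
# Sketch: the sender's server substitutes a per-recipient pseudonym
# for each real contact name before delivery, so no two recipients
# see the same label for the same person.
import hashlib

PALETTE = ["Blue", "Green", "Amber", "Violet", "Coral", "Slate"]

def pseudonym_for(real_name: str, recipient: str) -> str:
    """Deterministically pick a pseudonym that varies per recipient."""
    h = hashlib.sha256(f"{real_name}:{recipient}".encode()).digest()
    return PALETTE[h[0] % len(PALETTE)]

def deliver(message: str, contacts: list[str], recipient: str) -> str:
    """Replace each contact's real name with this recipient's view of it."""
    for name in contacts:
        message = message.replace(name, pseudonym_for(name, recipient))
    return message

msg = "Alice and Bob commented on the post."
print(deliver(msg, ["Alice", "Bob"], "carol@example.org"))
```

Note the weakness discussed above still applies: the substitution only
holds if the server, not the recipient's client, is the one doing it.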
With regard to your other email:
> My understanding of key signing is that you only sign for what you
> believe to be true... in most cases people would only want their
> online identifiers (email, IM/video call, blog) signed. That being the
> case why does the OpenPGP community require you to attend a key
> signing party?
> To me, the key signing criteria for OpenPGP Certs seems unnecessarily
> too high, preventing mainstream adoption of what I see as a better
> model. Please help me understand why the criteria is so high compared
> to CA Certs.
It's not. It's up to each person what they'll require before signing a
key. Keep in mind, though, you're not just signing a key for yourself.
You're building a web of trust that other people will use to validate
the identities of other folks they don't know. *If you only trust folks
who're trustworthy, who don't need to be deceived about your identity
before you'll communicate with them*, the Blue-sister concept becomes
meaningless. In fact, it probably hurts communities by saying that you
don't trust their members.
You're free to require only a video call before signing somebody else's
key. That isn't enough for me, though, so your signatures will be
meaningless to me: without out-of-band authentication, any adversary with
control over your Internet connection could manipulate the incoming bits
you're seeing.
For comparison, I'll sign your key if:
1. You send me an encrypted email signed by your public key. This shows
me that you are one of the people who controls the key's private
part. I can never know who you share your private key with.
2. You authenticate that email by telling me something about the email
that's not in the email itself. This tells me that the email wasn't
tampered with and that you were in fact the sender. This can be
performed by writing half a statement in the email and then finishing
the rest verbally.
Your email says: Three blind
You say: mice
Doing it in person is the one way to make sure nobody's mucking with
your incoming bits.
3. We go out and have a beer over the fact that we were able to
authenticate one another's identities. This forms community on top
of trusted identity. This last bit is, I think, why key-signing
parties are important.
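Step 2 is really just a challenge-response run over two channels. A toy
sketch of the check (my own illustration, nothing standardized):

```python
# Sketch: the signer reads half a statement from the signed email and
# hears the other half over a separate channel (phone, in person).
# Only the person who actually wrote the email can complete it.
import hmac

def authenticate(email_half: str, spoken_half: str, expected: str) -> bool:
    """True only if the two halves, joined, match the full statement."""
    combined = f"{email_half} {spoken_half}"
    # Constant-time compare, out of habit more than necessity here.
    return hmac.compare_digest(combined, expected)

print(authenticate("Three blind", "mice", "Three blind mice"))   # True
print(authenticate("Three blind", "men", "Three blind mice"))    # False
```

The second channel is what defeats an adversary who controls only the
email path: they'd have to intercept both channels consistently.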
Making folks jump through this many hoops makes my identity assertions
really strong. Granted, nothing here implicitly associates a public key
with a real world identity, but nothing can make that guarantee. I've
looked at people's government IDs and compared those to known Internet
photos before signing keys, but both of those are fakable. All we can
ever prove is that we have an internally consistent identity
declaration. This is where trust comes into play.
Sorry for going on for so long about this, I've been thinking about this
all week and haven't had a chance to reply till tonight.