[Freedombox-discuss] FBX Privacy Enabled UX
> I disagree with a couple assumptions you make, but also want to point
> you to a few somethings you might like to think about.
> I think the underlying difference between us is in the threat-model. I
> assume none of the folks I trust are adversaries, while you assume
> some of them are. This actually ties into your other message about
> trust-models_. I'll get to that at the end.
I trust my friends, but I don't necessarily trust the friends of my children,
who are immature teenagers. I have to protect my children from themselves.
> >> It's also worth remembering that (a) screenshots are not iron-clad
> >> proof because they're trivially forgeable, and (b) an informant
> >> doesn't even need a screenshot to snitch at all.
> > (a) False Accusations can be easily dismissed. (b) The accused can
> > deny it if there is no proof.
> I heartily disagree with (a). False accusations are incredibly
> dangerous. Whether or not the evidence is true, that it exists is
> enough for it to be presented as true. It's then up to the accused to
> disprove it. That screenshots are unreliable evidence matters only
> when the accuser is un-trustworthy. Accusing your accuser of lying is
> rarely a winning strategy.
I agree that false accusations are incredibly dangerous. However, if you
share the same name as 50 other people, it's difficult to prove which person
you are referring to.
> If the point of this strategy is to remove evidence, any evidence that
> is pulled together is that much harder to counter with exculpatory
> evidence, because it doesn't exist either.
The intention is to protect one's privacy, not to remove the evidence. If
there is abuse, then the evidence is needed to report the abuse to the FBX
Owner. To me, Privacy and the ability to Report Abuse are two sides of the
same coin.
> >> Tradeoffs:
> >> ----------
> >> Even if we could enforce this layer of identity obscurity, and limit
> >> ourselves to attackers who inform by taking screenshots, it would
> >> mean producing a tool that takes more cognitive effort to use safely
> >> and securely. Is "Blue" my sister, or is it that colleague whom
> >> I'm currently frustrated by? This is a high cost to pay,
> >> especially if the goal is to make a tool that "just works" for
> regular humans.
> > I agree it takes more cognitive effort and that's the reason I posed
> > the email as a UX question. Is the extra assurance of privacy worth
> > the cognitive effort? Is there much cognitive effort anyway?... Do
> > you know/care who made the inline comment 3 levels deep?
> I'm probably not the target audience, but I do. Taking this mailing
> list, for example, I don't really care who you are outside of this
> list, because I assume all names here are pseudonymous. However,
> within this list, I want to know who the comment was said by, within
> the context of this list. If James says it, I'll give it much more
> weight than if tad-the-guy-who-got-drunk-down-the-street-that-one-time
> says it. Same goes for my conversations with family: I'll say
> different things to different audiences.
People learn to know the things that are important to them. There was a time
people would have to remember the telephone numbers that were important to
them and look in their address book for the less important numbers. People
filter the noise.
> If we're trying to make the software as easy to use as possible, this
> is running the other way.
I was hoping we would make our software/communications as private as possible.
> > In my original email, I said the contact page does not include the
> > pseudonymous name. The accused could easily deny that was them since
> > there is no proof on the contact page.
> In the best-case privacy scenario, where the hosting client is
> responsible for sending each recipient a copy of the message, that's
> certainly possible. I might recommend doing this by signing each
> recipient's message with a different PGP key, which the client could
> sort out on its end. Were you thinking of something like identicons_?
> .. _identicons: https://en.wikipedia.org/wiki/Identicon
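For what it's worth, an identicon in the GitHub style is just a deterministic image derived from a hash of an identifier, so the same pseudonymous commenter always gets the same picture without the picture revealing who they are. A minimal sketch (the 5x5 grid, SHA-256, and the mirroring scheme are my own illustrative assumptions, not any particular implementation):

```python
import hashlib

def identicon_grid(identifier: str, size: int = 5) -> list[list[bool]]:
    """Derive a horizontally symmetric size x size pixel grid from a hash.

    The same identifier always yields the same grid, so readers can link
    comments by the same pseudonym without learning the real identity.
    """
    digest = hashlib.sha256(identifier.encode("utf-8")).digest()
    half = (size + 1) // 2  # generate the left half, mirror the rest
    grid = []
    for row in range(size):
        cells = [digest[(row * half + col) % len(digest)] % 2 == 0
                 for col in range(half)]
        # mirror: drop the centre column(s) before reversing
        grid.append(cells + cells[:size - half][::-1])
    return grid

def render(grid: list[list[bool]]) -> str:
    """ASCII rendering, standing in for a real image."""
    return "\n".join("".join("#" if c else "." for c in row) for row in grid)

if __name__ == "__main__":
    print(render(identicon_grid("blue@example.org")))
```

Real identicon libraries add colour and larger grids, but the principle is the same: hash in, stable picture out.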
> If it's the client's responsibility to change the received message,
> then an adversary just mods its client to show the original names.
> Requiring the server to deliver the messages does make multi-hop
> routing impossible, meaning that both you and your recipient need to be
> online at the same time.
I was just thinking it would work similar to the way you add a comment to a
WordPress blog. WordPress hides your email address, but instead of you
choosing an identity, FBX would automatically choose one for you. Each time
you add a comment you are linked to your original identity/icon.
Communication is still via my email address; I don't know the underlying
mechanism.
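That server-side linkage could look something like this: the FreedomBox holds a secret and derives a stable pseudonym from each commenter's email with an HMAC, so the same address always maps to the same name while the name reveals nothing about the address. A hypothetical sketch (the word lists, the HMAC construction, and the `pseudonym` helper are all my assumptions, not anything FBX actually does):

```python
import hashlib
import hmac

# Illustrative word lists; a real deployment would want far larger ones.
COLORS = ["Blue", "Red", "Green", "Amber", "Violet", "Teal", "Coral", "Slate"]
ANIMALS = ["Fox", "Owl", "Hare", "Wolf", "Crow", "Seal", "Lynx", "Wren"]

def pseudonym(server_secret: bytes, email: str) -> str:
    """Deterministically map an email address to a readable pseudonym.

    Only someone holding server_secret can recompute the mapping, so
    readers see a stable identity (e.g. "Blue Fox 3f2a") but never the
    address behind it.
    """
    mac = hmac.new(server_secret, email.lower().encode("utf-8"),
                   hashlib.sha256).digest()
    color = COLORS[mac[0] % len(COLORS)]
    animal = ANIMALS[mac[1] % len(ANIMALS)]
    tag = mac[2:4].hex()  # short suffix to disambiguate collisions
    return f"{color} {animal} {tag}"
```

This also supports the abuse-reporting side: the FBX Owner, who holds the secret, can recompute the HMAC over known addresses to find which user a reported pseudonym corresponds to.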
> With regard to your other email:
> > My understanding of key signing is that you only sign for what you
> > believe to be true... in most cases people would only want their
> > online identifiers (email, IM/video call, blog) signed. That being
> > case why does the OpenPGP community require you to attend a key
> > signing party?
> > To me, the key signing criteria for OpenPGP Certs seems unnecessarily
> > too high, preventing mainstream adoption of what I see as a better
> > model. Please help me understand why the criteria is so high compared
> > to CA Certs.
> It's not. It's up to each person what they'll require before signing a
> key. Keep in mind, though, you're not just signing a key for yourself.
> You're building a web of trust that other people will use to validate
> the identities of other folks they don't know. *If you only trust
> folks who're trustworthy, who don't need to be deceived about your
> identity before you'll communicate with them*, the Blue-sister concept
> becomes meaningless. In fact, it probably hurts communities by saying
> that you don't trust their members.
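The "web" part of the web of trust can be pictured as a graph problem: I trust keys I signed directly and, with decreasing confidence, keys signed by people whose signatures I trust. A toy reachability sketch under my own assumptions (a flat signature map and a plain depth limit; GnuPG's real trust model also weights each signer's reliability):

```python
from collections import deque

# signatures[signer] = set of key IDs that signer has signed (toy data)
signatures = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
    "carol": {"dave"},
}

def is_trusted(root: str, target: str, max_depth: int = 2) -> bool:
    """Breadth-first search: is target reachable from root within
    max_depth signature hops?  This only checks reachability; real
    web-of-trust validity also depends on per-signer trust levels."""
    seen = {root}
    queue = deque([(root, 0)])
    while queue:
        key, depth = queue.popleft()
        if depth == max_depth:
            continue
        for signed in signatures.get(key, ()):
            if signed == target:
                return True
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, depth + 1))
    return False
```

The point of careless signatures is visible here: if "alice" signs keys without verifying them, everyone who trusts "alice" inherits the bad edges.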
It's my wish that there be a minimum list of requirements baked into the FBX
software. People are still free to have extra requirements before they
"accept" a requester as a friend. I just think: if CA Certs accept a verified
email address, why can't that be the minimum requirement for an FBX
connection? An FBX could then boast that its minimum identity requirements
match those of a CA.
> You're free to require only a video call before signing somebody else's
> key. That isn't enough for me, so I won't trust your signatures:
> they'll be meaningless to me. I won't trust your signatures because
> you don't require out-of-band authentication. Any adversary with
> control over your Internet connection could manipulate the incoming
> bits you're agreeing to.
I think you are well within your rights to require out-of-band
authentication.
> For comparison, I'll sign your key if:
> 1. You send me an encrypted email signed by your public key. This tells
> me that you are one of the people who controls the key's private
> part. I can never know who you share your private key with.
I agree this should be a minimum requirement.
> 2. You authenticate that email by telling me something about the email
> that's not in the email itself. This tells me that the email wasn't
> tampered with and that you were in fact the sender. This can be
> performed by writing half a statement in the email and then saying
> the rest verbally.
> Your email says: Three blind
> You say: mice
> Doing it in person is the one way to make sure nobody's mucking with
> your incoming bits.
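The half-statement step boils down to a split-challenge check: the verifier records the full phrase, receives one half in the email and the other half out of band, and accepts only if the two halves reassemble into the recorded phrase. A minimal sketch under those assumptions (phrase list and helper names are illustrative):

```python
import hashlib
import secrets

PHRASES = ["three blind mice", "red sky at night", "a stitch in time"]

def issue_challenge() -> tuple[str, str, str]:
    """Pick a phrase and split it; return (email_half, verbal_half, digest).

    The verifier needs to keep only the digest, so even a leaked record
    reveals nothing an eavesdropper couldn't already see in the email."""
    phrase = secrets.choice(PHRASES)
    words = phrase.split()
    mid = len(words) // 2
    email_half = " ".join(words[:mid])
    verbal_half = " ".join(words[mid:])
    digest = hashlib.sha256(phrase.encode("utf-8")).hexdigest()
    return email_half, verbal_half, digest

def verify(email_half: str, spoken_half: str, digest: str) -> bool:
    """Accept only if the reassembled phrase matches the stored digest."""
    candidate = f"{email_half} {spoken_half}".encode("utf-8")
    return hashlib.sha256(candidate).hexdigest() == digest
```

Because the second half travels over a different channel (voice, in person), an adversary who controls only the email path cannot complete the phrase.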
I would agree to your requirement if the time difference wasn't too great,
although personally I think your requirements are quite high for a
mainstream audience.
> 3. We go out and have a beer over the fact that we were able to
> authenticate one another's identities. This forms community on top
> of trusted identity. This last bit is, I think, why key-signing
> parties are important.
I would agree to your requirement if the time difference wasn't too great!!
> Making folks jump through this many hoops makes my identity assertions
> really strong. Granted, nothing here implicitly associates a public
> key with a real world identity, but nothing can make that guarantee.
> I've looked at people's government IDs and compared those to known
> Internet photos before signing keys, but both of those are fakable.
> All we can ever prove is that we have an internally consistent identity
> declaration. This is where trust comes into play.
Interesting to know.
> Sorry for going on for so long about this, I've been thinking about
> this all week and haven't had a chance to reply till tonight.
Thanks for your comments.