[Freedombox-discuss] In-the-cloud infrastructure and business involvement (was: distributed DNS)
On 03/17/2011 09:34 AM, Yannick wrote:
> On Thursday, 17 March 2011 at 13:01 +0000, Bjarni Rúnar Einarsson wrote:
>> On Thu, Mar 17, 2011 at 7:41 AM, Yannick <sevmek at free.fr> wrote:
>> The FreedomBox project is based on software in Debian, which
>> all supports the standard protocols in use today, unless you
>> disable them. E.g. by simply shipping a web browser, it will
>> let people use the "cloud", as you call it, for any purpose
>> they want.
>> There should be no fear about compatibility here. The issue
>> raised here is quite different: as in some cases the Internet's
>> core principle of end-to-end communication is broken, you are
>> talking about some servers in the middle providing communication
>> services. Which I will name: moving bits as a business model.
>> Yes, this is all correct. The Internet's end-to-end principle IS
>> broken, and if we want Freedom Boxes to be able to communicate, we
>> have to work around that limitation. I propose that businesses may
>> have a role to play by helping push bits at various layers of the
> Let's clarify one thing: the end-to-end principle is broken by ISPs; it
> is not broken by design. In some cases they do not give you a real
> Internet address, i.e. a world-wide IP address (even better, a fixed
> one; I do have one with my ISP); in other cases they filter content,
> e.g. forbidding some protocols. They act as administrators of the
> network, taking measures against what you can do with the network,
> against your freedom, not as service providers.
Let me add something to Yannick's excellent summary on the evilness of ISPs:
There is, in my opinion, a way to improve things that is mostly neglected, which
is to use multiple ISPs. Let's be clear on this: not only are ISPs evil, they
are also idiots. My business partner was without Internet for a week because,
after a cable was cut by a mechanical shovel, the ISP spliced it and buried the
splice directly in the ground. It generally does not rain in Silicon Valley, so
all was fine until this winter's floods. You can imagine what went wrong...
This is the reason why I have two Internet providers (and I had three for a
long time).
Now, the thing to understand about the Internet is that it was designed to be
unreliable. When your computer sends a sequence of two IP packets, each packet
can 1) never arrive, 2) arrive in a different order, or 3) arrive multiple
times. Protocols like TCP are built on top of this reality to provide something
more or less reliable, but the fundamental assumption of the Internet does not
change, whatever promises your ISP makes. We do not need premium Internet
connections with guarantees that they cannot deliver anyway. What I would argue
is that we need multiple, *less* reliable and cheaper Internet connections.
Think about it: two independent Internet connections, each with three-nines
availability (about 43 minutes of downtime per month), give an aggregated
availability of six nines (about 2.6 seconds of downtime per month). I did some
work in the last decade on E.911 (emergency services over the Internet), and
one of the main issues was the reliability of the Internet connection (Murphy's
Law: your Internet connection will be down exactly when you need it to call the
cops). With two connections it will almost always be available; with three it
will be better than the PSTN (as long as you use a different medium for each
connection - cable + DSL + wireless, for example).
So, step 1: cheaper, less reliable Internet connections, so we can aggregate
two or three of them for the same cost and get two or three orders of magnitude
more reliability.
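The arithmetic above is easy to check yourself. A small sketch (the three-nines
figures are the same ones used above; the independence of failures is an
assumption, which is exactly why you want a different physical medium per
connection):

```python
# Availability of N Internet connections used in parallel: the aggregate
# is down only when every connection is down at the same time.
# Assumes independent failures - an optimistic assumption, which is why
# a shared conduit or shared power defeats the whole scheme.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def aggregate_availability(availabilities):
    """Probability that at least one connection is up."""
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

def downtime_per_month(availability):
    """Expected seconds of downtime in a 30-day month."""
    return (1.0 - availability) * SECONDS_PER_MONTH

# One three-nines link: about 43 minutes of downtime per month.
print(f"1 link : {downtime_per_month(0.999):10.3f} s/month down")

# Two such links: about 2.6 seconds per month, i.e. six nines.
dual = aggregate_availability([0.999, 0.999])
print(f"2 links: {downtime_per_month(dual):10.3f} s/month down")

# Three links: milliseconds per month.
triple = aggregate_availability([0.999] * 3)
print(f"3 links: {downtime_per_month(triple):10.3f} s/month down")
```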
Now, there is a right way of integrating these multiple connections and there
is a wrong way. The wrong way would be to have a NAT/router to which the two or
three providers are connected, and to do the aggregation there. That would be
wrong because it breaks the end-to-end argument, which says there should not be
any intermediary (especially a "smart" one that chooses which provider to use
at a specific time). No, the right way is to have *all* applications able to
work directly with multiple connections (i.e. multiple IP addresses on
different networks). We are going in this direction with dual stack, but it
really needs to be extended. The reason is that only the application can make
the right decision about which connection to use at a specific time. The poster
child for this is the ICE framework, which is used to select the local IP
address to use when sending RTP packets.
Step 2: All FreedomBox applications should work with multiple IP addresses.
Now there is an additional problem for most people, which is how to connect
*each* computer in the house to multiple ISPs. Having an aggregator like the
one I argued against would solve the problem, but we do not want that. The
solution, in my opinion, is to use a different VLAN for each ISP. I personally
have a managed switch that takes care of this (I have three VLANs: one for
Comcast, one for AT&T, and one for the IPv6 beta from Comcast, in addition to
the IPv6 tunnel to Hurricane Electric), but people probably do not want to
invest in something like this. One project I have is to use a WRT54GL running
OpenWrt to do the same thing. If you look at this box, it has one port for the
WAN connection and four for the LAN connections. What I did was simply to
reverse this: I now have four WAN connections, which are sent over the one LAN
connection using VLAN tagging.
Step 3: Provide a cheap NAT/Router replacement so multiple Internet connections
can be available for each device/computer in the house.
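For the curious, the port-reversal idea sketches out roughly like this in
OpenWrt's switch configuration. This is untested and purely illustrative: the
UCI syntax shown is the swconfig style of more recent OpenWrt releases (the
original WRT54GL images used an older format), and the port numbers are
hypothetical since they vary by hardware revision:

```
# /etc/config/network (sketch, not tested on a WRT54GL)

config switch
	option name 'switch0'
	option enable_vlan '1'

# One VLAN per ISP on what used to be the "LAN" ports, each also
# tagged onto the former "WAN" port (4) which now acts as the trunk
# into the house, plus the CPU port (5).
config switch_vlan
	option device 'switch0'
	option vlan '1'
	option ports '0 4t 5t'   # port 0: ISP 1, untagged

config switch_vlan
	option device 'switch0'
	option vlan '2'
	option ports '1 4t 5t'   # port 1: ISP 2, untagged
```

Each device in the house then sees every ISP as a separate tagged VLAN on a
single cable, and can bring up one interface (and one IP address) per provider.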
Personal email: marc at petit-huguenin.org
Professional email: petithug at acm.org