So Egypt has its internet back, but I still can't figure out precisely
what was gone when it was gone. Can you help? So far as I can determine:
- Cellular service was shut off only in certain regions (eg, at the sites
of the major protests) while remaining on elsewhere in the country.
- Landlines continued functioning everywhere.
- I've seen no sign that domestic internet was affected. For example,
it's possible that all homes and businesses still had live network
connections that simply weren't resolving DNS, or perhaps *were*
resolving DNS internally. It's even possible that local DNS caches were
resolving completely normally -- even for international domains --
except there were no routes to the IP addresses to which they resolved.
- Indeed, even if all ISPs turned off all broadband everywhere, there
would still be large pockets of functioning LANs (universities, housing
complexes, hotels, etc).
- The consequences of a total domestic internet blackout would be very severe.
I don't see any sign that government and critical services run their
own network (though the military might), nor any sign that the internet
was selectively disabled for only individuals and businesses while
sparing hospitals, police stations, power plants, etc. Furthermore, I
haven't heard that any critical services lost domestic internet or
telephone access, even though I imagine that would be a very interesting
story if true.
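These scenarios aren't just speculation fodder; DNS resolution and IP routing can be probed independently from inside the affected network. A sketch in Python (the hostname and probe IP are arbitrary placeholders, not anything specific to Egypt):

```python
import socket

def classify(dns_ok, route_ok):
    """Map the two independent checks onto the scenarios above."""
    if dns_ok and route_ok:
        return "full connectivity"
    if dns_ok:
        return "DNS resolves but no route to the resolved address"
    if route_ok:
        return "routes work but DNS is down"
    return "total blackout (or local link down)"

def diagnose(host="twitter.com", probe_ip="8.8.8.8", port=80, timeout=5):
    """Check DNS and raw IP routing independently. The host and probe_ip
    here are illustrative; any external name and well-known IP will do."""
    try:
        resolved = socket.gethostbyname(host)
        dns_ok = True
    except socket.gaierror:
        resolved, dns_ok = None, False
    # Probe reachability of an IP address directly, bypassing DNS entirely.
    target = resolved if dns_ok else probe_ip
    try:
        with socket.create_connection((target, port), timeout=timeout):
            route_ok = True
    except OSError:
        route_ok = False
    return classify(dns_ok, route_ok)
```

Someone running `diagnose()` from inside the affected network would learn which layer was actually cut.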
I think understanding what actually happened is important so that we
can plan and act in a way that is optimized for the real world, rather
than a (potentially) unreasonable worst-case scenario that never
actually occurs. I'd love your help in fact-checking the above
assumptions by providing evidence (links) to the contrary.
Regardless, none of this really changes my core thesis, which is that
whatever solution is built must:
- Somehow become very popular and widely deployed *before* the event,
requiring substantial "added value" even when the internet is accessible.
- Define "added value" in terms that the average person cares about,
which is *not* anonymity, security, privacy, etc. Rather, it needs to
be speed, convenience, reliability, and so on.
- Take an approach of "adding to" rather than "replacing" whatever
more-popular alternatives are already in place (twitter, aim,
bittorrent, skype, etc) so as to ensure users sacrifice nothing by using it.
- Take best, simultaneous advantage of whatever resources are available,
at all times. If there is Bluetooth, use it. If there is a functioning
LAN, use it. If there is a functioning sitewide/domestic/international
WAN, use it. And so on.
- Anticipate the imminent failure of any of these methods at any time by
proactively establishing fallbacks (eg, a DHT in case DNS fails, gossip
in case the DHT fails, sneakernet in case wireless fails, etc.).
- Require no change in user behavior when one or more methods fail. So
the interface used to tweet, fileshare, make a call, etc -- all these
need to work the same for the user (to the greatest possible degree)
irrespective of what transport is used.
- Work on standard, unaltered consumer hardware (no custom firmware, mod
kits, jailbreaks, etc) with standard installation methods (app stores,
web, etc).
- Be incredibly easy to use for people who aren't tech-savvy. This
means spending 10x more time testing and refining the usability of the
system than actually developing sexy esoteric features.
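To make the "use everything, all the time" requirement concrete, here's a minimal sketch of simultaneous multi-transport broadcast with proactive availability checks. The transport names and checks are placeholders, not real implementations:

```python
# A minimal sketch of proactive transport fallback: every transport is
# probed on every send, in order of preference, rather than waiting for
# the preferred one to fail mid-use. Transport names are illustrative.
class Transport:
    def __init__(self, name, available, send):
        self.name = name
        self.available = available  # () -> bool
        self.send = send            # (message) -> None

def broadcast(message, transports):
    """Send over every currently-available transport; return the names used."""
    used = []
    for t in transports:
        if t.available():
            t.send(message)
            used.append(t.name)
    return used

# Example: the WAN is down, so the message goes out over LAN and Bluetooth.
log = []
transports = [
    Transport("wan", lambda: False, lambda m: log.append(("wan", m))),
    Transport("lan", lambda: True, lambda m: log.append(("lan", m))),
    Transport("bluetooth", lambda: True, lambda m: log.append(("bluetooth", m))),
]
print(broadcast("hello", transports))  # ['lan', 'bluetooth']
```

The point of the structure is that losing any one transport changes nothing for the caller: `broadcast` is invoked the same way regardless of what's up.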
I really do think this is a relatively easy thing to build (at least, in
a minimal form), using existing hardware and proven algorithms. I'd
suggest something like:
1) Start with an open-source twitter application. Google suggests this
one, though I haven't personally used it:
http://getbuzzbird.com/bb/
2) Add a central server, just to make it really easy for nodes to
communicate directly. (We'll replace this with a NAT-penetrating mesh
*after* the much more difficult task of getting this popular and widely
deployed.)
3) When you start up, connect to this central server. Furthermore,
whenever you see a tweet from anyone else using this client, "subscribe"
to that user via this central server. (Eventually you'd establish a
direct connection here, but we'll deal with that later.)
4) Every time you tweet, also post your tweet to this central server,
which rebroadcasts it in realtime to everyone subscribed to you. Voila:
we've just built an overlay on top of twitter, without the user even
knowing. All they will know is that tweets from other BuzzBird users
for some reason appear instantly. And the next time twitter goes down,
all tweets between buzzbird users will continue functioning as normal.
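Steps 3 and 4 amount to a tiny pub/sub hub. A minimal in-memory sketch (real clients would connect over sockets; the callbacks here just stand in for live connections):

```python
from collections import defaultdict

class CentralServer:
    """Sketch of steps 3-4: a hub that tracks who follows whom and
    rebroadcasts each tweet to the author's subscribers in realtime."""

    def __init__(self):
        self.subscribers = defaultdict(set)  # author -> set of follower ids
        self.clients = {}                    # client id -> deliver callback

    def connect(self, client_id, deliver):
        self.clients[client_id] = deliver

    def subscribe(self, follower, author):
        # Called whenever a client sees a tweet from another overlay user.
        self.subscribers[author].add(follower)

    def publish(self, author, tweet):
        # Rebroadcast immediately to every connected subscriber.
        for follower in self.subscribers[author]:
            if follower in self.clients:
                self.clients[follower](author, tweet)

# Usage: bob subscribes to alice; alice's tweet arrives instantly.
server = CentralServer()
inbox = []
server.connect("bob", lambda author, tweet: inbox.append((author, tweet)))
server.subscribe("bob", "alice")
server.publish("alice", "overlay works")
print(inbox)  # [('alice', 'overlay works')]
```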
5) Then start layering features on top of this, focused on making a
twitter client that is legitimately the best -- irrespective of the
secret overlay.
6) For example, add a photo tweeting service. Publicly it'll use
twitpic or instagram or whatever, so all other users will see your
photos just fine. But buzzbird users will broadcast the photo via this
central server, faster and more reliably than the other services, and
cache a local copy for offline viewing. Repeat for video, files,
phone calls, etc.
7) At some point when you've established that people actually like and
use buzzbird with its very simple and fast central server, THEN start
thinking about P2P. (Seriously, do NOT think about P2P before then as
it'll only slow you down and ensure your project fails.)
8) To start, keep the central server and just use it as a rendezvous
service for NAT penetration. So still centrally managed, but with
direct P2P connections. Then when you come online, you immediately try
to establish NAT-penetrated direct connections with everyone you're
following. This of course immediately presents you with challenges: do
you need to connect to *everyone* you follow? If only some, which? Take
these problems one at a time knowing you can always fall back on the
central server until you perfect it. In other words, the goal is to
remove the central server, but you can take baby steps there by weaning
yourself off it.
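The rendezvous role in step 8 is small enough to sketch: the server shrinks to an address book that never relays tweets, only telling two peers each other's observed endpoint so they can attempt a direct connection. The actual hole-punching handshake (simultaneous UDP sends, keepalives) is omitted here:

```python
class Rendezvous:
    """Sketch of step 8: a rendezvous service for NAT penetration.
    It stores each client's (ip, port) as observed by the server and
    introduces pairs of peers that want a direct connection."""

    def __init__(self):
        self.endpoints = {}  # client id -> (ip, port) seen by the server

    def register(self, client_id, observed_endpoint):
        self.endpoints[client_id] = observed_endpoint

    def introduce(self, a, b):
        """Return both endpoints, or None if either peer is offline."""
        if a in self.endpoints and b in self.endpoints:
            return self.endpoints[a], self.endpoints[b]
        return None

rv = Rendezvous()
rv.register("alice", ("203.0.113.5", 40001))
rv.register("bob", ("198.51.100.9", 40002))
print(rv.introduce("alice", "bob"))
# (('203.0.113.5', 40001), ('198.51.100.9', 40002))
```

The addresses are documentation-range placeholders; the design point is that the server's job collapses to a lookup table that's easy to replicate or, later, remove.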
9) Similarly, add Bluetooth, ad-hoc wifi, USB hard drive sync (aka
"sneakernet"), etc. These would all be presented to the users in terms
of real world benefit ("Keep chatting while on the airplane!" "Share
huge files with a USB flash drive!" and so on.), while simultaneously
refining the tools that they'll use during a partial or total internet
blackout.
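The sneakernet sync in step 9 reduces to merging message stores keyed by a unique id; carrying a drive between machines and merging on each end propagates whatever either side is missing, with no network at all. A tiny sketch (the store format is hypothetical):

```python
def sync_stores(store_a, store_b):
    """Sketch of step 9's USB sync: merge two message stores keyed by a
    unique tweet id, so each side ends up with the union of messages."""
    for tweet_id, body in list(store_a.items()):
        store_b.setdefault(tweet_id, body)
    for tweet_id, body in list(store_b.items()):
        store_a.setdefault(tweet_id, body)

# Usage: a node and a flash drive each hold a tweet the other lacks.
alice = {"t1": "first tweet"}
usb_drive = {"t2": "second tweet"}
sync_stores(alice, usb_drive)
print(alice == usb_drive)  # True
```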
10) Eventually, when you've figured out how to move all functions off the
central server -- the nodes start up, establish direct connections to
some relevant subset of each other, build a DHT or mesh network, with
nodes that can reach the network relaying for nodes that can't, etc --
the last function will be
"how does a newly-installed node find its first peer?" This is called
the "bootstrapping" problem, and is typically done with a central
server. But it needn't be done with *your* server. Just use twitter
for this: every time you start, re-watermark your profile image with
your latest IP address (or perhaps put it into your twitter signature
line, or location, or something). This way the moment you do your first
tweet, everybody who sees it will try to contact you (or some subset, so
as not to overload you). Then you can turn off your own central service
and just use Twitter's.
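As a sketch of how little space an address needs in a profile field: an IPv4 address plus port packs into six bytes, or roughly ten text characters. The encoding and the choice of field are purely illustrative (a watermarked image would work the same way, just with more steganography):

```python
import base64
import socket
import struct

def encode_endpoint(ip, port):
    """Pack an IPv4 address and port into a short base32 token that
    could be dropped into a profile field (field choice hypothetical)."""
    packed = socket.inet_aton(ip) + struct.pack("!H", port)
    return base64.b32encode(packed).decode("ascii").rstrip("=")

def decode_endpoint(token):
    """Recover the (ip, port) pair from a token found in a profile."""
    padded = token + "=" * (-len(token) % 8)
    raw = base64.b32decode(padded)
    ip = socket.inet_ntoa(raw[:4])
    (port,) = struct.unpack("!H", raw[4:6])
    return ip, port

token = encode_endpoint("203.0.113.5", 40001)
print(token, decode_endpoint(token))
```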
11) When the "dark days" come and twitter goes offline, your nodes won't
even notice. They'll continue to establish their DHT with whatever
subset of the network is interconnected, relay for each other, etc. If
the internet is totally gone, your users will use bluetooth, wifi,
sneakernets. They'll be ready because you *trained* them to survive on
their own *before* they needed to, rather than just handing them a knife
assuming they'll know how to use it when the time comes.
Really, the biggest challenge in all of this is that whoever gets to step
(6) will immediately be acquired by Twitter for an enormous sum of money.
But hopefully that person will, with their new-found wealth, continue on
to (7) and beyond.
This is a doable thing. One motivated person could do most if
not all of this. Is that person you?
-david
3 comments:
This is loaded with wisdom. We need to see this vision spread.
Implementing mesh routing protocols on consumer equipment may prove difficult because the software must often update the routing tables directly. It's a problem a few of us have on our plate, but we haven't gathered enough information up front to decide the direction things need to go in yet.
Buzzbird is a Twitter client, not an implementation of a microblog. It's not bad, actually - lots better than New Twitter(tm), I think (YMMV). For what you describe you may wish to check out Tahrir: https://github.com/sanity/tahrir/wiki
Making something like Tahrir both a Twitter client and a microblogging app in itself would not be difficult. It is possible now to connect an instance of status.net to Twitter and use it to update one's Twitter account.
I disagree with your stance on not integrating peer-to-peer networking until later in the project. It would be easier to "bake it in" from the design onward than it would be to stick it onto the side. Perhaps if there was an on/off switch in the peer-to-peer functionality, off by default until someone turns it on?
I like how you framed getting people to use mesh apps. I'll definitely pass along a link to this post.
As for why I suggest holding off on the P2P, in general I think it's best to focus on the hardest thing first -- and P2P isn't even remotely the hardest thing. Rather, getting software that people use en masse during the "good times" is the priority. Granted, you *might* be able to use P2P to differentiate your app during the good days. But I doubt it. Rather, I think you should take whatever Twitter client you can get your hands on, and then start super-polishing it to make it the best frickin' twitter client out there. Make it incredibly easy to tweet, post photos, videos, etc. Give it seamless integration with Twitter, Flickr, Youtube, etc. *Then*, once people start really using it, quietly slip in the P2P part. But the P2P part needs to come *after* the popularity part, because without popularity, P2P ain't nothing.