Quinthar

How to build a system to "Opt Out" of warrantless wiretapping

I love the p2p-hackers mailing list.  So many smart people talking about so many cool things.  The conversation has recently turned to how to build a VoIP system that would bypass the NSA's warrantless wiretapping to the greatest possible degree, while still being usable in the real world.  Here's my proposal:

--

I think the challenge with building a new phone system is that the existing phone systems are already amazing.  Skype filled a void for cheap long distance.  But now that that void is filled, and you can call anybody in the world from your phone easily (unlimited nationwide plans are common, and Skype covers the rest of the world), it'll be very difficult for any new voice service to take root.  But I could imagine it happening if:

1) It's backwards compatible with the current voice services (eg, you can call anybody with it, and they can call you, regardless of whether they use the service)

2) It offers tangible value to you even if the person you're calling isn't using it

3) It offers tangible value to you even if the people calling you don't use it

4) Those values increase as the number of people who use it increases

5) It automatically advertises itself

With these, you can fully adopt this service -- without any downsides -- and gain value from it regardless of whether anybody else does.  Furthermore, the value increases as the network size increases, so you have an incentive to encourage others to use it.  As for what that service might be, that's a tall bar.  But I could imagine protection against dragnet-style government surveillance being compelling to a certain demographic.

As for how that might work, that's tough.  But imagine a new VoIP client like the old Skype (eg, P2P with a distributed relay service for NATs/firewalls), except truly encrypted.  That would be pretty straightforward to do: the audio/video codecs are pretty refined, and there are great P2P libraries ready to go.  The problem is: nobody is using it, so you have no reason to use it either.

But what if everybody registered their "real" phone number with some DHT, and then coupled this app with a collection of VoIP->POTS (Plain Old Telephone Service) gateways?  So when I type in your phone number, it first checks to see if it can reach you via this secure system, and if so contacts you directly via VoIP.  But if you aren't in the system, it just calls you via a POTS gateway.

Ok, so now we're backwards compatible, but it still doesn't really give me any advantage if nobody else is using it.  So what if rather than just using one VoIP gateway, there were hundreds, scattered across every area code and every network?  Then when I call you, if I can't use my truly secure VoIP connection, it instead just routes the call through one of hundreds of random gateways.  Voila -- we both get protection from dragnet collection of metadata (the NSA just sees that someone called you through one of these many gateways, without knowing it's me) *even though* you don't use the system.
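
To make the routing logic concrete, here's a minimal sketch in Python.  Everything here (the dht dict, the gateway names) is a hypothetical placeholder rather than a real library:

    import random

    def place_call(phone_number, dht, pots_gateways):
        # The DHT maps registered phone numbers to VoIP node IDs.
        node_id = dht.get(phone_number)
        if node_id is not None:
            # Both parties run the app: direct, end-to-end encrypted VoIP.
            return ("voip", node_id)
        # Callee isn't registered: fall back to a random POTS gateway, so an
        # observer only sees a call arriving from *some* gateway -- not from me.
        return ("pots", random.choice(pots_gateways))

    # Toy data for illustration:
    dht = {"+15551234567": "node-abc123"}
    gateways = ["gw-area212", "gw-area415", "gw-area617"]
    print(place_call("+15551234567", dht, gateways))  # ('voip', 'node-abc123')
    print(place_call("+15559876543", dht, gateways))  # ('pots', <random gateway>)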

Next, every time I call someone through this system and it falls back on the POTS gateway, it plays a message saying something like "This line is only partially secured; install XXXX app to get fully secured.  Connecting..."  Now every user of this thing is automatically advertising what it is to recipients.  The more it's used, the more it grows.  Indeed, you could also couple it with SMS such that the first time anybody calls a new number, it texts that number a link explaining what the app is and where to download it.

Ok, so now we have a system that is backwards compatible, breaks the "chicken and the egg" dilemma by offering value "out of the box" even to a single user, and automatically promotes itself.  But what about incoming calls?  How can I get the benefit of anonymity, but still give you a number that you can reliably call to get me?

This one is a lot harder.  One approach would be to let me generate new phone numbers on the fly, such that I can give out different numbers to everyone and they all go back to me.  Again, anybody who calls these numbers with POTS would get connected to me transparently via the VoIP gateway (and might hear the marketing message / receive the SMS), and anybody who calls inside the system gets me directly.

A problem with this is there are only so many phone numbers, and they cost money.  So a different approach might be to just maintain like a hundred numbers, each of which has an "extension".  So I give you a number like (XXX) XXX-XXXX x XXXX -- it's a bit of a pain to use extensions, but it gives the same effect.
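
Here's a rough sketch of how that number-plus-extension directory might work (the number pool and names are made up for illustration):

    import random

    # A pool of ~100 real numbers we own; each supports 10,000 extensions,
    # giving a million distinct aliases to hand out.
    POOL = ["+1555000%04d" % i for i in range(100)]

    class ExtensionDirectory:
        def __init__(self):
            self.routes = {}  # (number, extension) -> user

        def provision(self, user):
            # Hand out a fresh (number, extension) pair that routes to user.
            while True:
                key = (random.choice(POOL), "%04d" % random.randrange(10000))
                if key not in self.routes:
                    self.routes[key] = user
                    return key

        def resolve(self, number, extension):
            return self.routes.get((number, extension))

    d = ExtensionDirectory()
    number, ext = d.provision("david")
    print("Give out: %s x %s" % (number, ext))
    print("Routes back to: %s" % d.resolve(number, ext))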

Then tie this with a Gmail plugin that auto-randomizes your phone number in emails you send out (so you enter your own phone number, and it provisions/randomizes before delivery), and maybe something that just provisions a bunch of random numbers and prints out business cards to make it easy to deliver.

Oh, and all this could work for SMS as well.

Anyway, something like this might allow individuals to "opt in" to a new secure platform, without needing to "opt out" from the real world.

The Future of Copyright

My entry to this contest:

http://www.indiegogo.com/Future-of-Copyright

I predict the future of copyright will look a lot like today's war on
drugs: the only people supporting it will be people either profiting
from it or unaffected by it -- the people it purports to protect (not
to mention the people actually targeted) will range from apathetic
tolerance for it to futile resistance against it. Copyright will render the
entire content industry a wasteland of legal risk and weak offerings,
where all the top innovation and product development happens by
criminals. Piracy will become increasingly widespread and socially
acceptable, and future generations will just accept the absurd status
quo as "something previous generations did that we've just got to deal
with; thanks grandpa for fucking it up for everyone."

Fwd: Pho: How litigation only spurred on P2P file sharing

About time somebody wrote a book on this!

-david

------

http://www.itnews.com.au/News/279763,how-litigation-only-spurred-on-p2p-file-sharing.aspx


How litigation only spurred on P2P file sharing
By Rebecca Giblin on Nov 11, 2011

Analysis: Did the content industry lose the legal battle?

Do you remember back in 2001 when Napster shut down its servers? US
courts found Napster Inc was likely to be liable for the copyright
infringements of its users. Many of Napster's successors were also
shut down.

Aimster and its controversial CEO were forced into bankruptcy, the
highest court in the US strongly suggested that those behind Grokster
and Morpheus ought to be held liable for "inducing" their users to
infringe, and Kazaa's owners were held liable for authorisation by our
own Federal Court. Countless others fled the market in the wake of
these decisions with some, like the formerly defiant owners of
Bearshare and eDonkey, paying big settlements on the way out.

By most measures, this sounds like an emphatic victory for content
owners. But a funny thing happened in the wake of all of these
injunctions, shutdowns and settlements: the number of P2P file sharing
apps available in the market exploded.

By 2007, two years after the US Supreme Court decided Grokster, there
were more individual P2P applications available than there had ever
been before. The average number of users sharing files on file sharing
networks at any one time was nudging ten million and it was estimated
that P2P traffic had grown to comprise up to 90 percent of global
internet traffic. At that point content owners tacitly admitted
defeat, largely abandoning their long-time strategy of suing key P2P
software providers and diverting enforcement resources to alternatives
like graduated response or "three strikes" laws.

Why is it that, despite being ultimately successful in holding
individual P2P software providers liable for their users'
infringement, content owners' litigation strategy has failed to bring
about any meaningful reduction in the amount of P2P development and
infringement?

Physical vs digital

I would argue pre-P2P era law was based on a number of "physical
world" assumptions. That makes sense, since it evolved almost
exclusively with reference to physical world scenarios and
technologies. However, as it turns out, there is often a gap between
those assumptions and the realities of P2P software development.

Four such physical world assumptions are particularly notable in
explaining this phenomenon.

The first is that everybody is bound by physical world rules. Assuming
this rule had universal application, various secondary liability
principles evolved to make knowledge and control pre-requisites to
liability. But software has no such constraint. Programmers can write
software that will do things that are simply not possible or feasible
in the physical world. So once the Napster litigation made P2P
programmers aware of the rules about knowledge and control, they
simply coded Napster's successors to eliminate them – something no
provider of a physical world distribution technology ever managed to
do.

In response, the US Supreme Court in Grokster created a brand new
legal doctrine, called inducement, that did not rely on either
knowledge or control. That rule was aimed at capturing "bad actors" -
those P2P providers who aimed to profit from their users' infringement
and whose nefarious intent was demonstrated by "smoking guns" in their
marketing and other communications. But the inducement law failed to
appreciate some of the other differences that make the software world
special and thus led directly to the explosion in the number of P2P
technologies. In understanding why, three other physical world
assumptions come into play.

One is that it is expensive to create distribution technologies that
are capable of vast amounts of infringement. Of course in the physical
world, the creation of such technologies, like printing presses,
photocopiers, and VCRs required large investment. Research and
development, mass-manufacturing, marketing and delivery all require
massive amounts of cash. Thus, the law came to assume that the
creation of such technologies was expensive.

That led directly to the next assumption – that distribution
technologies are developed for profit. After all, nobody would be
investing those massive sums without some prospect of a return.

Finally comes the fourth assumption: that rational developers of
distribution technologies won't share their secrets with consumers or
competitors. Since they needed to recoup those massive investments,
they had no interest at all in giving them away.

All of these assumptions certainly can hold up in the software
development context. For example, those behind Kazaa spent a lot on
its development, squeezed out the maximum possible profit and kept its
source code a closely guarded secret. By creating a law that focused
on profits, business models and marketing, the Supreme Court succeeded
in shaking out Kazaa and its ilk from the market.

But the Court failed to appreciate that none of these things are
actually necessary to the creation of P2P file sharing software. It
can be so inexpensive to develop that some university programming
courses actually require students to make an app as part of an
assignment. When the software provider puts in such a small
investment, there's much less need to realise a profit. This, combined
with widespread norms within the software development community
encouraging sharing and collaboration, also leads to some individuals
making the source code of their software publicly available for others
to adapt and copy.

When the US Supreme Court created its new law holding P2P providers
liable where they "fostered" third party infringement, as evidenced by
such things as business models, marketing and internal communications,
the result was an enormous number of programmers choosing to create
new applications without any of those liability-attracting elements.
In the absence of any evidence that they had set out to foster
infringement, they could not be liable for inducement, and having
coded out of knowledge and control they could not be held liable under
the pre-P2P law either.

The end result? The mismatch between the law's physical world
assumptions and the realities of the software world meant that the law
created to respond to the challenges of P2P file sharing led to the
opposite of the desired result: a massive increase in the availability
of P2P file sharing software. The failure of the law to recognise the
unique characteristics of software and software development meant the
abandonment of the litigation campaign against P2P providers was only
a matter of time.

Dr Rebecca Giblin is a member of Monash University's law faculty in
Melbourne. Her new book Code Wars tells the story of the decade-long
struggle between content owners and P2P software providers, tracing
the development of the fledgling technologies, the attempts to crush
them through litigation and legislation, and the remarkable ways in
which they evolved as their programmers sought ever more ingenious
means to remain one step ahead of the law. The book explains why the
litigation strategy against P2P providers was ultimately unsuccessful
in bringing about any meaningful reduction in the amount of P2P
development and infringement.

Visit codewarsbook.com where you can read the first chapter in full.
Physical copies can be ordered online from stores like Amazon and Book
Depository, and electronic copies are available via Google books at a
heavily discounted price.

Copyright © iTnews.com.au. All rights reserved.

---------------------------------------------------------------------

This is the Pho mailing list, hosted by griffinatonehousedotcom and
johnparresatgmaildotcom. Email one or both of us to unsubscribe,
subscribe or otherwise address any issues related to this list.

Our Place in Eternity

After an all-night reading binge, I finished re-reading Isaac Asimov's "The End of Eternity" -- one of my favorite books by my favorite author.  I hadn't read it since, I don't even know, high school?  In the nearly twenty years since, it's always stuck with me.  Like all of Asimov's work, I think he wraps an adequate story around some core, brilliant concepts.

In this case it's exploring the consequences of humanity inventing, in essence, a "time elevator" -- step in at one year, and step out at any other.  It can go backwards through time as far back as when it was created (in this case, around 2400), and forward as far as when the sun becomes a supernova.  The story centers around a group of people called The Eternals who manage and use the elevator, ostensibly for the purpose of enabling trade between times (your century has deforestation? import wood from the future!), but secretly also to tweak time for the greater good of humanity: something as simple as shifting a jar from one shelf to another could prevent disease, war, and the ever-present threat of nuclear apocalypse.  Wrap in a mystery (why can't you get out of the elevator between the years 10M-10.5M, and why aren't there any people after 10.5M years?), a romance (between an Eternal and a Timer), and enough paradox-management to make Primer seem sensible, and you've got yourself a heck of a book.

The first time I read it all those years ago, I was most interested in the ultimate consequences of a well-intentioned organization devoted to the ostensibly positive goal of mitigating the worst disasters of human history and ensuring trillions of lives achieve the happiest possible existence.  (Read the book to learn how that all ends up.)  

But this time I was more interested in the minutiae of time alteration itself: the so called "ripple" of small changes having profound long-term consequences.  This isn't a remotely new concept, but a few other recent developments made me think more on it.


Jumping to the present: I'm writing this from a balcony overlooking a river full of long, low boats in Hoi An, Vietnam.  I'm here with my company for our annual month-long retreat, where we leave the real world behind for some foreign location far removed by space and time(zones) to work hard, get to know each other, and ultimately get a new perspective on all the things we take for granted -- in both our personal and professional lives.  This is our fifth annual trip, spanning this startup and the last, and it's always a very interesting experience.

Most of it involves a lot of sitting around in cafes working on our laptops, but on weekends we typically head out on some adventure.  Yesterday was such a day, and we rented moto-scooters and zipped over to the Marble Mountains -- five tall pillars of rock jutting out of an otherwise perfectly flat region.  Atop the tallest of the five is an active Buddhist monastery, with tall pagodas and stunning vistas in all directions.

But more interesting still were the caves worn into this rock over the millennia, each of which was repurposed into a different temple -- turning the entire mountain into a single huge temple, with awe-inspiring statues nestled deep within the earth.  The improbable size of the underground caverns and their enormous carved-in-place statues was, at times, overwhelming.  It's difficult to even imagine the manpower required to first haul the materials for such a monastery up the near-vertical mountain trail, by hand, and then continue an equal distance back down into the earth to build within the caves.  As a foreigner who doesn't share the religion (or, in my case, any religion), I couldn't help but wonder: Who would do such a thing, and why?

The "who" part of that is of course clear: the large number of Buddhists in the region built the temple at enormous expense over a tremendous period of time.  But the "why" is what captured my attention.

Why spend so much energy on such a magnificent creation, in a region that clearly could have benefited by that energy being spent elsewhere?

Now again, there are a host of obvious answers, each of which plays a part.  For one, there's the allure of earning favor from the gods.  When trying to influence your fate in this life (or position in the next), there are obvious advantages to participating in such an exercise.

And of course there's the political and religious order that depends on the physical show of strength such imposing structures create.  If we can cause this to happen in this world, so the reasoning goes, imagine what we can do in the next.

But I think these are only symptoms of something more fundamental and universal.  After all, imposing structures aren't the exclusive work of theistic religions.  Atheistic ancestor worship nearly always includes lavish shrines or temples to honor the dead -- the people least likely to benefit from the attention.  And this isn't just a religious desire.  Nations, businesses, and even individuals invest in physical structures rich in symbolism: the national monuments, the stone facades of banks, or even a marble headstone to carry your name forward into history long after you're gone.

What is it with the universal desire to transfigure the temporary into the permanent using tangible symbology?  And why is the primary medium used almost always stone?

And this brings me back to the book: I think there's a strong desire in nearly all people to create a "ripple in time", as big as they can.  It's almost literally like dropping a stone into a river: the bigger the stone, the bigger the ripple.  After all, diamonds may last forever, but they tend to move without your permission once you turn your back.  A giant stone statue has a bit more permanence, especially when hidden on top of a giant pillar of rock, down in a deep cave, elevated on a tall platform out of reach, and physically larger than all openings.  *That's* forever.  

So I could take this thought experiment in a few directions.  One would be to challenge whether stone is the best medium for creating a splash.  Other common ones include DNA, religion, ideology, myth, lore, legend, teaching, post-dated letters, memoirs, commissioned artwork, family heirlooms, etc.  You can't help creating a ripple of some sort, nor avoid being a product of the ripples that came before you.  But you *can* take action in your life to maximize the extent of your ripple.

(And there are high-tech solutions like Kiva.org, enabling micro-loans that, once repaid, can be re-invested in other people in need.  This has the effect of identifying people with entrepreneurial spirit, giving them the capital to grow their businesses and improve their standing in the world, and enabling them to spread the entrepreneurial spirit in all the same ways everyone else does -- but with more means at their disposal.  A single Kiva investment could be re-invested every 6 months for 50 years (based on a 2% default rate), meaning a single investment can help a hundred people over a major fraction of your lifetime.  *That's* a ripple. -- Thanks to Matt McNamara for pointing this out to me!)

I could also question why we have this innate need to make a splash, and whether it's universal or limited to a subset, whether this desire to make a splash can itself be taught (making the biggest ripple of all), or even whether that's something anyone might want to do.

And I'm sure there are a dozen other interesting directions.  But the direction I want to go with is to expand on an idea I wrote about previously, regarding the relationships between consciousness and tools.

Now, you can read all about it in a rambling essay even longer than this one.  But in short: I feel what separates humans from all other creatures is our exceptional ability to invent and use tools.  (Yes, other animals do this too, but I think it's safe to say we're the best.)  Furthermore, it's my belief that using a tool doesn't merely extend your reach, it physically -- in the most literal sense -- extends *you*.  When you hold a hammer in your hand, the hammer is every bit as much a part of you as your hand, your spleen, or any other tool.  We are in fact nothing but a collection of tools, all under some miraculous and ambiguous and sort of "conscious" control.

Building on this notion, with tools being literal extensions of yourself, what are the ideas you have, the books you write, or the stone statues you build  -- other than more tools?  Sure, like any tools they're not all equally effective at achieving whatever intent you set the tool upon.  But the right tools, maintained in the right context, might continue to be effective even after your body dies.

And if the tools are literally a part of you, and if those tools continue to achieve your desired effects long after you die, did you really die at all?

So long as there's some part of you -- some tool of you that's still functioning -- you're still alive.  And if "life" is measured as the scope of your tools, is it possible that you might grow *more* alive over time?

After all, the Buddha was just one guy in his day.  But now he is a vast organization of billions of people.  Maybe the secret to eternal life isn't through the supernatural, ascending to Heaven or Nirvana.  Maybe it's just leaving a part of you -- the best part of you -- behind as your body degrades,  such that it can grow eternally, freed from its confines?

Net Neutrality and In Flight Wireless

I'm on a Delta flight equipped with GoGo in-flight wireless, and they
have an interesting campaign going on: free Twitter for all. It's a
pretty slick campaign, but I think it raises interesting net neutrality
issues because, in essence, Twitter is paying for preferred access.

Personally, I'm ok with it: I don't have any problem with an internet
carrier creating a "fast lane" that either side of the connection can
pay extra to use, so long as the lane is made equally available to all
comers, on the same terms.

That's not to say that all publishers are required to accept
advertisements from all organizations -- I'm not excited about it, but I
wouldn't outlaw GoGo from accepting an ad for the Catholic Church on the
GoGo website while refusing an ad for atheism. As a publisher, GoGo can
choose what message to put on its own website, even if that message is
discriminatory.

But as a communication medium, GoGo shouldn't be allowed to grant free
access to websites hosted by the Catholic Church, while simultaneously
refusing the same deal to an atheist organization.

I understand it's a tricky and totally arbitrary line, but I think
content-discrimination should be legal (to enable free speech), while
communications-discrimination should be outlawed (to prevent restriction
of free speech).

I think too much of the NN debate is wrapped up in thinly-veiled
anti-corporate fearmongering (the little guys need to be protected from
the big guys!!). Even if it's a fine goal (and I don't think it is), it
doesn't seem to have any Constitutional or free/fair-market basis that I
can see.

Net neutrality shouldn't be about mandating equal performance, but equal
opportunity.

I'm curious: what do you think?

-david

avg(exception) = nothing

I'm on this mailing list where everybody is suddenly raving over this new book "The Information".  Amazon describes it as:

In a sense, The Information is a book about everything, from words themselves to talking drums, writing and lexicography, early attempts at an analytical engine, the telegraph and telephone, ENIAC, and the ubiquitous computers that followed. But that's just the "History." The "Theory" focuses on such 20th-century notables as Claude Shannon, Norbert Wiener, Alan Turing, and others who worked on coding, decoding, and re-coding both the meaning and the myriad messages transmitted via the media of their times. In the "Flood," Gleick explains genetics as biology's mechanism for informational exchange--Is a chicken just an egg's way of making another egg?--and discusses self-replicating memes (ideas as different as earworms and racism) as information's own evolving meta-life forms. Along the way, readers learn about music and quantum mechanics, why forgetting takes work, the meaning of an "interesting number," and why "[t]he bit is the ultimate unsplittable particle." What results is a visceral sense of information's contemporary precedence as a way of understanding the world, a physical/symbolic palimpsest of self-propelled exchange, the universe itself as the ultimate analytical engine. If Borges's "Library of Babel" is literature's iconic cautionary tale about the extreme of informational overload, Gleick sees the opposite, the world as an endlessly unfolding opportunity in which "creatures of the information" may just recognize themselves. --Jason Kirk

I don't know about you, but I can't piece together anything meaningful other than "Wow wow wow!!!!!"

I'm really curious to hear if anybody who reads the book actually changes their opinion on anything as a result.  I fear a lot of these books just have "something for everybody" such that you walk away feeling stronger in your belief no matter what that belief is.  Sorta like MSG: it makes everything taste better, without having any flavor by itself.  I'd love to hear somebody say "I've held this passionate belief my entire life, but as a result of reading this book I've changed my mind."


Somewhat related: I spoke at a conference recently, and the other presenters had these really incredible, well-researched, inspiring presentations.  But I realized afterwards that a major problem with so many of these broad trend analyses is that they lack statistical relevance.

For example, I find everybody talks about Twitter, Facebook, Google, and a half-dozen mega names -- and then draws inferences based on them.  But that's equivalent to "averaging the exceptions", which just isn't a valid technique: the problem with outliers is they're *outliers* and by definition defy the baseline trends.  They are too few and too different to be summarized in any meaningful way.

Rather, I think these business-fad, pop-psychology, averaging-the-exception techniques just create hysteria and excitement where perhaps none is really warranted.  Even if they're 100% "accurate", they're so incredibly imprecise as to be non-actionable.  Said another way, even if you're totally right in predicting the wave, if you can't say with any certainty when it will hit or how big it will be, it's not worth getting excited about.

Don't get me wrong, hysteria and excitement are great ways to sell books or promote products.  But as the people being sold and promoted *to*, it's in our interests to take these fantastic claims -- which seem to grow more fantastic and more frequent all the time -- with a corresponding amount of skepticism and composure.


iPad is a Handbag

I'm here at the Kynetx Impact conference (come see me talk tomorrow at 11am!) learning about the "live web" through a series of keynotes.  One of those keynotes will be moderated by Robert Scoble, and he happens to be sitting 5' to my left as I type these words.  A few minutes ago I was labeled a "curmudgeon" (I didn't know that word was used anymore!  but I managed to spell it right on first shot, so go me) for being an iPad skeptic.  Robert took it upon himself to explain to me why the iPad is so incredible... and alas, it didn't take.  But while he was trying, I think I learned *why* I'm an iPad skeptic: because it's primarily a fashion accessory, and I'm not fashionable.

Now that's a bold statement.  (The first one, not the second.)  You might say "but it clearly has better workmanship than any competitor!" and "it does all sorts of genuinely helpful things!"  And those statements are definitely true.  But the same could be said of a haute couture handbag -- many of which cost vastly more than an iPad despite doing so much less.

I've been toying with this notion for a while, but it really rang true for me as Robert was trying to extol the virtues of the iPad -- clearly incredulous that I wasn't blown away.

He brought up an app that shows a ton of videos in a huge virtual wall: an impressive work that looks super cool for browsing random videos.  But I never do that; I probably look at a video sent to me by some friend maybe once a week, probably less.  I'd never ever sit down and just randomly browse videos.

Then he brought up Wolfram Alpha, showing the periodic table in an amazingly gorgeous, exquisite way.  But I haven't needed a periodic table since high school.

Then there was the cool news reader, this neat app for learning fiddles, etc.  All of them are really neat, fantastic executions of their concept.  Executions that simply couldn't be done on any other device -- executions that are made *possible* by the iPad.

But they're incredible executions of concepts that range from mildly to totally uninteresting.  Given that, I just couldn't get excited about them, and that was clearly not the reaction he intended.

At this point we highlighted that I'm incredibly far off the edge when it comes to my habits.  I don't watch TV, I don't have a car, I work more or less continuously, and when I'm not on my absurdly-small laptop I'm drinking wine with my wife and walking my beagle.  I carry a Palm Pre (which replaced my Sidekick), I use Verizon Broadband (and Ricochet back in the day), etc, etc.  He said "you make me look mainstream".

Given all that, it's possible that I'm just so overworked and socially deficient that I simply cannot conceive of this value that is universally recognized by everyone else.  It's possible.

But I don't buy it.  I think a simpler explanation is that I'm simply not fashionable.

I think when most people see an iPad, they see this incredible world of possibilities -- and they want to participate in that world, even if they don't personally use those possibilities in any meaningful way (or even if many of those possibilities don't actually exist yet).  And I actually think that feeling of participation is akin or even equivalent to fashion.

For example, Robert said Android wouldn't compete with iPhone until it had 10,000 *good* apps.  But then he acknowledged that virtually everyone is always playing Angry Birds, or one of a tiny set of other apps.  So I don't think the 10K app collection is important because people actually use those apps.  I think it's necessary to create this image of endless possibility -- without that, the suspension of disbelief that's so critical to fashion just isn't there.

Similar to fashionable clothing.  A common theme is they always use the best materials, the highest quality stitching, the most exotic product placements and high-class endorsements, etc.  I think all of these are necessary to create this image of supreme quality that justifies a 10x purchase price (or 10x brand loyalty) despite only being marginally better in any measurable way.

Indeed, when I look back on my extreme product choices in the past, they actually *were* the best.  I was doing email and browsing real webpages on my phone in 2002.  I had wireless broadband in 2000.  Compared to any Mac laptop, mine has a longer battery life, higher resolution screen, a smaller form factor, and built-in Verizon Broadband, etc.  They were genuinely better than the other options at the time, but those options just weren't fashionable.

But my point isn't to tout my awesomeness (though I could do that all day).  Nor is my point to say the iPad isn't awesome (it is), or that tablets aren't superior to laptops for certain use cases (they are, though in far fewer cases than is usually claimed).

Rather, I'm saying the iPad -- like any fashion accessory -- isn't nearly as awesome as people say it is, and most of its differentiating value over other tablets is simply the strength of Apple's brand in telling a story of infinite possibilities, most of which don't actually matter, and many of which don't yet exist.

Google testing new amazing knowledge feature?

I haven't seen this mentioned anywhere, but see screenshot below.  I was curious when PayCycle was founded, so I searched "paycycle founded".  Google apparently saw enough similarity in the search results that rather than just giving me the links, it gave me *the answer*. Especially interesting because not all of the answers were right (eg, the second search result is clearly wrong).  Pretty amazing!

[Screenshot: Google search results for "paycycle founded", with the extracted answer shown above the individual results]

-david
Founder and CEO of Expensify
Follow us at http://twitter.com/expensify

McDonalds isn't the problem; we are.

Everybody knows obesity is a problem, and that it's inflating medical costs that are gradually bankrupting our nation.  But I think most people have a misguided sense that obesity is the result of fast-food using poor-quality ingredients and somehow tricking people into eating them.  For example, I saw this article on BoingBoing talking about the uproar over the high calorie count in McDonald's new oatmeal.

Basically, it has "as much sugar as a Snickers bar and as many calories as a hamburger".  That sounds really alarming, but it made me wonder: how many calories does oatmeal normally have?  What could McDonald's have possibly done to take something good and make it bad?  So I did some research on oatmeal, only to eventually find that the BoingBoing commenters had done a lot more.

To make a long story short, the McDonald's oatmeal is totally fine.  The oatmeal itself is mostly normal, and most of the "extra" calories really come from them adding a bunch of dried fruit (which is hardly an atrocity) and adding brown sugar and cream by default (which is commonly done at home anyway).  So... false alarm.

Again and again I think people overreact when it comes to the "quality" of fast food.  Yes it's made fast and in high volume, but even with the freshest possible ingredients on hand I think the results would come out looking, tasting, and nourishing about the same.  For example, In-N-Out arguably uses the freshest ingredients of any fast-food burger joint, and to compare:

McDonald's hamburger = 100g serving size, 250 calories, 9g fat
In-N-Out hamburger = 243g serving size, 310 calories, 10g fat

(Incidentally, the standard In-N-Out burger comes with a spread that adds another 80 calories and 9g of fat.  But I'm going with the mustard/ketchup option to compare more equally to McDonald's.)

So the McDonald's burger has 2.5 calories/g, while the In-N-Out burger has only 1.3 calories/g.  But both have about the same fat.  What gives?  My sense is the difference has nothing to do with the quality of the ingredients, and everything to do with In-N-Out putting heavy, water-filled veggies (lettuce, onion, tomato) on while McDonald's doesn't.  I don't have the data in front of me, but I bet if you took all the veggies off the In-N-Out burger (or added an equal amount of veggies to the McDonald's burger) -- basically assembling them the same way -- you'd get largely the same results.

In other words, both use more or less the same quality ingredients, with essentially the same nutrition, despite McDonald's being demonized as the culinary antichrist while In-N-Out is hailed as some kind of organic savior.

In my opinion, the problem with McDonald's (or any other fast food chain) isn't that their food is so much higher calorie than if you were to fix it yourself.  Rather, the problem is that they cater to a customer base that is actively looking for high-calorie, high-fat food.  Said another way, given a fully-stocked kitchen (and the willpower and expertise to actually cook), I wager most people would fix themselves something as bad as or worse than McDonald's, intentionally.

This is somewhat reinforced by this study that suggests that NY's "label the calories as big as the price" plan is failing to produce results.  I'll admit, I thought the plan was a good one, and I'm disappointed it didn't work.  This suggests people know they're eating crap food (even if composed of reasonable-quality ingredients), but simply don't care.


So where am I going with all this?  I think the solution can't just demonize the quality of fast food ingredients (because they're fine) or emphasize how many calories people are buying (because they don't care).  And it's not enough to highlight the long-term effects of those decisions; those are already pretty apparent and non-motivational.

Rather, we need some way to identify people who are on a bad long-term path and create short-term consequences.  And by "we need" I mean "given that our country is being bankrupted by vast medical insurance programs with out-of-control cost increases driven by health epidemics such as obesity, taxpayers should demand" that something be done to prevent people from taking actions that leave us on the hook for massive medical bills down the road.

Similar to how people with good driving records and safe-driving courses get lower insurance premiums, I think we should do the same for Medicare/Medicaid.  Create programs where people can earn better care by making healthy choices.  Granted, healthy people need less medical care so it doesn't make sense to give them *more* of it as a reward for needing *less* of it.  But what if healthy people got tax credits and prioritized non-emergency care?  Shorter waits, nicer rooms, more choice.  Everybody still gets the same quality of medical attention (for better or worse), but people who actively maintain healthy lifestyles are rewarded with status, convenience, and comfort.

Furthermore -- and this is the most important point -- it should be made very clear to you which "service tier" you're in at all times, creating an *immediate* positive consequence for healthy actions that normally only have long-term effects.  So everybody who does nothing is lumped into the "standard" tier; you needn't do anything special.  But you should be constantly encouraged to upgrade to the "premium" tier by just demonstrating healthy decisions.  How exactly that is done is obviously a big question, but some ideas:

- Get credit for healthy-eating, healthy-lifestyle training courses
- Demonstrate participation in preventative care programs
- Get regular checkups to certify you haven't been abusing drugs
- Wear an electronic patch that measures caloric intake and expenditure
- Join a gym and hire a certified trainer who reports activity to your doctor

And so on.  Every problem has a ton of complications, don't get me wrong.  And it'll be a horribly political process to decide what's "healthy".  But perhaps something like this can start to gradually steer us in the right direction?


Admittedly, that won't be enough.  Not even remotely close to what's needed to actually get things under control.  But it might be a step in the right direction of preparing people to resume individual accountability for their health given we probably have little choice but to vastly scale back coverage (perhaps starting with reducing end-of-life care, which is estimated to take roughly 30% of Medicare's budget), followed by probable rationing of key medical resources.  (Read here for a hyperbolic freakout session about kidney rationing, which obscures a few good ideas under a heap of total garbage.)

Ultimately, I'm all for reducing government involvement in a lot of things.  But it will mean *reducing*, not eliminating.  I think we should provide a *minimum* level of universal healthcare, recognizing that it's simply not possible to give maximum care to everybody.  And we should eliminate barriers that prevent private insurance health plans from operating at maximum competitive effectiveness.

At the end of the day, very expensive or end-of-life treatment is a luxury for the rich, just like helicopters and fast cars.  Whether we like it or not, that's just the way it is.  But like helicopters and fast cars, they're terrible investments on which only the rich should waste their money.  Instead, we should focus on expanding coverage of inexpensive, early-life care to everybody because it's an investment in society that returns dividends to us all.  And that's what the government is there to help us do.

-david

Egypt's Internet Blackout and How to Build a Decentralized Twitter

So Egypt has its internet back, but I still can't figure out precisely
what was gone when it was gone. Can you help? So far as I can determine:

- Cellular service was shut off only in certain regions (eg, at the sites
of the major protests) while left on elsewhere in the country.

- Landlines continued functioning everywhere.

- I've seen no sign that domestic internet was affected. For example,
it's possible that all homes and businesses still had live network
connections that simply weren't resolving DNS, or perhaps *were*
resolving DNS internally. It's even possible that local DNS caches were
resolving completely normally -- even for international domains --
except there were no routes to the IP addresses to which they resolved.

- Indeed, even if all ISPs turned off all broadband everywhere, there
would still be large pockets of functioning LANs (universities, housing
complexes, hotels, etc).

- The consequences of total domestic internet blackout are very severe.
I don't see any sign that government and critical services run their
own network (though the military might), nor any sign that the internet
was selectively disabled for only individuals and businesses while
sparing hospitals, police stations, power plants, etc. Furthermore, I
haven't heard that any critical services lost domestic internet or
telephone access, even though I imagine that would be a very interesting
story if true.

I think understanding what actually happened is important such that we
can plan and act in a way that is optimized for the real world, rather
than a (potentially) unreasonable worst-case scenario that never
actually occurs. I'd love your help in fact-checking the above
assumptions by providing evidence (links) to the contrary.

Regardless, none of this really changes my core thesis, which is that
whatever solution is built must:

- Somehow become very popular and widely deployed *before* the event,
requiring substantial "added value" even when the internet is accessible.

- Define "added value" in terms that the average person cares about,
which is *not* anonymity, security, privacy, etc. Rather, it needs to
be speed, convenience, reliability, and so on.

- Take an approach of "adding to" rather than "replacing" whatever
more-popular alternatives are already in place (twitter, aim,
bittorrent, skype, etc) so as to ensure users sacrifice nothing by using it.

- Take best, simultaneous advantage of whatever resources are available,
at all times. If there is BlueTooth, use it. If there is a functioning
LAN, use it. If there is a functioning sitewide/domestic/international
WAN, use it. And so on.

- Anticipate the imminent failure of any of these methods at any time by
proactively establishing fallbacks (eg, a DHT in case DNS fails, gossip
in case the DHT fails, sneakernet in case wireless fails, etc.; see the
sketch after this list).

- Require no change in user behavior when one or more methods fail. So
the interface used to tweet, fileshare, make a call, etc -- all these
need to work the same for the user (to the greatest possible degree)
irrespective of what transport is used.

- Work on standard, unaltered consumer hardware (no custom firmware, mod
kits, jailbreaks, etc) with standard installation methods (app stores,
web, etc).

- Be incredibly easy for people to use who aren't tech-savvy. This
means spending 10x more time testing and refining the usability of the
system than actually developing sexy esoteric features.
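
Here's a minimal sketch of that fallback behavior.  The transports are stubs standing in for real implementations; the point is the ordering and the automatic, user-invisible fallback:

    def wan_send(msg):
        raise IOError("internet unreachable")  # pretend the WAN is down

    def lan_send(msg):
        print("sent over LAN:", msg)
        return True

    def bluetooth_send(msg):
        print("queued for bluetooth:", msg)
        return True

    def sneakernet_queue(msg):
        print("written to USB sync folder:", msg)
        return True

    TRANSPORTS = [wan_send, lan_send, bluetooth_send, sneakernet_queue]

    def send(msg):
        # Same call regardless of which layers are alive; the user never has
        # to change behavior when one of them fails.
        for transport in TRANSPORTS:
            try:
                if transport(msg):
                    return True
            except IOError:
                continue  # that layer is down; fall through to the next
        return False

    send("hello")  # falls back to the LAN after the WAN "fails"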

I really do think this is a relatively easy thing to build (at least, in
a minimal form), using existing hardware and proven algorithms. I'd
suggest something like:

1) Start with an open-source twitter application. Google suggests this
one, though I haven't personally used it:
http://getbuzzbird.com/bb/

2) Add a central server, just to make it really easy for nodes to
communicate directly. (We'll replace this with a NAT-penetrating mesh
*after* the much more difficult task of getting this popular and widely
deployed.)

3) When you start up, connect to this central server. Furthermore,
whenever you see a tweet from anyone else using this client, "subscribe"
to that user via this central server. (Eventually you'd establish a
direct connection here, but we'll deal with that later.)

4) Every time you tweet, also post your tweet to this central server,
which rebroadcasts it in realtime to everyone subscribed to you. Voila:
we've just built an overlay on top of twitter, without the user even
knowing. All they will know is that tweets from other BuzzBird users
for some reason appear instantly. And the next time twitter goes down,
all tweets between buzzbird users will continue functioning as normal.
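
For the curious, here's a minimal sketch of what that step-4 relay could look like, assuming a toy JSON-lines-over-TCP protocol (this is not anything BuzzBird or Twitter actually ships):

    import asyncio, json
    from collections import defaultdict

    subscribers = defaultdict(set)  # username -> connections following them

    async def handle(reader, writer):
        try:
            async for line in reader:  # one JSON message per line
                if not line.strip():
                    continue
                msg = json.loads(line)
                if msg["type"] == "subscribe":
                    subscribers[msg["user"]].add(writer)
                elif msg["type"] == "tweet":
                    # Rebroadcast in realtime to everyone following the author.
                    data = (json.dumps(msg) + "\n").encode()
                    for w in list(subscribers[msg["user"]]):
                        try:
                            w.write(data)
                            await w.drain()
                        except ConnectionError:
                            subscribers[msg["user"]].discard(w)
        finally:
            for subs in subscribers.values():
                subs.discard(writer)  # drop this connection everywhere
            writer.close()

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", 8765)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())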

5) Then start layering features on top of this, focused on making a
twitter client that is legitimately the best -- irrespective of the
secret overlay.

6) For example, add a photo tweeting service. Publicly it'll use
twitpic or instagram or whatever, so all other users will see your
photos just fine. But buzzbird users will broadcast the photo via this
central server, faster and more reliably than the other services, as
well as locally cached for offline viewing. Repeat for video, files,
phone calls, etc.

7) At some point when you've established that people actually like and
use buzzbird with its very simple and fast central server, THEN start
thinking about P2P. (Seriously, do NOT think about P2P before then as
it'll only slow you down and ensure your project fails.)

8) To start, keep the central server and just use it as a rendezvous
service for NAT penetration. So still centrally managed, but with
direct P2P connections. Then when you come online, you immediately try
to establish NAT-penetrated direct connections with everyone you're
following. This of course immediately presents you with challenges: do
you need to connect to *everyone* you follow? If only some, which? Take
these problems one at a time knowing you can always fall back on the
central server until you perfect it. In other words, the goal is to
remove the central server, but you can take baby steps there by weaning
yourself off of it.

9) Similarly, add BlueTooth, ad-hoc wifi, USB hard drive sync (aka
"sneakernet"), etc. These would all be presented to the users in terms
of real world benefit ("Keep chatting while on the airplane!" "Share
huge files with a USB flash drive!" and so on.), while simultaneously
refining the tools that they'll use during a partial or total internet
blackout.

10) Eventually when you've figured out how to move all functions off the
central server -- the nodes start up, establish direct connections to
some relevant subset of each other, build a DHT or mesh network, nodes
that can relay for nodes that can't, etc -- the last function will be
"how does a newly-installed node find its first peer?" This is called
the "bootstrapping" problem, and is typically done with a central
server. But it needn't be done with *your* server. Just use twitter
for this: every time you start, re-watermark your profile image with
your latest IP address (or perhaps put it into your twitter signature
line, or location, or something). This way the moment you do your first
tweet, everybody who sees it will try to contact you (or some subset, so
as to not overload you). Then you can turn off your own central service
and just use Twitter's.
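
As a toy illustration of that bootstrap trick, here's how you might stash an address in a PNG text chunk using Pillow.  Note this is the naive version: Twitter re-encodes uploaded images, so a real build would need a watermark that survives recompression, but the idea is the same:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def stamp_address(src_path, dst_path, addr):
        # Re-save the profile image with this node's current address embedded.
        img = Image.open(src_path)
        meta = PngInfo()
        meta.add_text("peer-addr", addr)  # hypothetical key name
        img.save(dst_path, "PNG", pnginfo=meta)

    def read_address(path):
        return Image.open(path).text.get("peer-addr")

    # Assuming avatar.png exists locally:
    stamp_address("avatar.png", "avatar_stamped.png", "203.0.113.7:8765")
    print(read_address("avatar_stamped.png"))  # 203.0.113.7:8765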

11) When the "dark days" come and twitter goes offline, your nodes won't
even notice. They'll continue to establish their DHT with whatever
subset of the network is interconnected, relay for each other, etc. If
the internet is totally gone, your users will use bluetooth, wifi,
sneakernets. They'll be ready because you *trained* them to survive on
their own *before* they needed to, rather than just handing them a knife
assuming they'll know how to use it when the time comes.

Really, the biggest challenge in all of this is whoever gets to step (6)
will immediately be acquired by Twitter for an enormous sum of money.
But hopefully that person will, with their new-found wealth, continue on
to (7) and beyond.

This is a doable thing. One motivated person could do most if
not all of this. Is that person you?

-david

What we should build for the Egyptian (and other) protesters

Egypt appears to have cut all internet connectivity with the rest of the
world in an attempt to quell its use in organizing protests. The only
reason this makes any sense is if the tools used to organize the
protests (Twitter, Facebook, Gmail, etc) are hosted outside Egypt.

To this you might say "Let's just host protest-organizing tools on
servers inside protest-likely nations in anticipation of them using this
strategy again." But that won't work because odds are the government
would just seize all protest-organizing servers within their borders.

So the only protest-tools that will continue to work reliably are those
that continue to work without access to the outside world, without
relying on locally-hosted servers, and *without even relying on the
internet at all*. It's a tall order, but here's how I'd do it.

1) Recognize that this service needs to be used in the good days, such
that there is adequate distribution already in place when the bad days
happen. THIS IS THE HARDEST PART. I say this in all caps because this
is why no meaningful system like this exists today: the people most
likely to build it are more obsessed with esoteric technical problems
than with solving the issues that actually matter in the real world.
Asymmetric, anonymized, mesh-distributed, onionskin-routed communication
doesn't mean anything if nobody uses it. So before even thinking about
the technology, we need to think how to make it relevant to users who
*aren't* protesting (yet).

2) At an absolute minimum, it needs to be no worse than the existing
alternatives. So if it's going to replicate Twitter, it needs to be at
*least* as good as Twitter, otherwise everybody will use the *real*
Twitter (until it's turned off by their local neighborhood dictator).
One way to be better than Twitter is to actually be better than Twitter.
Good luck with that. Another way is to just make your tool post to
Twitter. I think that's a much better idea: if this tool (let's call it
"anoninet" just for kicks) offers some Twitter-like functionality, it
should be completely compatible with the real Twitter in the
99.99999999999% of situations where the real Twitter is actually
available. Same goes for Facebook, Flickr, etc.

3) Ok, so anoninet's primary value in "good times" is starting to take
shape: it's a one-stop-shop to post to all your social networks. So you
install this thing, type in all your passwords (You could store them
locally in some encrypted keychain decrypted by a master password, but
that's the sort of technomasturbation thinking that obscures real-world
requirements; in reality just store it unencrypted because those who
don't care don't care, and those who do should really just encrypt their
whole hard drive), then you can post status updates, photos, videos, and
everything will automatically go to the right place. Indeed, before you
even think about making this into some sort of resilient
protest-enabling tool, you should make this the best possible
social-network posting tool. (Because if it's not that, then nobody
will have it installed when they want it most.) I'd suggest emphasizing
how this thing works even with unreliable internet, essentially letting
you queue up everything locally and it does background uploading as the
network becomes available. Similarly, it downloads everything locally
for offline reading. Odds are your protest-likely environment has
shitty internet to start, so this feature will likely have immediate
value. Add in really good support for USB-connected devices (cameras,
videocams), and basically present it as the single best way to do social
networking in a nation with shitty internet.
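
A bare-bones sketch of that queue-locally, upload-later behavior (the upload callback is a stand-in for the real per-network API calls):

    import json, os, time

    QUEUE_DIR = "outbox"

    def queue_post(text, networks=("twitter", "facebook")):
        # Commit the post to local disk immediately; uploading happens later.
        os.makedirs(QUEUE_DIR, exist_ok=True)
        item = {"text": text, "networks": list(networks), "queued_at": time.time()}
        path = os.path.join(QUEUE_DIR, "%d.json" % int(time.time() * 1000))
        with open(path, "w") as f:
            json.dump(item, f)

    def flush_queue(upload):
        # Try each queued post in order; upload(item) -> bool stands in for
        # the real per-network posting calls.
        if not os.path.isdir(QUEUE_DIR):
            return
        for name in sorted(os.listdir(QUEUE_DIR)):
            path = os.path.join(QUEUE_DIR, name)
            with open(path) as f:
                item = json.load(f)
            if upload(item):
                os.remove(path)  # only delete once the upload succeeded
            else:
                return  # network is down; leave the rest for next time

    # Call queue_post() from the UI, and flush_queue() on a timer or whenever
    # connectivity returns.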

4) Step 4 is to succeed with step (3). Don't even think of anything
else until you've done that. Seriously, it's a waste of your time and a
disservice to your users. (3) needs to be totally nailed and immensely
popular before anything else matters. I'd say something like 10% of
your target population needs to be using it before you consider continuing.

5) Once you've got huge distribution of your client-side
social-network-optimizer, then you can start to raise the bar. Because
it's targeted to environments that have expensive and/or unreliable
internet, P2P starts to sound interesting. Throw in a network-localized
DHT and build out a distribution network that "rides" on these other
networks. So every time they post to Twitter, Facebook, Flickr,
YouTube, or whatever -- they're also posting to anoninet. And when
another anoninet user is reading your Twitter stream, somehow they detect
each other and rather than getting the data from Twitter (for example),
they get it directly via some localized P2P connection. Present this to
the user as faster, more reliable, and cheaper than getting it from the
*real* YouTube.

6) Quietly encrypt everything and tunnel over commonly-used ports.
Don't talk about this, just do it. Users don't care until they do, and
by then it's too late.

7) Ok, so at this point we have wide distribution of a very popular
social networking tool that uses a localized P2P mesh as an optimized
fallback to the major global tools. Its major advantage is it works
over networks that are slow, unreliable, or expensive. This'll save you
in the Egypt case; these users would continue using the tools they
already use, to talk to the people they already talk with, and
everything will continue functioning as normal. They won't be able to
talk with the rest of the world, but they *will* be able to talk amongst
themselves, which is the important thing. Furthermore, because it's all
P2P, there are no servers to seize, and because it's all encrypted over
common ports, it's indistinguishable from all other encrypted traffic.

8) However, if this had existed in Egypt, odds are Egypt would have just
shut down the internet, period. If a dictator is willing to kill you, odds
are they wouldn't blink at turning off your email. So how to make this
work without internet? The answer is: make it incredibly easy to batch
and retransmit data like Fidonet back in the day. So when shit is
*really* going down, you whip out your favorite 4GB, 32GB, or 640GB USB
drive and just sync your local repository (remember how everything was
conveniently cached locally for fast offline access?) with the device.
Optimize it to sync the most popular content first, basically ensuring
that the most interesting/important messages are also the most widely and
redundantly distributed.
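
The sync-by-popularity logic is simple; here's a rough sketch (the repo mapping and follower counts are assumptions about how the local cache is indexed):

    import os, shutil

    def sync_to_drive(repo, mount_point, capacity_bytes):
        """Copy the local cache onto a USB drive, most-followed content first,
        until the drive fills up. repo maps item_id -> (follower_count, path)."""
        used = 0
        by_value = sorted(repo.items(), key=lambda kv: kv[1][0], reverse=True)
        for item_id, (followers, path) in by_value:
            size = os.path.getsize(path)
            if used + size > capacity_bytes:
                continue                  # doesn't fit; a smaller item might
            shutil.copy(path, os.path.join(mount_point, item_id))
            used += size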

9) Finally, this needs to spit out an installable copy of itself to
whatever removable media is available. This way, when the shit starts to
*really* go down and people realize the true value of this system, it can
spread fast to the people who need it.
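
Sketch-wise this is the easy part -- assume the app ships with a self-contained installer and copies it to any removable media it sees (installer.bin and the mount point handling are assumptions; real mount detection varies by platform):

    import os, shutil

    def replicate(mount_point):
        """Drop an installable copy of the app onto removable media so it can
        spread hand-to-hand when the network is gone."""
        installer = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                 "installer.bin")   # assumed to ship with the app
        shutil.copy(installer, os.path.join(mount_point, "anoninet-installer.bin"))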

Voila. A tool that supports communication amongst protesters even in
the face of total internet blackout. Some other random thoughts:

- Ideally it'd piggyback on existing credentials. So when you install
this thing you don't need to think "I'm creating a new account".
Rather, you just install this thing, type in your Twitter username and
password, and whatever giant asymmetric keypair it creates internally is
just some nameless thing associated with that Twitter account. (And you
might have multiple.)
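
A rough sketch of that enrollment step, using the Python cryptography library (store_keypair() and dht_publish() are made-up stand-ins for local storage and the DHT API):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    def enroll(twitter_handle):
        """Generate a nameless keypair and quietly associate it with the user's
        existing Twitter identity -- no new account to think about."""
        private_key = ed25519.Ed25519PrivateKey.generate()
        store_keypair(twitter_handle, private_key)            # kept locally, never uploaded
        dht_publish(twitter_handle, private_key.public_key()) # public half goes to the DHT
        return private_key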

- This thing needs to broadcast itself via existing networks in a
totally transparent way, so if we're both users and I read your Twitter
stream, I should know you're also a user without you ever telling me.
The first way that comes to mind is this thing could watermark your
profile image with maybe a digital signature (or perhaps just jam it
into some sort of extra field in the image). Then when I follow you, my
client sees the watermark, reaches out to the DHT, sees that you're
signed in (or not), and establishes a NAT-tunneled P2P connection directly.
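
The "extra field" variant is almost trivial; here's a sketch using Pillow to stuff a key fingerprint into a PNG text chunk (the anoninet-key field name is made up):

    from PIL import Image, PngImagePlugin

    def watermark_profile(image_path, key_fingerprint, out_path):
        """Jam a key fingerprint into a spare field of the profile image."""
        img = Image.open(image_path)
        meta = PngImagePlugin.PngInfo()
        meta.add_text("anoninet-key", key_fingerprint)   # hypothetical field name
        img.save(out_path, "PNG", pnginfo=meta)

    def read_watermark(image_path):
        """Follower's client: check the image for the field before hitting the DHT."""
        return Image.open(image_path).text.get("anoninet-key")

One caveat: most sites recompress profile images on upload, which would strip a text chunk -- so a robust version probably needs a true steganographic watermark that survives re-encoding.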

- Social networks are particularly good for this sort of architecture as
they map well to the "publish/subscribe" model. This works easily on a
P2P network (you register yourself with the DHT by name and
keyword/hashtag, and then when you post, everybody who is
"following" you or a particular hashtag gets your data), and it also
creates an implicit "value" metric for use when synchronizing data in
"sneakernet mode" (publishers/hashtags with a high follower count are
assumed to be more valuable and thus beat out less-popular content).
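
In sketch form, the whole pub/sub layer is just a few DHT operations (dht_put(), dht_get(), and notify() are made-up stand-ins for the DHT and transport APIs):

    def follow(subscriber_key, author_key, hashtags=()):
        """Record interest under the author's key and each hashtag."""
        for key in (author_key, *hashtags):
            dht_put("subs:" + key, subscriber_key)

    def publish(author_key, hashtags, post):
        """Fan a post out by name and hashtag; push to any live followers."""
        for key in (author_key, *hashtags):
            dht_put("posts:" + key, post)
        for sub in dht_get("subs:" + author_key):
            notify(sub, post)               # live followers get it immediately

    def sneakernet_value(author_key):
        """Implicit value metric: follower count decides sync priority."""
        return len(dht_get("subs:" + author_key))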

- This sort of system actually isn't that useful to terrorists,
criminals, drug-dealers, and so on because it's designed for mass public
communication (not individual private communications). Granted, nothing
in this protects the individual from being targeted, but that's an
entirely different problem. (And I wager one that could be layered on
top of this in a straightforward manner.)

In all honesty, this isn't that hard a thing to build. One dude could
do it. I could personally do it, and know several others who could as
well. But I'm busy. Hopefully a better person than me with more time
on their hands will pick up on this and do what needs to be done. The
world will thank them for it, though its dictators won't.

-david
My blog (including this post) is at http://quinthar.com
Follow me at http://twitter.com/quinthar

From the archive: David's Voluntary Payment Plan

This one is from 2008.  I was asked something along the lines of "Well if you're so smart, how would you fix the music industry?"  Here's my answer:

http://quinthar.com/DavidsVoluntaryPaymentPlan.html

David's Voluntary Payment Plan

David Barrett

dbarrett@quinthar.com

2008/3/20


Abstract

This plan recommends creating “music registrars” to authoritatively manage song metadata in a fashion similar to how domain registrars authoritatively do the same for domain names. Artists (or their representatives) upload songs to registrars, who in turn check their waveform fingerprints against a master database of all known songs. If the song has already been registered by another owner, a conflict resolution process is started. Otherwise, the song is transcoded to an MP3 and tagged with a variety of metadata (artist and song name, artist website, etc), including “payment protocols” that enable fans to support the artist in a standardized way. iPods and other MP3 players are gradually outfitted with integrated support for various payment protocols, as well as methods for receiving artist communication or learning of and purchasing artist merchandise, concert tickets, and so forth.

I. Example of Operation

First, here's a quick walkthrough of how the system would be used in common operation:

A. Adding a new song

Alice, an independent musician, selects from one of several music registrars, creates a free account, uploads her track in the FLAC format, assigns it a name, optionally organizes it in one or more albums, and is done. The entire operation is free, takes less than 10 minutes, and requires no personal information beyond an email address.

B. Downloading a song

Bob, a music aficionado, browses a variety of free music outlets for new songs. One of those locations has an active online community around indie music, and the forum is buzzing around a new musician, Alice. The forum links to a page where Alice's music can be downloaded -- he clicks the link, chooses the format and bitrate, and downloads the MP3 for free. Though the website allows low-quality 128Kbps versions of the song to be downloaded or streamed straight from the server, for cost reasons it only allows 256Kbps and FLAC versions to be downloaded via a P2P network. He's all about quality, so he whips out his favorite P2P application and downloads the FLAC.

C. Listening to a song.

When the download completes, Bob copies the file to several places -- his laptop, his home stereo, his iPod, his phone -- all of which support the completely standard, unprotected audio format.

D. Supporting Alice

Bob decides that he really likes Alice's music and wants to see more of it get played. He has several ways to help that happen:


  • One way is to go back to the website where he downloaded the music in the first place. There he finds a small (but growing) forum where Alice fans discuss her music, post links to other music by Alice, make recommendations, and so on. Furthermore, there's a quick note by Alice herself saying "Hi, I'm trying to raise $1000 to fund my next album, please help me out!" Bob sees she's up to $950 right now. He's got a few options of how to help. One is a simple cash contribution. Another is a pledge toward the $1000 goal (at $950 so far), with the caveat that if she doesn't raise the full amount within a set timeframe, the money is given back. Another is a subscription of $1/mo that gets his name put on a list of True Fans. Yet another is to buy the last limited-edition autographed copy of Alice's first vinyl album for $50. All of these options can be paid with PayPal or a credit card.

  • Another way is to use a feature built into iTunes and his iPod to auto-support any song he listens to more than 5 times, to the default (but adjustable) amount of $0.05/listen. Similarly, whenever he looks at the face of his iPod to remember who he's listening to, he sees Alice's message that she's trying to raise $1000 and is up to $950. Likewise, he sees there's one more copy of the limited edition vinyl available.


Ultimately, he decides to go for the vinyl recommended by his iPod. He goes to iTunes, chooses "open musician's website", and buys the vinyl online.

E. Getting paid

When Alice signed up, she had no idea her music would be such a hit. But her inbox is full of messages, donations, and all her vinyl copies (which she hasn't even made yet) have already been sold.


  • Getting to work, she uploads the cover art design and asks her registrar to press the given number of vinyl records and FedEx them to her for signing. When she sends them back, the company redistributes them to the customers who purchased them, and the money is deposited into her account.

  • As for how to get her money, she has a couple options. The classic approach is to just give her direct deposit information and it's deposited via the ACH network (automated clearing house). Another is to give her PayPal information. She doesn't like any of those options, so she goes with a third option of just having a reloadable prepaid Visa card sent her way -- any money added to her account is instantly available for use at any merchant, or even to be withdrawn from any ATM.

II. Music Registrars

Core to this plan is the notion of "music registrars". Like DNS registrars (from which this plan draws inspiration), there are many of them, all providing compatible functionality while competing aggressively on price and value-added services. Musicians are free at any time to sign up with any number of registrars, or to move tracks between registrars at a later date. But each track ultimately maps back to a single registrar that manages (at least) standardized metadata operations around that track. In essence, a registrar provides at least the following:


  • Account creation. Generally with a username/password, though optionally with more secure mechanisms (multi-factor authentication, PKI, etc).

  • FLAC storage. For every track managed, permanently store a master FLAC version.

  • Metadata hosting. For a given track, host its authoritative name, artist, album, etc. (essentially, ID3 tags) in one or more languages.


Though not strictly required, in general a registrar will offer a wide variety of additional services, including some subset of:


  • Transcoding and hosting. Generates a variety of file formats from the master FLAC, including MP3, Flash, etc. and hosts them on the web and P2P networks.

  • Payment gateway. Accepts payments from fans according to a variety of payment protocols and securely deposits into the artist's account.

  • Fan management. Forums, blogs, RSS feeds, and all the accouterments of web 2.0.

  • eCommerce. Anything ranging from a Yahoo Store-like checkout system to a CafePress-style product generation assistant.

  • Recommendation engines, playlist management, webcasting radio stations, promotion services, gig management, tour assistance, discount music equipment, etc. Basically, each registrar will attempt to provide artists with a complete one-stop-shop of all things they could possibly need to be a happy, successful musician.


A service exists that lets anybody look up the latest metadata on any track. (Typically you would just download the metadata straight from its registrar, but there would be a mechanism to determine who the registrar is -- if any -- for an unknown piece of music.) This service uses a combination of servers hosted by the registrars, as well as servers hosted by an independent organization that manages the registrars themselves. This organization is focused exclusively on enabling transfers of music between registrars, resolving disputes between registrars (and between users and registrars), and authoritatively stating which registrar currently manages which track. It is funded through annual re-certification fees paid by the registrars.


One operation that is particularly interesting is: how does this organization uniquely identify each track in order to guarantee that each is represented by only a single registrar? The answer is by using waveform fingerprints. Each registrar holds onto the master FLAC for every song in its management. Upon adding a new song, it uploads a "fingerprint" of the song to the master organization, which then confirms no other song has the same fingerprint. (If there is a conflict, the organization investigates and resolves it.) The organization chooses which fingerprint function to use (it needn't be perfect; it's just a tool for proactively identifying and resolving conflicts), and it can at any point decide to use a new function by simply having all registrars re-fingerprint all FLACs with the new function. Again, the fingerprinting doesn't need to be (and won't be) perfect -- it's just a flag that triggers manual corrective action. The better the function, the less wasted work.
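
The registration check itself is just a lookup; here's a rough sketch (fingerprint() stands in for a real acoustic-fingerprint function, and central_db for the organization's fingerprint-to-registrar map):

    def register_track(registrar_id, flac_bytes, central_db):
        """Ask the central organization whether another registrar already
        claims this track's waveform fingerprint."""
        fp = fingerprint(flac_bytes)        # acoustic fingerprint, not a file hash
        owner = central_db.get(fp)
        if owner is not None and owner != registrar_id:
            # conflict: kicks off the manual resolution process
            raise ValueError("fingerprint already registered to " + owner)
        central_db[fp] = registrar_id       # no conflict; claim the track
        return fp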

III. MP3, ID3, and Metadata

In general practice, a musician would upload a track's master FLAC to her music registrar, and the registrar would generate a series of MP3s that have all the ID3 tags correctly set. The musician could then do whatever she liked with those MP3s -- email them, post them to P2P networks, post them on forums, burn them to CDs, etc -- and the ID3 tags would just be carried along with them.


However, the metadata can be indexed, distributed, and used in any way, even outside of MP3s -- the same information can be downloaded from the registrar at any time.

IV. Music Metadata and Player Support

In general, the metadata associated with a particular song can be any arbitrary name/value pair that the owner sees fit to associate with the song. There are no strict requirements or limitations on what sort of metadata must be associated. Similarly, players can choose to support all, none, or any subset of the metadata contained within a file. Any metadata not understood should be simply ignored. Some types of metadata include:


  • The standard ID3 tags: The obvious metadata includes artist name, song name, album, genre, and everything else you typically see in MP3 players. Example:
    Name: Before Today
    Artist: Everything but the Girl
    Album: Walking Wounded
    Track: 1


  • Unique song GUID: A globally unique identifier assigned by the registrar to this song. A given song would have the same GUID across all bitrates and encodings, for example, but different mixes of this song would have different GUIDs. In general, all MP3s with the same GUID should have the same waveform fingerprint; similarly, in general, no two tracks with different GUIDs should have the same waveform fingerprint. This GUID can be used by the player, website, or other service for whatever purpose it likes (it's handy to have a key by which to index the song). Example:
    GUID: s8d9fgfud6s6d6f8ds8sys6s65

  • Metadata URL: A new tag would be an HTTP URL from which the latest authoritative metadata can always be downloaded in some standard format (I'd propose JSON, others might argue XML, but the specific choice is TBD). Any player or service can download the latest metadata for this track at any time, possibly rewriting the MP3 itself with the new information. Example:
    MetadataURL: http://mytunes.com/meta/s8d9fgfud6s6d6f8ds8sys6s65


  • Payment protocols: A series of descriptions through which this artist can be automatically compensated according to some predetermined protocol. There will be many different payment protocols (and new ones all the time), some of which might include direct deposits into bank accounts, charging to phone bills, reverse charges to prepaid credit cards, PayPal transfers, eGold transfers, or whatever. It's likely each registrar would offer one or more of the most well-known payment protocols by default, but there is no restriction on somebody coming out with a new payment protocol and then associating it with their song. (More details on this below.) Example:
    Payment: ach://<bankaccount>,<institution ID>
    Payment: paypal://<email address>
    Payment: http://mytunes.com/s8d9fgfud6s6d6f8ds8sys6s65
    Payment: raise://amount=$1000&current=$950&by=2008/4/1


  • Hash: Though there's no strict requirement that a given song be distributed universally as a binary-identical MP3 for each given bitrate, it's reasonable to assume that this convergence would occur. Thus a valid piece of metadata would be the hash of a given encoding, which can be used by the player to verify that the file hasn't been corrupted. Example:
    Hash: MP3/256/SHA1(3da3f0afc0d772825c43e310fe34eacf0dea204b)

  • Message of the day: A general message that the artist wants to associate with this song. Can be anything from a simple hello, a description of the song, a request for help, an advertisement, or anything else. This could appear on the face of an MP3 player, or in a bubble on your desktop, or however the player sees fit to show it. Example:
    MoTD: Only 1 copy left of my limited edition vinyl album, $50!
    MoTD: Don't forget, I'm playing the Fillmore tonight at 8pm!


  • Lyrics: The lyrics of the song itself could be easily included in the song, or perhaps a URL where the lyrics can be downloaded.

  • Other songs by this artist / recommended by this artist: Links to other songs by this artist. A player could be configured to poll this at some frequency to be automatically notified when new music by an artist becomes available.


The important thing to take away is that metadata can contain anything, and registrars merely record and host it -- they might or might not have any awareness of what the various name/value pairs actually mean. You needn't ask anybody's permission or get the approval of any standards body to create new metadata: just add it to your song, and any player that doesn't expect it will ignore it.
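
A player's metadata handling then reduces to a lookup table; a sketch (the handler functions are hypothetical player hooks):

    # Each player supports whatever subset of metadata it understands.
    HANDLERS = {
        "Name":    lambda v: set_display_title(v),
        "MoTD":    lambda v: show_banner(v),
        "Payment": lambda v: register_payment_protocol(v),
    }

    def apply_metadata(pairs):
        """pairs is a list of (name, value) tuples -- names can repeat (e.g.
        multiple Payment: lines). Anything unrecognized is silently skipped."""
        for name, value in pairs:
            if name in HANDLERS:
                HANDLERS[name](value)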

V. Artist Compensation via Player Integration

The basis of this system is to enable fans to compensate artists whenever and wherever the mood strikes them, in whatever amount, for whatever reason they come up with. This is enabled through integration with the players themselves, as this reduces the latency between hearing the song, making the decision to support the artist, and actually conducting the transaction.


The specific method of the integration is up to the designer of the player or service. But some examples that could be applied to any general MP3 player include a "thumbs-up" button where $0.50 is sent to the artist when pressed, or an "auto-tip" option where $0.05 is sent to the artist each time his song is played in its entirety, etc. All of this would be opt-in and configurable by the user with regard to the amount being paid and the frequency of payment.


Similarly, metadata and players could generally conform to standard ways of advertising merchandise and concert tickets related to the music. Depending on the player's form factor, it could even provide basic storefronts, one-click additions of tour dates to Google Calendars, or whatever type of interaction the device feels is appropriate to facilitate between artists and fans (perhaps even with a commission for the transaction paid to the device manufacturer). Ultimately, this is left up to the artists, fans, and player manufacturers to decide -- the music registrar just manages the metadata without being aware of what it means or how it's used.


As for how the payment would be technically conducted, this would depend on the payment protocol and would likely be decided by a period of competition ultimately leading to a few widely supported "de facto" standards. For example, a phone-integrated player might use a payment protocol that puts song contributions straight onto your phone bill. An iPod might keep an internal count of what payouts are left to be done, and then upload the transactions to an iTunes-integrated micropayment engine when synchronized. WinAMP might accumulate transactions until they exceed some threshold where paying the artist directly via PayPal makes sense. And so on. Payment providers will compete vigorously for adoption by players and registrars alike, but the ultimate decision for who to pay, how, and how much rests with the listener.
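
As a sketch, the player-side bookkeeping is tiny (pay() stands in for whichever payment protocol the player actually implements):

    class AutoTip:
        """Opt-in auto-tip: accrue a small amount per complete play and flush
        via the track's payment protocol once a threshold is reached."""
        def __init__(self, per_play=0.05, threshold=1.00):
            self.per_play, self.threshold = per_play, threshold
            self.owed = {}                                # payment URI -> amount owed

        def on_play_complete(self, payment_uri):
            self.owed[payment_uri] = self.owed.get(payment_uri, 0.0) + self.per_play
            if self.owed[payment_uri] >= self.threshold:
                pay(payment_uri, self.owed.pop(payment_uri))   # e.g. paypal://...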

VI. Conclusion

In summary, the above proposal outlines a global framework where fans can voluntarily support artists through a competitive ecosystem of compatible service providers. The design separates functionality along clear layers of accountability and enables competition between multiple parties within the layers. The goal is to create a flexible, powerful system that enables a degree of innovation yet unseen in the music industry (at least, in the legal music industry). Much like the web and internet itself have transitioned from small, non-profit research projects into engines of global commerce, music -- both its creation and consumption -- has the capability to be a similarly innovative and powerful force. It just needs a framework that encourages it.

VII. FAQ

Here are the questions I've heard asked on this list before, and some quick answers to each:


  1. What if nobody decides to pay?
    The base assumption of the entire music industry is that music is valuable, and that fans actually do exist. If fans -- people who value art and wish to support their artists -- do not in fact exist, then this system won't create them.

  2. What if no music players decide to support payment options?
    The system works best if the payment protocols are implemented in the players themselves. In the meantime, until these are widespread, music registrars can offer web-based gateways that help fans support artists using today's technology.

  3. What's to prevent me from uploading the Beatles as my own?
    The standard solution to this problem is to have a "sunrise period" where prominent trademark and copyright owners are given early access to submit their own songs to the database. The expectation is that each of the labels would run its own "private" registrar to manage its songs, and thus would simply upload a complete list of fingerprints for all its songs to the registrar-management agency. In the event anybody uploads one of the label's songs to a different registrar, a flag would be raised when the fingerprint conflicts with the existing database, and the conflict would be resolved through manual action.

  4. So... where's the big pool of money? Where's the sampling?
    That's right, this system doesn't need to globally sample listening demographics in order to disburse a central pool of money according to some arbitrary measure of value. Rather, the money is never pooled -- it goes straight from the fan to the artists (via one of many competing payment gateways). The samples are never taken -- it's not really practical in the first place, and it's just not needed. And no arbitrary measure of value is selected -- it's left up to every fan to decide how much to give his artists.

  5. What about piracy?
    What about it? It already happens today in vast amounts, and no plan on the books even claims to have a chance of doing anything about it. Piracy *is* online music -- everything else is just an aberration. This plan seeks to capitalize on the real world as it exists today, tapping into the vast sums of money that fans currently aren't giving to music labels.

  6. What about privacy?
    This system gives exceptional privacy protections to all involved because there is no one entity that sees all activity. As such, it doesn't centrally aggregate sampling data, demographic profiling, historical traffic, personally identifiable information, or any of the problems that people are generally skittish about. The centermost entity of this plan is an organization that just has anonymous fingerprints of unnamed songs, and knows absolutely nothing about the songs themselves, the artists who make them, the users who listen to them, or the interactions in between.


  7. X got paid $Y before, will he still be?
    Possibly. Maybe he'll get paid more. Or maybe less. The same can be said about every other solution on the table.


  8. But it's not fair! How will X get paid for Y?
    This plan recognizes that every fan has a different idea of what is or is not fair, and fully empowers him to act upon that notion. Even the old system that is rapidly dying wasn't "fair"; it was merely "what was". This plan does not attempt to blindly copy what was, nor invent some new notion of "fair" and mandate that all fans obey it under threat of force. So in this sense, it is arguably the most fair of all.

  9. Hasn't this been tried before?
    Everything's been tried before, and everything has failed -- all plans have failed -- due to lack of support and outright opposition by the “old guard” music industry. Virtually every innovative plan, both voluntary and compulsory, has been crippled through lawsuit, squeezed through impossible pricing, or bypassed through refusal to participate. There's very little in this plan that's new, and without action by the existing industry, this plan to create a feasible commercial alternative to raw, uncompensated piracy will fail just like all the others have failed and are failing. But this proposal isn't intended as a panacea. It's intended as a review of what's possible should the music industry decide to begin acting reasonably and in the interests of artists, fans, and society at large. There are signs that the industry is starting to have reason forced upon it by investors, artists, and even a gradual awakening of common sense after a decade of complete destruction of shareholder value. One day, they will either become irrelevant or will sign up to one of the many, many plans proposed and nurtured over the years. Maybe they'll choose this one. Maybe not. The point of you reading this is to be aware that the vision presented herein is in fact possible, and to either encourage the industry to adopt this proposal, or to encourage Congress to strip the industry of its abused and overzealous tools of copyright enforcement such that we can continue on without them. How many more decades are we willing to wait?


  10. So that's all well and good, but seriously... Where's the sampling?
    Seriously, it's not needed. Take it in reverse.

Q: Why sample?

A: Well, we know how to at least try to sample music fingerprints transferred over the backbone, and we think that samples are somehow related to how often songs are listened to, so by sampling we can get a sense of which songs are most often listened to.

Q: Why do we care how often songs are listened to?

A: Well, we're assuming that the number of times a song is listened to is representative of how valuable it is to fans.

Q: Why do we care if a song is valuable to fans?

A: Because artists must be paid in proportion to value, obviously!

Q: Paid by whom?

A: Well... by fans, I guess... obviously.

Q: Why don't fans pay artists directly?

A: Well they *were*, through CD sales, until piracy ruined everything.

Q: I thought CD sales largely didn't go to artists.

A: Well... if you want to get *technical*, no, but they sorta "trickled down to artists"... It's complicated.

Q: Ok, again, why don't they pay artists directly?

A: Because that's impossible! What, are they supposed to track down every artist in their playlist and give them a nickel each time they play the song?

Q: Sure, why not?

A: Because... because you just can't. It's complicated. Fans can't be trusted to support their artists directly. They need help.

Q: Help from whom?

A: Well, help from me, of course. And my friends. Only we can get artists compensated.

Q: But I thought your CD sales largely didn't go to artists?

A: Yes they do! They trickle!

Q: So let me get this straight: the goal is to help artists get paid by fans in proportion to how much fans like them. But fans can't be trusted to do it directly, and instead artists need the help of organizations that historically take the lion's share of the profit and leave a trickle for the artists themselves? And the best way to do this is to force everyone to pay you a bunch of money that you distribute based on relative estimated value to fans calculated by sampling backbone traffic for a small set of music fingerprints, extrapolating global traffic, inferring total music listens from that, and then converting that sampled/extrapolated/inferred number into "value to fans" with an arbitrary formula selected by... by whom again?

A: By me.

Q: Got it.

A: That's right! Now you're getting it.

Q: And why not just let fans give artists money directly?

A: You just... you just can't! And... it's different, and therefore scary. Artists talking to fans? Fans talking to artists? What an absurd thought. Fans can't be trusted! Artists don't want to talk to fans! There needs to be a middleman. Lots and lots of middlemen. And formulas! And sampling! And most importantly -- a huge, enormous pool of money. That I control. Trust the trickle. It worked for your grandpa. Why can't it work for you?

