Every once in a while someone gets a brilliant idea for dealing with piracy: why not just assemble a big pool of money and then distribute it in proportion to how often content is pirated?
Both parts of that (filling the pool, and then selectively emptying it) are atrociously bad ideas for a huge number of reasons, but let me zero in on the latter half here. In essence:
Under no circumstance proposed or envisioned will backbone measurement ever estimate volume to even the barest degree of accuracy, darknet or otherwise.
Consider what is ostensibly the most widely viewed image on the internet: the Google logo.
It's unprotected, unencrypted, no darknet, no P2P file sharing, no copying to an iPod for offline consumption. In short, if backbone measurement could ever estimate *anything* then surely this would be the ideal use case, right?
But the Google image is cached locally -- in my case (according to about:cache in Firefox) until 2038. No matter how many times I visit Google.com, I won't redownload it. So estimating visits to Google.com by sampling the number of times the logo is downloaded is completely and irreparably flawed.
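To make that concrete, here's a minimal sketch of how a far-future expiry defeats download counting. The URL, payload, and 14-year lifetime are illustrative stand-ins, and the cache is a toy dictionary rather than any real browser's implementation:

```python
from datetime import datetime, timedelta

# Toy client-side cache keyed by URL, storing (payload, expiry time).
cache = {}
downloads = 0  # what a backbone tap could actually observe

def fetch(url, now):
    """Return the cached payload if still fresh; otherwise 'download' it."""
    global downloads
    if url in cache and now < cache[url][1]:
        return cache[url][0]                    # served locally: zero traffic
    downloads += 1                              # only this is visible on the wire
    payload = "<logo bytes>"
    expiry = now + timedelta(days=365 * 14)     # far-future expiry, a la 2038
    cache[url] = (payload, expiry)
    return payload

start = datetime(2024, 1, 1)
for day in range(1000):                         # a thousand daily visits
    fetch("https://www.google.com/logo.png", start + timedelta(days=day))

print(downloads)  # 1 -> the wire sees one transfer for a thousand views
```

A thousand views, one measurable download: any popularity estimate built on counting transfers is off by three orders of magnitude before we even leave the ideal case.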
(And the most common cache-eviction policy is LRU -- least recently used -- so content that is accessed *more* often is evicted *less*, and therefore re-downloaded *less*.)
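The LRU inversion can be demonstrated with a toy cache as well. In this sketch (the capacity, item names, and access pattern are all made up for illustration), one "hot" item is requested on every visit while five "cold" items rotate through; the hot item is downloaded exactly once, while every single cold request hits the network:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a miss, the least-recently-used entry is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.misses = {}                           # per-item re-download count

    def fetch(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)          # hit: refresh recency
        else:
            self.misses[key] = self.misses.get(key, 0) + 1
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict least recently used
            self.entries[key] = True

cache = LRUCache(capacity=3)
for i in range(100):
    cache.fetch("hot")                 # requested every step: stays cached
    cache.fetch("cold%d" % (i % 5))    # five items rotating: always evicted

print(cache.misses["hot"])                                        # 1
print(sum(v for k, v in cache.misses.items() if k != "hot"))      # 100
```

So a tap on the wire would conclude the cold items are a hundred times more popular than the hot one -- exactly backwards.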
Thus estimating the number of times a song is listened to by measuring how often it is downloaded is even more flawed -- every reason I gave for why Google is the ideal case is precisely inverted for music: it *is* protected and encrypted, it *is* shared over darknets and P2P, and it *is* copied to an iPod and replayed offline, so a single download can stand in for thousands of listens.
Even if we can't agree on anything else, we should all at least agree that backbone sampling is a patently absurd notion for estimating popularity, and thus is intrinsically unsuitable for redistributing some big pool of money -- regardless of how it's filled.
- David Barrett
Twitter: Follow @quinthar