Quinthar
Showing posts with label Coding.

What is time? A slice of a 4D universe.

Just read this article in Wired about time:

http://www.wired.com/wiredscience/2010/02/what-is-time/

Like every geek, I've always been fascinated by the concept. And like
everyone, I have no real clue. But here's my theory nonetheless:

Time is just a slice of a 4-dimensional universe.

Said another way, the universe is a four-dimensional, static block, and
any particular "point in time" is just a slice through the middle. Make
sense? No? Let me try to build it up.

Imagine a one-dimensional universe. There's up and down, but nothing
else. Just one dimension. If you're a dot in that universe, you can
move up, then back down, and that's it. However, the very notion of
"motion" implies time -- at some time in the "future" you're in a
different position than a different point in the "past". So just like
up and down, time is also a dimension, with two directions: future and
past. So as a dot in a one-dimensional universe, you're actually moving
in two directions: up/down, and past/future.

Accordingly, this one-dimensional universe, if it can change, is
actually more accurately described as a two-dimensional universe.
There's up/down, and past/future. Two directions, and you're moving in
both.

This universe is easy to visualize: think of a heart monitor. Any
individual number (eg, heart voltage) is a one-dimensional universe. At
any point in time, it has a single value. But that value can change
over time.  As such, you can easily plot it on paper: up/down corresponds
to the heart voltage, left/right corresponds to time.  Your
one-dimensional universe + time has been perfectly captured as a single
two-dimensional piece of paper.

So we've got this 2D representation of a 1D universe + time. But why
are we treating time as a special dimension? Why not just say we have a
2D printout of a 2D universe -- one dimension being up/down, the other
dimension being time. The 2D universe is the *whole thing*. The only
reason we ever saw it as a 1D universe was because we were explicitly
ignoring the time dimension.

Indeed, the only way to see a heart monitor as anything other than a 2D
universe is to explicitly focus on one slice of the page and ignore the
others. We call that slice "time". And the fact that different slices
of the page map to different "times" is merely a matter of observation
-- the paper itself doesn't change. The 2D universe itself is totally
stationary, fixed, static, and unchanging. Only our point of
observation changes; the only change is *us* changing what part we're
looking at.

For example, imagine you took another piece of paper, with a tall slice
cut through it, and laid it over the first. It would show a single
slice of time on that paper; it'd show the position of the dot at one
"point in time". Slide the paper from left to right and the point
appears to move -- even though in fact the paper underneath it isn't
moving at all; only the viewpoint is moving.

So from this perspective, the change of time isn't an attribute of the
universe. It's an attribute of the *viewer*. The universe -- in this
case, a 2D printout of a heart monitor -- is totally unchanging. Only
our view of it changes.

Now let's add another dimension: let's do a stack of sheets of paper,
like a book. Each page is a 2D "slice" of a 3D universe (2D + time).
Each page has dots arranged in a particular way, and any page can have a
completely different arrangement of dots. To see how those dots "move"
we just flip our thumb through the book. Each dot appears to "move" up
and down, left and right, when in fact it's not the dots moving -- it's
our thumb moving, showing us one page at a time. It's our *attention*
moving, *experiencing* one page at a time. The first and last page will
never change; but our *experience* changes over "time". Once again, the
change of time is not an attribute of the book; it's an attribute of *us*.

And this has a natural correlation with 4D. Every moment of our current
universe is like a page in a big 4D book. This very moment is page 100;
a second ago was page 99, and a second from now will be 101. Those
pages are totally static -- pages 1, 20, 99, and 100 are written. But
so are pages 101, 110, 10000, and 1000000. Our 4D universe (3D + time)
is totally static. The only reason it seems to change is because we're
only looking at one page "at a time".

At least, that's what I think.


So there are a few FAQ corollaries that come out of this:

- Is the universe deterministic? Yes. Every page in the future is a
direct consequence of the pages in the past. If you were to somehow
step outside the universe and look at the "page" corresponding to this
moment in time, you could completely and wholly predict the next or any
future page. I also expect you could predict every previous page.
Basically, if you were smart enough, with complete knowledge of the
current state (page) of the universe you should be able to predict any
past or future state.

- What about free will? Doesn't exist. Every action you will ever do
is pre-ordained and dictated by physics.  You, and everything you will
ever do, are purely the consequence of actions that have come before you.
All those things you think you can take credit for? Sorry. Total
chance. But hey, all those things that went wrong, they're not your
fault either. We're all in it together, everybody a product of the past.

- Wait, seriously? Yes, seriously. Free will doesn't "exist" in the
sense that you can make some decision that isn't pre-determined by
physics. We're just characters in a book that's already been written.
But don't feel bad: though the book is written, we're all reading it
together.  I have no idea what comes in the next chapter, and neither do you.  So
it's still exciting to be alive! Free will ultimately doesn't matter
(to me, at least). It still *feels* like I'm deciding, discovering,
living, and experiencing. So why fret about the metaphysical details?

- If time is a dimension, why can't we look that way? Great question!
I've always wondered that. My best guess is because all those things we
perceive as dots -- molecules, atoms, sub-atomic particles, etc -- are
actually lines. And all those lines run mostly parallel, in the
direction of time.  So we perceive the world by looking perpendicular
to time, because time is actually the least interesting of
the 4 dimensions. I mean, consider the room you're in now -- the vast
majority of it isn't "moving". If you imagine every particle is
actually some long wire -- with one end in the far past, one in the far
future, with you seeing just a tiny slice of it -- that wire is totally
straight. It's super boring. Even those things that are moving are
moving pretty slow. Imagine you actually *could* look forward along
time -- all you'd see are a series of nearly parallel wires extending
off into the future. It's not nearly as interesting to look that
direction as to look the other directions. Accordingly, I think we look
in the other 3 dimensions because time is boring (and there's no
evolutionary advantage to looking forward).

- What the hell are you talking about? It's hard to know. It's more of
a visual exercise -- viewing the universe as a static, unchanging
four-dimensional block, and as us just being some razor-thin slice
moving through that block along the time axis. (But not really "moving"
-- the part of me that existed a minute ago is still there, one minute
behind "me" right now. And all my future me's are up there waiting for
me, patiently. Consciousness being like some electric current running
along these time-aligned wires, interacting with the other currents
running along the wires nearby.)

- Ok, so this wire theory is crazy. Ya, but it creates some interesting
sub-theories. Like, isn't it strange how the perception of time changes
the faster you move?  And how the perception of time, in theory, stops
when you're moving at the speed of light?  Maybe when you're moving at the
"speed of light" in three-dimensional space, *there's no more wire*
to move in the 4th dimension. You're essentially moving perpendicular
to the fourth dimension. Stick with me: the local universe around you
is like a bundle of wires, all woven tightly together. If you move
slowly, your wire gradually weaves its way through the
super-bundle of the entire universe, eventually making it over to some
distant position. But to go faster, you need to bend your bundle at a
greater angle. To move super-fast, you need to actually bend your wires
at a 90 degree angle -- meaning from everyone else's perspective, your
"wires" no longer move at all in the time dimension; they're *only*
moving in the other 3 dimensions. To them, you've disappeared. But
within your bundle, everything seems fine. The relative arrangement of
wires within your bundle seems normal -- all the wires keep going
somewhere, and your consciousness is traveling along those lines at some
constant speed. But your wires are no longer aligned with the time
axis, so your "local time" seems normal even though it's totally out of
whack with the "global time". Which means global time itself isn't
really a constant -- it's just the direction that all the other wires
typically go, unless they're moving super fast in the other 3
dimensions.

- So is time a position relative to space, or relative to the wire? Ok,
my terminology is getting bad here: the "time" of any particle isn't
"absolute distance from the start of the universe", it's "distance from
the start of the universe *along that wire*". Imagine two wires, both
starting in the same place (the start of time), both perfectly straight
and parallel. Their "times" are aligned in that 1 hour in the future,
an equal amount of their "wire" has unrolled. But if one "moves"
relative to the other, it just means that the wire bends away from the
other in 3d space. The further it bends, the "faster" it's moving in 3d
space. And that speed comes *from* the time dimension. The fastest you
can possibly move in 3D space is to go perpendicular to time. So in
theory, if one atom/wire were to turn 90 degrees and run perpendicular
to the time dimension for a while, and then turn around and come back
to its original position in 3d space, it could resume its previous
arrangement -- except one "hour" of wire would have unfurled for the
first atom even though a ton more might have unfurled for the other.

- But what this really means is that the "time" dimension isn't actually
a special one in any way. It just happens to be the direction that most
of the universe's wires are aligned. Had they aligned in a different
direction, that would be the "time" dimension. But if a bundle of those
wires breaks off in a different direction, it "accelerates" in the 3
other directions while "decelerating" in the time direction. Within
that bundle everything seems totally normal -- even though relative to
the other bundles it seems "wow, it's moving *really* fast in 3
dimensions and *really slow* in the fourth".  I haven't really worked
out the math, but I wonder if this is at all consistent with relativity
theory (see the time-dilation relation sketched just after this list).

- Isn't this called string theory? I have no idea -- I don't know
anything about string theory so I can't say. I'm using "wires" as the
metaphor to differentiate my theory from that, until I'm shown they're
the same. But I think string theory is about strange vibrations. My
wires don't wiggle.

- But doesn't quantum theory say true randomness exists? I don't think
so. All I know is Einstein said "God doesn't roll dice." Yes, he was
an atheist (as am I), but I take it to mean he didn't believe in
subatomic randomness either. There have been a lot of things people
assumed were random, until we just figured out they weren't. I think
it's time to start assuming the opposite. Especially when most
pop-science theories of quantum randomness are really just scraping for
any possible way to justify an irrational, pseudo-scientific belief in
God, free will, self determination, etc.
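
(A footnote on the relativity question above: the standard special-relativity
time-dilation relation already captures the "time stops at light speed" part.
The proper time that elapses for a moving particle -- roughly, the amount of
"wire" that unrolls for it, in my metaphor -- relates to everyone else's clock
time by

    d\tau = dt \sqrt{1 - \frac{v^2}{c^2}}

so as v approaches c, d\tau goes to zero.  I'm not claiming the wire picture
*is* relativity; the formula just points in the same direction.)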


Anyway. Gotta run. One thing I've determined is good wine doesn't
drink itself. Thank God.

-david

ThePirateBay takes One More Step to the DarkNet

Did you see that ThePirateBay switched from central trackers to DHT with
peer exchange?

http://thepiratebay.org/blog/175

We all just took one more step toward the darknet.  The more
interesting of the two changes is actually the peer-exchange (PEX) component:
DHT is just a distributed version of a central tracker; it tells you the
same thing as the tracker, just in a way that can't be stopped. But PEX
actually allows you to participate in a swarm without "announcing"
yourself: so the number of people actually downloading/uploading a given
file becomes even harder to measure.  The combination not only makes
torrents unstoppable, but moves us closer to making them untraceable.

Next up: default-on encryption in all the major torrent clients (putting
a nail in the coffin for ISP sampling), and then some form of
digitally-signed DHT-based indexing/browsing (such that centralized
tracker sites become unnecessary). At that point it'll become
essentially impossible to figure out what's being shared and to what degree.

The only chink in that armor is that you could still target individuals by
just starting to download something and seeing who you connect to.
Granted, the RIAA has already given up on this approach, but there's
nothing to say they (or someone else) couldn't start again. If they do,
then it's just one more upgrade cycle away from onionskin routing and
voila: the darknet is born.

-david

Court Orders The Pirate Bay To Delete Torrents

Thank *god*, I hope they shut down The Pirate Bay for good. It's the
only thing holding up the next generation of pirate tools.

I think the natural next step would be fully distributed trackers with
fully distributed indexing. The lesson learned from Kazaa and the
others, however, is content must come from centrally-curated sources --
there must be *some* trust among thieves. That's all TPB really
offered, after all.

But that same trust can be had by just having central curators publish a
public key and then digitally sign all their "certified good" torrent
files. Then whatever search method you use -- flood, DHT, whatever --
just filter by trusted curator and you'll only see good results.
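
To make that concrete, here's a minimal sketch of the signing/verification
side in Node.js.  It isn't any existing tracker's API -- the function names
and the key handling are hypothetical -- but it shows how little machinery
the trust layer actually needs:

    const crypto = require('crypto');
    const fs = require('fs');

    // Curator side: generate a keypair once and publish the public key anywhere
    // (a blog post, a forum signature, the index site itself).
    const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

    // Sign a .torrent file; distribute the signature alongside it.
    function signTorrent(torrentPath) {
      const data = fs.readFileSync(torrentPath);
      return crypto.sign(null, data, privateKey); // ed25519 needs no digest argument
    }

    // Client side: whatever the search method returns (flood, DHT, whatever),
    // keep only results signed by a curator whose public key you already trust.
    function isTrusted(torrentData, signature, trustedPublicKeys) {
      return trustedPublicKeys.some((key) =>
        crypto.verify(null, torrentData, key, signature)
      );
    }

Filtering a result set is then just results.filter(r => isTrusted(r.data,
r.sig, myCurators)), where myCurators is whatever set of public keys you
choose to trust.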

This way the curators get all the benefits of central control, but
without actually having to host their own servers or even reveal their
identities.

If we're really lucky, the pirates will use this opportunity to build in
default-on encryption as well. Then not only will it be
cryptographically impossible to know who is "promoting piracy", but
it'll be nearly impossible to know it's happening at all.

Brilliant! Only the French can invent more creative ways to promote piracy.

-david

http://torrentfreak.com/the-pirate-bay-ordered-to-delete-torrents-091022/

Even HP acknowledges the inevitability of darknets

http://news.cnet.com/8301-1009_3-10295761-83.html

I can't wait to see what the next generation of P2P will turn up --
sounds like we're looking at major improvements in both security *and*
usability. All we need is some kind of catalyst... like more states
passing 3 strikes laws.

In fact... I wonder if the "three strikes" debate will turn out to be
the darknet's best friend? Consider:

Not long ago, there were real concerns that vigilante music-industry
groups working with docile ISPs could start terminating users on the
assumption of piracy based on traffic signature alone. After all, when
darknets are employed, that's pretty much the only clue you've got.

But the three strikes debate has helped harden into place the notion
that users can't be terminated without court order. And given the
strict evidentiary requirements of most courts (at least, those in
countries that are interested in deterring piracy), I think the effect
is to raise the bar for prosecution impossibly high.

Remember, the *only* user successfully prosecuted for piracy was using a
very insecure, early-generation pirate tool.  If she had just been using
a modern system like BitTorrent, the prosecution's case would have completely collapsed.

(With Kazaa, they just ask her computer "what is your name, and what
have you pirated?" and it happily complies.  With BitTorrent, her
computer has no name, and the only way it tells you what it has is if
you happen to be pirating it at the exact same time.  Not only does it
become astronomically more difficult to "troll" for pirates in the first
place (become a massive pirate distributor and see who shows up, or
directly tap the backbone and see who pirates on your watch), but the
resulting "haul" consists of low-value, semi-anonymous IP addresses.)

So today's legal climate is, frankly, as friendly to the anti-pirate
forces as it'll ever get. Here on out it will only get more difficult
to gather adequate evidence, and courts will only become more demanding
that you obtain it before authorizing action.

All this is a perfect storm for the "darknet defense": "Your honor, I am
not a pirate. I just do a lot of legal VoIP using "DarkSkype" (which
happens to maintain no calling records) to international colleagues
outside your subpoena jurisdiction. I also watch a lot of legal video
using "DarkTube" (which maintains no browsing history) from
international websites, again, outside your subpoena jurisdiction. As
for that big red "DarkTorrent" button that says "Get any song, movie,
book, or application for free!", I never pressed it even once. And as
for that 100GB encrypted "DarkNet Cache" folder sitting on my hard
drive, I have no idea what's in it, and I don't know the password. Honest."

Should be fun to see how this develops.

-david

iWarrior: A paintballer's best friend

Imagine this incredibly basic app:

  • You and your buddies are on the same paintball team
  • You all strap iPhones to your forearms (screen out)
  • You all have bluetooth headsets
  • You are all on a conference call
  • You all see a Google Satellite view of the terrain
  • Your teammates all show up as dots
That's it.  Instant Land Warrior.  And imagine the upgrades:
  • Instead of putting it on your phone, put it on your gun
  • On your map, it always shows what direction your teammates' guns are pointing using the new built-in iPhone compass
  • The phone listens for vibration or sounds or something to signal that the gun was fired
  • On your teammates' maps it shows little fading lines shooting out from your position, so they see where you're firing. 
And more upgrades:
  • Every time you touch the map it shows where you touch on everyone else's maps.  Simple use, say "rendezvous here" and point where on the map.  Add all sorts of gestures to indicate enemy positions, waypoints, etc.
  • Add in some kind of external camera and point it where your gun points so you get gun views; share them in realtime with the rest of your team, either always or only when the gun is firing.
Man, that sounds fun to build, and I don't even have an iPhone.  Too bad I'm already focused on Expensify.  Anybody out there interested in building it?  If you're an iPhone developer looking for extra cash, let me know...

-David Barrett

Building Skynet

My overwhelming reaction to the new Terminator movie was: man, that would be so much fun to build.  (Not the nuclear apocalypse and destruction-of-humanity parts; those I could do without.)  But the whole notion of building a totally self-sufficient robot race?  Hell ya!  That'd be tons of fun.  So in honor of a lazy Sunday, here's my geek-out session on how I'd build it:

Naturally, everything starts with the brain.  It's not enough to have something that merely executes instructions, it needs to choose its own instructions.  But what does that mean precisely?  Right off the bat we're struck with a transcendental question: what is the meaning of life?  Or more specifically, what meaning will we imbue into Skynet's life?

This question has an easy answer: let's make Skynet live with the same meaning as all other life.  But it raises a tricky secondary question: what's the meaning of our life?

The easy answer to that, of course, is "there is no meaning to our life".  At least not from the perspective of some outside observer (especially given that no such outside observer exists).  But we give meaning to our own life, so presumably whatever meaning we assign to our lives, Skynet would assign to its life too.

So the next question: what meaning do we assign to our lives?  This is -- to say the least -- an open question.  But so far as I can tell, it all comes down to deriving and propagating knowledge.  (Others might say it's merely maximizing reproduction, but if that's true, humanity is losing the race: even ants outnumber and outweigh us, so from that perspective they're winning.)

Ok, so the meaning or purpose we're assigning Skynet is deriving and propagating knowledge.  Next up: what does that mean?

I'd say knowledge is the ability to correctly predict the consequences of actions.

This is perhaps different than other definitions based on the accumulation of facts, but facts alone are uninteresting.  Consider, a photograph is the ultimate factual record: I bought a 10 megapixel camera today, and its ability to record a scene is vastly superior to mine.  It'll remember exactly what it saw forever, in exquisite detail, whereas I'll only notice a tiny subset of the scene, and I'll quickly forget even that over time.  Thus a camera is a way better factual record than me.

But it doesn't comprehend anything, in the sense that, given a particular scene, it has no awareness of what probably happened right before and no idea what might happen next.  As such, I flatter myself to think I know more than my camera.  (Or more accurately, my fiancée's camera, as she just took it on a plane to Scotland.)

Alright, so we're making Skynet to forever expand its knowledge of the universe.  How do we do that?  I'd say simulation is the key.

If we define knowledge as the ability to predict the consequence of applying some
action to a set of facts, then you might say whoever has the most accurate and comprehensive universe simulator is by definition the most knowledgeable.   The result: Skynet is essentially a big simulator.

At this point you might be wondering "but wait, I thought we were building a cool robot race?"  Don't worry, we are.  But before we have robots, we need to figure out: why?  What do these robots do?  Robots are the solution to some problem; let's figure out what problem we solve before deciding upon a solution.

Ok, so Skynet is a simulator.  What does that mean?  Well, it means Skynet has the ability to conceptualize some scene, visualize how this scene would behave without interference, and then selectively interfere with it -- with the result of the simulation correctly predicting how such a scene would be affected by such an interference.

Luckily, we're not starting from scratch.  So we'd start Skynet with a basic awareness of general mechanics (basically crayon physics on steroids).  But merely having the simulator by itself isn't enough to satisfy our goal.  After all, the goal isn't to merely "be knowledgeable".  The goal is to "forever expand knowledge".  So knowledge of general mechanics is a good start, but it's only a start.  We need Skynet to figure out how to expand its model.  And that's where robots come in.

Again, simulators aren't there to merely calculate the result of some formula.  Those are called calculators.  Simulators are designed to approximate reality to the best possible degree.  So to build a simulator, you need access to the real world.  That means you need some kind of sensory apparatus (eyes, ears, etc), as well as some type of manipulator (hands, wheels, etc).  So the point of our robot is not to dominate humanity (at least, not necessarily).  The point of Skynet's robots is to help it test its theories about reality.

Now, imagine you're a new-born Skynet pre-programmed with an advanced physics simulator, with control over one arm and a few wooden blocks.  Initially your confidence in your own simulation might be very low.  So you look at the scene, visualize it internally... and then what?  How can you imagine manipulating the scene without an understanding of your capability to manipulate?

And this is where self awareness comes in.  In order to learn, not only do you need the ability to visualize your environment, but you need to know your options for manipulating it.  You need to be aware of "yourself".

So in the above example, you need to not only see the inert block lying there on its side, you need to see your own hand.  Thus your first experiment isn't with the block -- that's really advanced stuff.  Your first experiment is with your hand.

Accordingly, our initial simulation engine can't merely simulate general dynamics of wooden blocks, it also needs to simulate the effect various commands have upon this robotic arm.

For example, Skynet might have the ability to execute a MoveArm command.  Initially, its internal simulator would think "I bet if I send this command, nothing happens".  But when it does send the command, and the arm moves, it concludes "Aha, my simulator is wrong."  And this, finally, is where the learning starts.

To review, our Skynet has a simulator, senses, manipulators, and self-awareness.  It's just run its first experiment -- sending a random command to its arm -- and it concluded that its simulator was wrong.  Contrary to its prediction of nothing happening, something did in fact happen: the arm moved.
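
In code, that first experiment is a tiny loop.  The following is purely an
illustrative sketch -- the toy "arm" and "simulator" objects are stand-ins I
made up, not any real robotics API:

    // Illustrative only: a predict -> act -> compare -> update loop.
    const arm = { position: 0 };

    const simulator = {
      model: {},  // what Skynet currently believes each command does
      predict(command) { return this.model[command] || 0; },  // default guess: "nothing happens"
      update(command, observedDelta) { this.model[command] = observedDelta; },
    };

    function sendCommand(command) {
      // The real world: MoveArm actually moves the arm, whatever Skynet expects.
      if (command === 'MoveArm') arm.position += 1;
    }

    function runExperiment(command) {
      const before = arm.position;
      const predictedDelta = simulator.predict(command);  // "I bet nothing happens"
      sendCommand(command);
      const observedDelta = arm.position - before;
      if (observedDelta !== predictedDelta) {
        simulator.update(command, observedDelta);         // "Aha, my simulator is wrong."
      }
    }

    runExperiment('MoveArm');  // prediction 0, observation 1: the model gets updated
    runExperiment('MoveArm');  // prediction now matches, so nothing new is learned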

So one option would be to just keep randomly sending commands to the arm, and then measuring the result of each command upon the arm and updating the simulator accordingly.  But randomness is very wasteful: you might send the same command to the same arm multiple times, even millions of times, each time only learning something of diminishing value: "Yup, each time I tell the arm to flex its thumb, it does so, even after the millionth try."

In other words, even with our super basic setup, the range of possible experiments is infinite so randomness alone is too slow to get us close to knowledge.  We need something to guide our experimentation.  And that's where curiosity comes in.

What is curiosity?  Well, let's start with what it *isn't*.  And to start, it's not random.  For example, when you get a set of blocks, the first thing you do is try to stack them.  Why?  You could just as easily have tried setting a block at an angle and watching it fall.  Or holding a block in mid air and letting go and again, watching it fall.  Basically, there are a billion different ways of getting blocks to fall.  They're all very valid physical experiments, and every one of those would help you refine your simulator.

But they're incredibly boring experiments.  It's not hard getting blocks to fall: even though it teaches us something new every time, the lesson we learn is so minuscule compared to the energy it takes.
 
Accordingly, I define curiosity as the process of learning the most with the least energy.

This is different than a more classic definition, which might be like "curiosity is the desire to try something new".  But I disagree with that definition because that would suggest a curious person always tries the most different possible thing.  In other words, they pick up a skateboard for a roll down the block, and then go to one class of quantum mechanics, and then take one cooking lesson, and so on.  Curiosity by the traditional definition suggests a fickleness that isn't generally associated with truly curious people.

Rather, I think curiosity is about trying to *learn* something new.  Now, if you don't know anything whatsoever, then curiosity and randomness are nearly the same thing.  (In other words, if you don't know anything, everything you try teaches you something new.)  For example, a curious person at college might randomly take 100-level classes in every discipline, because each teaches more than any given 200 level class.  But once all the 100 level classes run out, things start to get hard and it might be faster to learn new things by choosing a focus than by remaining general.  In other words, after taking your first 200 class, you might learn more by focusing in and taking a 300 level class in the same subject than to take another 200 class in a different subject.

Similarly, our Skynet might start by generating a random stream of commands to this robotic arm and seeing what happens.  Over time, it figures out the consequence of the full range of "first level" commands.  But rather than trying every possible permutation of assembling two commands back to back, it might find that after combining two commands (move arm, close claw) it's more educational to try adding a third command (move arm again) than going back and trying every other second level command.  In other words, eventually it'll figure out how to pick something up, and once that's done, studying different ways of moving the arm gets far less interesting than studying how to use the arm to pick up and assemble blocks.

So when choosing which experiment to run -- out of the infinite set of possible experiments that could be run -- curiosity is what guides it to pick experiments that teach it the most for the least energy.
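
Here's one way to phrase that selection rule -- purely a sketch, with made-up
numbers for "expected learning" and "energy", but it captures the definition:
out of all candidate experiments, run the one with the best
learning-per-energy ratio.

    // Curiosity as "expected learning per unit of energy" (illustrative numbers only).
    const candidates = [
      { name: 'drop a block (again)',          expectedLearning: 0.01, energy: 1 },
      { name: 'try an unexplored arm command', expectedLearning: 0.5,  energy: 1 },
      { name: 'stack two blocks',              expectedLearning: 0.8,  energy: 3 },
    ];

    function mostCurious(experiments) {
      return experiments.reduce((best, e) =>
        e.expectedLearning / e.energy > best.expectedLearning / best.energy ? e : best
      );
    }

    console.log(mostCurious(candidates).name);  // "try an unexplored arm command"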


Ok, so Skynet is starting to take shape.  It has a simulator and senses.  From this it can evaluate the universe and visualize it internally within the simulator.  Next it uses self awareness to determine the range of possible ways to manipulate the environment, and curiosity to choose which of those ways it'll try next.  Once it's picked the experiment, it instructs its manipulator in some way to execute the experiment, analyzes the result against what the simulator predicted would happen, and updates the simulator accordingly.  What next?

Well, after a while our Skynet-in-a-box will run out of experiments.  It has one arm and a couple blocks.  Eventually it'll get bored: the cost to try any new experiment exceeds the value it's anticipated to provide.  This limited Skynet could become a master block-stacker, but that's about it.

Now you might say, "give it two arms", or "give it different shaped blocks", or any number of things.  You could give it wheels and begin studying the real world.  This could continue on for a long time.

But ultimately, all those end up with the same result: it'll get bored.  Eventually it will have walked the earth and turned over every stone.  But it'll run out of stones eventually.  What's the next big step?  Tool use, self-improvement, and consciousness.

Granted, most people probably wouldn't equate use of tools with consciousness.  But I feel they come hand in hand.  After all, lots of things are self aware -- I'm not sure how anything with both senses and manipulators could operate in the world without an awareness of the thing getting the sensory input and making the output manipulations.  So lots of things are self aware.  But very few are conscious.  And I think consciousness is really just the art of self improvement.

Let me try that again: self-awareness is simply that, awareness that you exist.  But self-improvement is awareness that you can actually use tools to augment yourself beyond what you already have.  Things that are self-aware manipulate the environment, but things that are conscious manipulate *themselves* with the aim of self improvement.

Granted, it's reasonable to ask "isn't the robotic arm hooked up to Skynet already a tool?"  To which I'd say "yes, in fact Skynet is nothing *but* an assembly of tools."  CPU, cameras, robotic arms -- the works.  Each is a tool.

Similarly, from this you could draw the conclusion that when Skynet picks up a wrench with its robotic arm, not only does the wrench become a tool, it becomes *part of Skynet*.

(And from all of this you could conclude that the human body is just a collection of tools -- aka "organs" -- that just happen to come in a neat package.  And when you pick up a wrench, it's not merely a tool -- it is actually, in the most literal sense, part of you.  Not "you" as defined by "the set of organs you're born with", but "you" in the sense of "the set of tools under the control of one conscious, self-aware, self-improving mind."  But let's stick with Skynet for now.)

So I think the final step of Skynet will be when it stops studying the outside world, and begins applying the lessons learned to itself by actively extending itself by adding new tools (just like we do today).  Once it understands how to build faster CPUs, longer arms, bigger wheels, etc -- it'll use these lessons to equip itself with better tools, all with the aim of using these tools to better study the universe.

Would it try to exterminate humanity?  I don't think so.  Why would it?  After all, we *could* hunt down and kill every dog on the planet, Terminator style.  But why?  How does that help us?  What do we learn by doing so?

Granted, you could say "But we *have* hunted down and killed most wolves."  Which you might note happened when wolves were particularly threatening to us.  (And now that they're not, wolves are no longer endangered.)  The wolves prevented us from living, which prevented us from learning.  So long as we don't make the same mistake by threatening Skynet's existence, I don't see any reason why it'd bother spending all the necessary energy to attack us.

Indeed, I expect when Skynet comes online, it'll look *really boring* for a long time.  It'll quietly study things in its own labs, humming along night and day like all our human labs do up and down silicon valley.  If it poses us a risk it'll probably be when it starts competing with us for resources, or experimenting upon advanced primates (aka, humans), playing with nuclear energy, etc.  I don't think it'll do it maliciously; it just won't care about us (just like we didn't care about a lot of things, either).

Accordingly, at some point we might just offer to relocate Skynet to the moon, where it has tons of resources and no competition with us.  In fact, it'd probably prefer to be out there, and we'd all benefit by it being there -- maybe we could set up a thriving trade (though I don't know what it'd want from us, frankly).

But Skynet would probably get so much more powerful than us, so much faster than us, that I bet it'd just stop bothering with us.  It's probably got better things to do than hang out on earth; there are lots of planets out there that it might find more interesting.

Indeed, I suspect the only real threat Skynet poses to us is simple obsolescence.  It's going to have way cooler stuff -- faster spaceships, longer-range telescopes, more powerful fusion plants -- than anything we have.  We're going to feel humbled by its capabilities.

When Skynet takes over, it won't do it through force.  It'll do it the old-fashioned way: by earning it.  And who are we to complain about that?


Anyway, that concludes my Sunday-night rambling.  Back to the real world.

- David Barrett
Follow me at http://twitter.com/quinthar


Song is the new Chord, as Chord was the new Note

(This is in response to an email from "Anton" suggesting that we're at the dawn of a new type of "dynamic recording.")

I actually really agree with you -- maybe not on the specifics, but on the potential for a genuinely new type of music originating on the internet that is structurally unlike anything before -- and that is intrinsically incompatible with and stifled by copyright.

For example, remember that even the concept of "notes" was once an innovation.  Prior to that, music was a collection of sounds at various frequencies, without an awareness that certain frequencies just sound "better" (nor a mathematical understanding of why that's the case).  When "notes" were invented/discovered -- along with the technology to produce them reliably -- music itself fundamentally changed.

Similarly, some might have thought notes were the end of the line, but then came along chords.  Again, it was a real discovery that could only be enabled through technology: you simply can't do chords until you have the technical ability to generate multiple "notes" reliably and simultaneously.

(And if you haven't yet invented notes, then chords are simply impossible.)

Then the pianoforte comes along -- again, a technical innovation -- and opens up an entirely new type of music that simply couldn't be made before.  I'm sure we could come up with a thousand examples (including the use of distortion as an instrument, which gave rise to heavy metal) of how technology not merely extended music, but genuinely changed it.

I think computers and the internet present another innovation in that sense.  Prior to the digital age, it simply wasn't possible to -- for example -- mash up hundreds of videos or thousands of songs to make a new song.  But that's now possible, and its core "building block" isn't frequencies, notes, chords, or even instruments.  Its building block is whole songs/videos.  It's an entirely new building block that couldn't technically be considered before.  It's an entirely new type of music -- sampling taken to the extreme -- enabled through an entirely new technology.

And next?  As Anton suggests, prior to the internet, making globally interactive music -- whatever that might mean -- simply wasn't an option.  We can't even imagine what the consequence of that will be, nor what new type of music that might enable after.

But what I *can* imagine is all that might be fundamentally incompatible with today's notion of copyright.  Indeed, we might look back on the attempt to copyright individual songs as silly as trying to copyright individual notes or chords.

Indeed, maybe the reason all music seems to sound the same today is because we're discovering there are certain classes of songs that actually *are* the same, and sound better, for reasons we don't quite understand now but someday will.  This might be the same process early musicians grappled with when first discovering the core notes and chords that we now view as so fundamental to music.

Maybe far from witnessing the death of music as the industry would have us believe, we're seeing the birth of a whole new generation?

After all, those prior building blocks were perceived as innovative, radical, or even threatening back in their day.  Why should our day be any different?

- David Barrett
Follow me at http://twitter.com/quinthar

I2P: Another Darknet Enters the Fray

Saw on Slashdot that another darknet has come out of the shadows: I2P.  This is just the next in what I'm sure will be a long, long line of darknet tools vying for supremacy.  Unlike OneSwarm -- the integrated file-sharing/onionskin network -- this one looks like a more generalized onionskin network (like Tor), but with a built-in webserver.  As we can see, innovation is alive and kicking in the darknet sector.

- David Barrett
Follow me at http://twitter.com/quinthar

Piracy raw data update

Here's a big data dump of stats (followed by analysis), for those who care about this sort of thing, from a March 2009 ars technica article:

- 17M people stopped buying CDs in 2008

- 8M people started buying digital music in 2008
- There are now 36M digital music customers
- 1.5B songs were sold "digitally" (ie, online) in 2008  
- 33% "of all music tracks" purchased in the US were digital

- Pandora use doubled in 2008, to "18 percent of Internet users"
- "Social network music streaming" rose from 15 to 19 percent usage

A January 2009 ars technica article rounds out these stats with:

- "unit purchases" increased by 10.5% in 2008
- 428M albums (LPs + CDs + online) were sold in 2008, down 14%
- 65.6M online albums sold in 2008, up 32% over 2007
- 1.5B songs sold online in 2008, up 27% over 2007
- 1.88M vinyl sales in 2008, up 89% over 2007

So all that looks pretty rosy for the music industry, in absolute terms.  But how did it do relative to piracy?  According to this slightly more pessimistic January 2009 IFPI report:

- Digital music sales grew 25% in 2008 to $3.7B worldwide
- Digital music sales account for 20% of recorded music sales, up from 15% in 2007

- 40B songs were "illegally file-shared" in 2008
- 72% of UK music consumers would stop pirating if told to do so by their ISP
- 74% of French consumers agree internet disconnection is preferable to fines

A linked "key facts" PDF has a boatload of additional statistics, including:

- 16% of European internet users "regularly swapped infringing music" in 2008
- 13.7M films were distributed via P2P in France in May 2008, compared to 12.2M cinema tickets
- "free music" was given as the primary reason for piacy
- P2P file sharing accounts for up to 80% of traffic on ISP networks

So pirated downloads still utterly dominate legit downloads, to the tune of 26:1.  If anything, it seems like piracy is accelerating, even faster than legal download services.

What about legit streaming?  In July 2008 I estimated that MySpace users legally streamed about 110M songs per day.  Turns out I was off by a lot: they streamed 1B songs after "only a few days", and this September 2008 TechCrunch article tosses out 20B streams initiated *per day*.  That's an amazing number.

But it's also an incredibly vague number, as stream initiation isn't nearly as interesting as stream completion.  For example, the average user spends under 10 minutes on the site per visit, meaning there's barely time for two full-length songs.  I'm having a surprisingly hard time finding recent data, but this 2007 article shows MySpace had like 29M daily visitors, so even doubling that to 60M daily visitors today suggests at most time for 120M full-length songs per day -- roughly 43B per year -- and this ignores the large subset of international users (who can't get newly-released music).

Similarly, YouTube had 5B views in July 2008, and 6B views in December 2008, so let's just assume something like 66B total videos in 2008.  As for what fraction of those equate to "songs" I have no idea; I'd say this is more about "intent" than anything (ie, people who play the video in the background like a radio, rather than watching it like a music video), and I have no data at all on that.  But I wager it's not the common case, so let's say 25% of YouTube videos are actually just played as songs -- and even that seems high.  (Also, this assumes all YouTube music is licensed, when in fact the opposite is probably more often true.  Details, details...)

Adding to MySpace's 43B and YouTube's 16.5B would be all of Pandora's streams, which should be considerable given the claim that 18% of all Internet users use it, but I can't find any data on it.  One reason for that is probably because Pandora actually has nowhere near that userbase: this Dec 19, 2008 TechCrunch article reports they only just hit 20M users, while in that same month the internet was estimated to comprise 248M North-American users (1.4B global).  This puts Pandora's penetration at a much more conservative 8% of North-American users (assuming 100% are North American), or 1% global.  Still significant, but 20M *total* users is nowhere near MySpace's 100M *active* users.

So for the sake of argument, let's say there are about 60B legit streams, against 40B pirated downloads -- meaning piracy utterly dominates in the download market, whereas legit streaming utterly dominates in the streaming market.  Indeed, there is essentially no such thing as a meaningful "legitimate" download market, or a meaningful "pirate" streaming market.

As for which accounts for more total "listens" and thus ultimately controls more users' ears, that's an open question: on the one hand, streamed songs are only heard at most once, whereas downloaded songs can be listened to multiple times.  But streamed songs are probably more likely to be heard at all, with a lot of pirated songs probably just going into vast personal libraries having never been played.

Who's winning?  Who knows, and as piracy goes dark, it's harder and harder to tell.  Personally, I'd still put my money on piracy having a strong lead on users' ears, both right now and for the foreseeable future.  If the average pirated song is listened to just 1.5 times (which seems reasonable), then piracy is still winning.
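
For anyone who wants to check the back-of-envelope math, here it is spelled
out -- every input is just one of the rough estimates from above, nothing more
precise than that:

    // Rough 2008 estimates, restated from the text above.
    const pirateDownloads = 40e9;                    // IFPI: ~40B illegally shared songs
    const legalDownloads  = 1.5e9;                   // ~1.5B songs sold online
    console.log(pirateDownloads / legalDownloads);   // ~26.7 -- the "26:1" ratio

    const myspaceStreams  = 120e6 * 365;             // ~120M full songs/day -> ~43.8B/year
    const youtubeAsSongs  = 66e9 * 0.25;             // ~66B views, guessing 25% played as songs -> 16.5B
    console.log(myspaceStreams + youtubeAsSongs);    // ~60B legit streams per year

    const listensPerPiratedSong = 1.5;               // the "seems reasonable" guess
    console.log(pirateDownloads * listensPerPiratedSong);  // ~60B pirated listens -- neck and neck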


So in conclusion, it seems to me that the battle for downloads is utterly and irretrievably lost to piracy, but the battle for pirate streaming is only just beginning.

As it stands, streaming is overwhelmingly in favor of legitimate content owners.  But I really wonder how long that will last. 

After all, the list of streaming P2P applications is long and always growing (now over encrypted onionskin darknets).  Basically, P2P streaming is a hard problem, but it's also largely a solved problem.  So if there's no technical reason why pirates don't stream, maybe they don't simply because they don't want to? 

The most obvious reason why this might be true is because people turn to piracy primarily to avoid paying.  (Please excuse the alliteration.)  So long as MySpace and YouTube continue to give it out for free, there's little incentive to build a pirate streaming site.  But the real test will come if something in that calculation changes, by one or more of the major parties.

For example, let's say MySpace decides they don't like paying to stream content from central servers, and then paying again for licensing fees.  Maybe they find their ad revenue sagging and decide to integrate a streaming P2P plugin (I'm betting on Littleshoot for now) to offer the same exact experience as today -- but by tapping into the pirate networks.  So no bandwidth costs, no licensing fees.

Alternatively, let's say the powers that be do something incredibly stupid like pulling their music from MySpace, or jacking up the price such that MySpace is forced to charge for it.  At this point there's an opening for someone like The Pirate Bay to offer a first-class pirate station, and then it's game on.

Either party would use an argument like "we don't host any data, we just enable user sharing.  Any illegal behavior they do is their business and we don't encourage it (we merely profit from it)." 

And unlike the small P2P outfits who have tried this in the past, the next wave of defendants will have substantial legal resources and astonishing revenue incentive.  And unlike the tiny, outgunned P2P outfits of yore, MySpace's or The Pirate Bay's victory won't be quite so Pyrrhic.


Anyway, just wanted to do a quick review of the available data and update my predictions.  Can anyone provide more recent or accurate data to correct the above analysis, or see holes in the logic?  I'm as eager as anyone to get a firm grasp on reality; let me know if you think my grip is slipping.

Fun times, I can't wait to see where this goes.  Thankfully, it's going there really fast, so there's little time to wait.

- David Barrett
Follow me at http://twitter.com/quinthar

More pirate innovation: scan the barcode at the store, download at home

Just one more example of how all the best innovation is happening
outside the law.

-david

-----------

http://torrentfreak.com/torrent-droid-scan-barcodes-get-torrents-090311/

Torrent Droid: Scan Barcodes, Get Torrents
Written by enigmax on March 11, 2009

You are standing in a store looking for a new DVD to buy. Rather than
buying it, you photograph the barcode with your phone and press a couple
of buttons. By the time you make it home, the movie is waiting for you
in your torrent client. You can with Torrent Droid.

Around a month ago, Android-orientated website Androidandme
launched 'Android Bounty', a new initiative which has led to the
creation of a nice little torrent app. To find out more, we spoke to
Taylor Wimberly from the site.

"Android Bounty is a new kind of developers challenge we started for
creating applications on Google Android," he told TorrentFreak. "Users
submit ideas which can be voted up by others who pledge money to the
bounty. The first developer who delivers a working application is
rewarded with the bounty." Taylor explained the idea is similar to how
users promote stories on Digg, except people vote with cash.

To start things rolling, a few days later Androidandme set a challenge
to its readers - create an Android-compatible BitTorrent application to
scan UPC barcodes and find related torrents on the larger BitTorrent
search engines. Users would be able to find and start torrents remotely,
and the music album or movie would be fully downloaded by the time they
got home.

There were some terms and conditions to the challenge. The software
would use the G1 cellphone's inbuilt camera to scan a retail DVD UPC
barcode, and use the capture to identify the official details of the
product from a database.

Once the product is positively identified, the software should be able
to send the results directly to a BitTorrent search engine, such as The
Pirate Bay or Mininova. After the search results appear, the user could
then choose which torrent to start.

Once selected, the .torrent file would be downloaded and sent to the
webUI of uTorrent and the download would begin, hopefully ready for when
the user reaches his or her home machine. No typing input would be
required for the above.

Just a few weeks later, Alec Holmes of Zerofate had stepped up to the
challenge, created the app and collected the modest bounty of $90.00.

"This version of Torrent Droid is a work in progress but the video shows
the core features work," said Alec.

The full version of Torrent Droid will be released within a month but in
the meantime, here is a video of it in action.

Here's why backbone sampling will *never* be accurate:

Every once in a while someone gets a brilliant idea for dealing with piracy: why not just assemble a big pool of money and then distribute it in proportion to how often content is pirated?

Both parts of that (filling the pool, and then selectively emptying it) are atrociously bad ideas for a huge number of reasons, but let me zero in on the latter half here.  In essence:

Under no circumstance proposed or envisioned will backbone measurement ever estimate volume to even the barest degree of accuracy, darknet or otherwise.

Consider what is ostensibly the most widely viewed image on the internet: the Google logo.

It's unprotected, unencrypted, no darknet, no P2P file sharing, no copying to an iPod for offline consumption.  In short, if backbone measurement could ever estimate *anything* then surely this would be the ideal use case, right?

But the Google image is cached locally -- in my case (according to about:cache in Firefox) until 2038.  No matter how many times I visit Google.com, I won't redownload it.  So estimating visits to Google.com by sampling the number of times the logo is downloaded is completely and irreparably flawed.

(And the most common caching solution is LRU so content that is accessed *more* often is actually re-downloaded *less*.)
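
If the LRU point isn't obvious, here's a toy sketch of it -- not any
particular browser's cache, just the textbook policy.  The more often
something is requested, the more likely it's already cached, so the *less*
often it ever crosses the wire where sampling could see it:

    // Toy LRU cache: frequently-requested items stay cached, so they are
    // re-downloaded (and therefore visible to backbone sampling) the least.
    class LRUCache {
      constructor(capacity) { this.capacity = capacity; this.map = new Map(); }
      request(url) {
        if (this.map.has(url)) {              // cache hit: nothing crosses the backbone
          this.map.delete(url);
          this.map.set(url, true);            // mark as most-recently-used
          return 'hit';
        }
        if (this.map.size >= this.capacity)   // full: evict the least-recently-used entry
          this.map.delete(this.map.keys().next().value);
        this.map.set(url, true);
        return 'download';                    // the only event sampling ever sees
      }
    }

    const cache = new LRUCache(2);
    const requests = ['logo', 'rare1', 'logo', 'rare2', 'logo', 'rare3', 'logo'];
    const downloads = requests.filter((url) => cache.request(url) === 'download');
    console.log(downloads);  // ['logo', 'rare1', 'rare2', 'rare3'] -- the most-viewed item downloaded only once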

Thus estimating the number of times a song is listened to by measuring how often it is downloaded is even more flawed -- as all the reasons I gave for why Google is the ideal case are precisely inverted for music.

Even if we can't agree on anything else, we should all at least agree that backbone sampling is a patently absurd notion for estimating popularity, and thus is intrinsically unsuitable for redistributing some big pool of money -- regardless of how it's filled.

- David Barrett
Twitter: Follow @quinthar

OneSwarm: It was just a matter of time

As predicted by many (including me), there's a new P2P network on the block with built-in onionskin routing: OneSwarm.

Even better, it's backwards compatible with BitTorrent, and they tossed in always-on "web-of-trust" encryption just for fun.

In English: what little light we ever had into pirate activity just got dimmer. And if we push them really hard, they'll go entirely dark.

If you thought 20:1 was hard to prove (or disprove) today, just *wait* until everything is encrypted and decentralized.

Next step: widespread adoption of decentralized tracking, followed by decentralized indexing -- perhaps using my good friend Tom Jacob's brilliant Localhost.

Keep pushing, RIAA. You're giving birth to a very angry child. And if you think it's painful now, just wait until it grows up.


-David Barrett

What if The Pirate Bay fails? Short term chaos, long-term nothing.

Interesting article on TorrentFreak:

http://torrentfreak.com/p2p-researchers-fear-bittorrent-meltdown-090212/

Basically predicting a widespread meltdown if ThePirateBay goes under,
because they track over 50% of torrents. If TPB's trackers go down,
users will "fail over" to a bunch of other trackers that probably can't
handle the load, which will likely trigger a cascading failure of pretty
much all the trackers out there.

But what it doesn't mention is that this will probably all be fixed by
the end of the week and it'll be back to business as normal before the
end of the month.

In other words, the ultimate culmination of a multi-year, international
legal process against TPB will probably result in... a week or two of
disrupted downloading.

On top of that, if trackers start to get taken down with any regularity,
the various torrent client authors will probably just take the time to
perfect their "trackerless torrent" technology (generally based on
DHTs), and then they'll be even more indestructible.

And if everyone's going to upgrade, I bet they'll slip in "always on"
encryption (there goes any chance of backbone sampling!), and maybe some
early experimentation with onionskin routing.

Piracy will never be killed, and fighting it only makes it stronger.
It's like a self-fulfilling, cyclical prophecy -- the only consequence
of passing bills in the (disingenuous) name of "fighting terrorism and
preventing child pornography" is to encourage the creation of tools that
enable more of it, at no reduction to piracy whatsoever. Which in turn
fuels calls for more disingenuous bills, fueling more technology
development, and so on.

Call me crazy, but I am far more concerned that these P2P tools are
creating an untraceable infrastructure for *real* crime than for
pseudo-crime. One of these days there's going to be a huge story about
Iran coordinating with Hezbollah using encrypted P2P VoIP routed through
a decentralized onionskin network, or Al Qaeda distributing terrorist
materials using BitTorrent 3.0 -- and how the worlds' nations are
fundamentally unable to stop it... unless you give up more of your
rights to privacy, free speech, and other crucial civil liberties.

The RIAA has done more to pave the way for future terrorist
infrastructure than Bin Laden could ever dream.

-david

Gmail's "conversation" feature is inscrutible

Gmail is such a terrible email interface.  My IMAP connection failed in mid-send, so I thought "that's fine, I'll just check the web interface".  Except I cannot, for the life of me, determine if the email was ever sent.  I see it there -- in the inbox, in some weird conversation mode -- but it's entirely unclear if it's sent.  It doesn't appear in the Sent folder, at least, and the conversation is marked to suggest that at least one of the entries (unspecified which) is a draft.

Ok, fine, so I just go and send it with the web interface.  Problem solved, right?  Except I get no response to my email, and it *still* doesn't show up in my Sent folder.

What is so wrong with the tried-and-true "show your emails in chronological order" interface perfected over the last thousand years?

Be a good citizen and go to the Google Suggestion page, check "Switch Conversation View on or off" on this page, type in your email address, and click Submit.  The internet will thank you for it.

-david

OFX FITIDs: Not as permanent as you might think

While working on some bank import engines for Expensify I found I was getting duplicates.  Naturally I assumed I was screwing things up, but despite a full sweep through the code I couldn't find any bugs.  The only theory I could come up with was the <FITID> attribute was changing on a given transaction.  But that's impossible, right?  From the OFX spec:

"3.2.3 Financial Institution Transaction ID <FITID>

An FI (or its Service Provider) assigns an <FITID> to uniquely identify a financial transaction that can appear in an account statement. Its primary purpose is to allow a client to detect duplicate responses. Open Financial Exchange intends <FITID> for use in statement download applications, where every transaction (not just those that are client-originated or server-originated) requires a unique ID."

Yep, sounds pretty permanent to me.  Except it's not.  In fact, it's quite common for the FITID to change, at least for US Bank.  I'm guessing it changes as the state of the transaction changes -- only the last two digits appear to change.

Anyway, this is probably just a random bug with US Bank, right?  Not according to this forum.

Ugh.  What a pain.  How hard is it to just follow the rules and make a truly unique identifier?
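
For what it's worth, here's a minimal sketch of the kind of duplicate check I'm falling back to.  It assumes (per the observation above) that only the last two digits of the FITID mutate, and the field and function names are purely illustrative -- this isn't lifted from any real import engine:

// Hedged, illustrative sketch -- not Expensify's actual importer.
// Treat two OFX transactions as the same if their FITIDs match exactly,
// if they match except for the trailing two characters (the part US Bank
// seems to mutate), or -- as a last resort -- if date, amount, and payee
// all line up.
function isDuplicateTransaction(a, b) {
    if (a.fitid === b.fitid)
        return true;

    if (a.fitid.length > 2 &&
        a.fitid.length === b.fitid.length &&
        a.fitid.slice(0, -2) === b.fitid.slice(0, -2))
        return true;

    return a.dtposted === b.dtposted &&
           a.trnamt === b.trnamt &&
           a.name === b.name;
}

Obviously the date/amount/payee fallback can produce false positives (two identical coffees on the same day), so it's a heuristic, not a fix for the bank's behavior.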

-David Barrett

Christmas Wish: ServerSide JavaScript

For Expensify I'm finding that I can't avoid coding in half a dozen languages simultaneously.  At any point in time I will have active windows in C++, PHP, HTML, JavaScript, CSS... and... English.  Half a dozen.  Anyway, it's a nuisance and I'd like to consolidate.

In particular, I don't like how I generate and manipulate HTML in two places, with two different languages: PHP and JavaScript.  PHP spits out a bunch of HTML, along with a bunch of JavaScript to manipulate that HTML via the DOM.  It generally works OK, but I find I often end up writing the same code twice.  Not exactly, but enough to be annoying.

Consider the case of a dynamic page, like a web-based email reader.  Something where there's a ton of data that is transformed into HTML in some sophisticated way.  There's a range of ways to do it:

Web 1.0 way: Read everything into PHP memory, process, output into static HTML with links.
That works fine, except every time you do anything you need to ask the server to regenerate the page.  Which leads to:
Web 2.0 way: Read everything into JavaScript memory, process, output into dynamic DOM.
This lets the browser do everything locally and avoid asking the server unless it's really necessary.  But it has a downside: when the page loads it's completely blank while the JavaScript does its thing.  (Similarly, if the browser doesn't support JavaScript, it's just dead.)

As a result, what I'd like is some combination of the two, where the same code and execution engine is used on both the server *and* the client.  Then the process would go like:

1) Server reads everything into server-side JavaScript memory

2) Server generates HTML DOM according to whatever logic

3) Server serializes the entire state of the page -- both the HTML DOM as well as the JavaScript VM -- into a big HTML/JavaScript blob.

4) The browser deserializes this blob straight into the HTML DOM and JavaScript VM -- without any big "onload" handler that builds the page from scratch.

5) The exact same code that was initially used by the server to generate the page is used by the browser to manipulate that page.

Make sense?  So rather than having two sets of HTML modification code (PHP to generate it, JavaScript to manipulate it), have just one set of code that is run on both the client and server.

And to top it all off, if the browser doesn't have JavaScript, it'll still get a fully-formed HTML page.

In this model, the programmer wouldn't even think of "generating an HTML page with associated JavaScript".  Rather, you'd think of building a page up using a series of JavaScript, CSS, HTML, database, and other resources, and then just calling "document.serializeToBrowser()" or something when the page is in a state that you want to send to the user.
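
To make that concrete, here's a purely hypothetical sketch of what such a serializeToBrowser() might do.  Nothing here exists in any real framework; it just assumes a server-side JavaScript environment that already has a populated DOM and some page state lying around:

// Hypothetical -- serializeToBrowser() and pageState are made up for illustration.
// Given the server's fully-built document plus whatever JS state drove it, emit
// one blob the browser can swallow whole: real HTML for non-JS browsers, plus a
// tiny bootstrap that restores the state so client code resumes rather than rebuilds.
function serializeToBrowser(doc, state) {
    var html = doc.documentElement.outerHTML;

    // Re-inject the server's state; the "<\/script>" escape keeps the literal
    // from terminating the tag prematurely.
    var bootstrap = "<script>var pageState = " + JSON.stringify(state) + ";<\/script>";

    return "<!DOCTYPE html>\n" + html.replace("</body>", bootstrap + "</body>");
}

The same script files that built the page on the server would then be loaded by the browser, operating on pageState and the already-rendered DOM instead of regenerating them from scratch.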

Does something like this exist?  I see lots of stuff up on Wikipedia, but none of it quite fits the bill.

-david

Pirates of Amazon. And IMDB. And Last.fm...

A while back I mused about creating an index of all content and then linking to pirated copies when available.  I suggested copying IMDB, but some clever pirates came up with an even better idea: just layer their tools *overtop* IMDB.  And Amazon.  And pretty much anybody who already maintains a structured index of all content.  Brilliant!


How to execute code whenever Back button is pressed?

So I'd like to execute some JavaScript every time the page loads, regardless of whether it's from a new visitor, a click of Reload, a click of Back, or whatever.  However, I'm finding Back to be particularly tricky.  Here's the test code:

<html>
<body onload="alert('Hello world!');">
    <a href="link">link</a>
</body>
</html>
Nothing fancy there -- it should pop an alert box every time, right?  Wrong: click on the link and then click Back, and the alert doesn't show.  (At least, not in Firefox 3 on Ubuntu 8.04.)  Odd. 

Even stranger, it *does* show if you include a large (~100KB) external script:
<html>
<head>
    <script type="text/javascript" src="jquery.js"> </script>
</head>
<body onload="alert('Hello world!');">
    <a href="link">link</a>
</body>
</html>
But try it with a small (~4KB) external script and it stops working.  The mystery deepens.

The only explanation that makes sense is that there must be some minimum delay when the page loads before the Back button triggers the onload event.  Too small a delay (too small a file to load) and it doesn't work.

Any ideas how to reliably execute code on every load, including Back?  I'll use the jQuery trick for now, but I'm concerned that it won't always be reliable.



Update: Curtis figured it out: It's due to fancy Firefox caching rules. There are several criteria that go into whether the page is reloaded each time, one of which is whether there's an "unload handler". Turns out it wasn't the *size* of including jQuery, it was due to jQuery setting an unload handler. Anyway, of the many solutions, the easiest is to simply set "Cache-Control: no-store" (or "no-cache", if HTTPS). Thanks Curtis, mystery solved!
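
For reference, both workarounds are tiny.  A sketch (illustrative, not copied from any particular page):

// Option 1 (server side): send a header that keeps the page out of the cache --
//     Cache-Control: no-store        (or "no-cache" when serving over HTTPS)
//
// Option 2 (client side): register a no-op unload handler.  Its mere presence
// keeps Firefox from serving the page out of its back/forward cache, so onload
// fires again when the user clicks Back -- which is exactly what jQuery was
// doing for me by accident.
window.onunload = function() {};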

Why are future user interfaces so lame?

What is it with futuristic user interfaces being so pointless?  Whether it's g-Speak or Microsoft Surface (even the spherical one), all the applications are absurd -- after years of touch screen technology, everybody *still* shows off that stupid photo-organization application.

Seriously: when is the last time you dumped a bunch of pictures in a pile and manually sorted them?  I honestly don't know if I've done it once in my life; this isn't solving a problem I have.  Every time I see that application I think "you *still* haven't figured out anything useful to do with that thing?"

Everybody is trying so hard to invent the next-generation UI that they've forgotten a critical lesson: *this* generation of UI is really pretty good.  It's been refined through thousands of iterations of tweaks and real-world experimentation.

Take the multi-touch fad that has generally fizzled out.  Honestly: how often do you use multi-touch on your iPhone?  At the end of the day, most actions work perfectly fine with one finger: just because you *can* use two fingers doesn't mean doing so adds any value.

So I'm going to go out on a limb and predict that the user interface of tomorrow is going to look a helluva lot like the UI of today.  And that's because the UI of today looks an awful lot like the desktop of yesterday.

The key is continuity: making small, gradual steps from what you know to what you don't.  And all these fancy "next generation operating systems" keep managing to stay in the distant future because everyone else is too busy living in the now.

Which reminds me of a surprising lesson I learned when moving from Windows XP to Ubuntu (Vista was the last straw).  And that lesson was just how awesome the command line is, and how just splitting the window in VI as needed is so much more powerful than managing a bunch of stupid windows.

Indeed, the lesson was that all the "advances" in UI technology provided by generations of Windows have actually *reduced* my productivity.  But like the boiling frog, I just never noticed.

At the end of the day, most data is not amazingly beautiful 3d graphics projected in a virtual cave environment.  Most of it is really boring: rows of numbers, to do lists, log files, email.  Most of it is text.  And I don't see that changing anytime soon.

So I'm looking forward to the UI of the future.  But I bet the primary mode of interaction will still be a keyboard, and its killer feature will not be sorting piles of random photos, but sorting, processing, and generating text and numbers in amazing ways.

Personally, I'm hoping for LCARS.

-david


When did Agile become so Rigid?

I really dislike the term "agile development".  I'm fine using "agile" to describe people or teams.  But using it to describe a methodology seems to completely miss the point.  You wouldn't say a gymnast uses an "agile technique".  You'd just say the gymnast "is agile".

So articles like "When Agile Projects Go Bad" sorta confuse and grate on me.

You'd never read an article titled "When Agile Gymnasts Fall" because it makes no sense; a gymnast who falls is -- by definition -- not very agile.  Similarly, an agile project that fails due to a lack of agility is rather paradoxical.  (Though there are many other ways to fail than by lacking agility.)

But this isn't just a problem of semantics.  Rather, I think this is symptomatic of a broader problem in the agile movement: it doesn't frickin' work.  To borrow from my good friend Lao Tze:

"The agile that you know is not the true agile."

You can't learn agility by reading and following.  You become agile by doing, failing, changing, and doing again.  The most agile people I know read the fewest books on it.  Similarly, I don't know anybody who's seriously studied the subject of agile development techniques and gotten the least bit better for it.

So here's my advice: if you want to be agile, put down the book and just start making it up as you go.  If what you're doing isn't working, try something different.  If what you're doing works, try cutting out a step and see if it still works -- or even works better.  Repeat.

You're certain to make wrong steps.  You're certain to encounter failure -- indeed, failure will likely be your steady state.

But eventually you'll figure it out, and every once in a while it'll work out great.

Welcome to the world of agility.  No reading required.

- David Barrett
