I'm going to discuss an ethical and decision-theoretic intuition that underpins my support for intellectual property rights, and which seems to be absent, or unintuitive, among anti-IP libertarians. (See the discussion linked in yesterday's post for lots of good examples.)
But first let's consider a puzzle in decision theory. This one is known as Parfit's Hitchhiker and, as best I can tell, comes from Derek Parfit's book Reasons and Persons, though the term "hitchhiker" didn't come up in a search of the book.
It goes like this (well, my version does anyway): Assume you're lost in the desert, with nothing of value on you. You're approached by a superpowerful, superintelligent being we'll call Omega. It is willing to take you back to civilization and stabilize you -- but only if you will withdraw $5 from your bank account and give it to Omega once that's over with. (Yes, such a being might have reason to do this.) It has no enforcement mechanism if you don't pay.
But here's the catch: Omega can scan you in detail and find out if you're really intending to give it the $5 when you're safe, rather than -- I don't know -- reasoning that, "Hey, I'm already safe, I've already got what I need and all, and you know, this Omega thing is powerful enough anyway, I think I'll just keep the $5." And if it finds that you wouldn't give it the money upon reaching safety (i.e. you don't have a decision theory that outputs "pay $5 to Omega" given that you are safe), then it just won't take you back and you can die in the desert.
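(For the decision-theory-inclined, here's a minimal sketch of the setup in Python. The dollar value I attach to being rescued is purely my own illustrative assumption; nothing in the puzzle depends on the exact number.)

```python
# Toy model of Parfit's Hitchhiker. VALUE_OF_RESCUE is an arbitrary
# illustrative number; all that matters is that it dwarfs $5.
from dataclasses import dataclass

VALUE_OF_RESCUE = 1_000_000  # assumed worth of not dying in the desert
PRICE = 5                    # what Omega asks for once you're safe

@dataclass
class Agent:
    name: str
    pays_once_safe: bool  # the disposition Omega scans for

def omega_rescues(agent: Agent) -> bool:
    """Omega rescues an agent only if it predicts the agent will pay."""
    return agent.pays_once_safe

def payoff(agent: Agent) -> int:
    """Net outcome for the agent under Omega's conditional rescue."""
    if not omega_rescues(agent):
        return 0  # left in the desert; no rescue, and nothing to "keep"
    # The tempting "rescued, then keep the $5" branch is unreachable:
    # any agent that would take it was never rescued in the first place.
    return VALUE_OF_RESCUE - (PRICE if agent.pays_once_safe else 0)

for agent in (Agent("payer", True), Agent("keeper", False)):
    print(agent.name, payoff(agent))
# payer 999995   (rescued, out five dollars)
# keeper 0       (never rescued at all)
```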
At this point, a lot of you might be recoiling in horror: "What? Keep a measly five dollars when this thing saved my life? Are you ****in' nuts?" Yeah -- you're the people with the intuition I was referring to at the beginning -- the one that I have, and the anti-IP libertarians don't seem to. More about that in a minute.
Those of you who didn't recoil in horror may be thinking something like, "Whoa whoa whoa, I don't like dying. See, I would just make a contract -- or heck, even a simple promise -- that I will give Omega the $5. Since I feel honor-bound to abide by my promises, of course I would pay, and wouldn't have such diseased thoughts" as I referred to above. But I didn't make it that easy: note that Omega doesn't ask you anything and can't even receive your messages. Its decision is based entirely on what you would do, given that you know the details of the situation.
Here's the neat thing to notice: you will never find yourself in a position to be deciding whether to take that final step and give the Omega-like being $5 unless you adhere to a decision theory (or "ethic", "morals", etc.) that leads you to do things like "give Omega $5 for rescuing you, at least in those cases where it rescued you conditional on expecting you to give it that $5" even when you already know what the Omega-like being has decided, and that decision is irreversible.
(I know, I know, I'm doubling up on the italics. Bear with me here.)
Conversely, all of the beings who come out alive have a decision theory (or ethic, etc.) which regards it as an optimal action (or an "action they should do", etc.) to pay the $5. Omega's already selected for them!
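(In code, that selection effect looks something like the toy filter below -- my own illustration, with an arbitrary population size and mix of dispositions.)

```python
# Toy illustration of Omega's selection effect: every agent that survives
# to face the "pay or keep?" question is, by construction, a payer.
import random

random.seed(0)

# An agent here is just its disposition: True means "would pay once safe".
population = [random.random() < 0.5 for _ in range(1000)]

# Omega rescues exactly those agents it predicts will pay.
survivors = [pays for pays in population if pays]

# The "rescued but ungrateful" agent simply doesn't exist in this world.
assert all(survivors)
print(f"{len(survivors)} of {len(population)} agents rescued; all of them pay.")
```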
Now at this point, those of you who don't have the recoiling intuition I referred to, or who are still worried I'll derive implications from it that you don't like, may insist that this is a contrived scenario with no application to the real world -- you can't make your decisions based on what capricious, weird, superpowerful agents will do, so why change your decision theory on that reasoning?
And there is something to that belief: You don't want to become a "person who always jumps off the nearest cliff" just because there's some rare instance where it's a good idea.
But that's not what's going on here, is it? Omega makes its decision based upon what you would do, irrespective of what decision process led you to do it. So for purposes of this scenario, it simply doesn't matter whether you decide to pay that $5 because you:
- feel honor-bound to do so;
- feel so grateful to Omega that you think it deserves what it wanted from you;
- believe you would be punished with eternal hellfire if you didn't, and dislike hellfire;
- like to transfer money to Omega-like beings, just for the heck of it;
- or for any other reason.
So, then, is it normal for the world to decide how it treats you based on (a somewhat reliable assessment of) "what you would do"? Yes, it is, once you realize that we already have a term for "what you would do": it's called your "character" or "disposition" (or "decision theory" or "generating function").
Do people typically treat you differently based on estimations of your character? If you know where they don't, please let me know, so I can go there and let loose my sarcasm with impunity.
So, to wrap it up, what does Parfit's Hitchhiker have to do with intellectual property? Well:
- Omega represents the people who are deciding whether to produce difficult, satisfying intellectual works, conditional on whether we will respect certain exclusivity rights that have historically been promised them.
- The decision to rescue us is the decision to produce those intellectual works.
- The decision to pay the $5 represents the decision to continue to respect that exclusivity once those works are produced, "even though" they're "not scarce anymore" and we could choose otherwise.
The lesson: if you don't believe that the Omegas in your life "deserve", in an important sense, to be paid, you won't find yourself "rescued". We are where we are today because of our beliefs about what "hitchhikers" should do, and we miss out on rescues whenever we decide to become ungrateful hitchhikers. (Edit: that should probably be phrased as "... whenever we decide that it's right for hitchhikers to be ungrateful.")
(Note: this post was heavily influenced by Good and Real, Chapter 7, and by this article on Newcomb's problem.)