Saturday, November 23, 2013

Liberty vs efficiency: The real conflict

Liberty: Being free of constraints

Efficiency: Raising the state of the world as high as possible on everyone's preference ranking (or some aggregate measure thereof)

You might have heard of Amartya Sen's Liberal paradox, which purports to show that the two necessarily conflict. Of course, as I said a while back, it does no such thing; it only shows a problem with preventing people from waiving their liberties when they find it preferable to do so.

However, there is a real sense in which those two conflict, and it becomes most apparent in discussions of taxation, and how to make it better.

The conventional economist's view is that "The ideal tax system is the one that hurts efficiency the least."

But there's another view, exemplified by the Murphy article that I linked in my last post: "The ideal tax system is the one that's easiest to opt out of."

Naturally, these really do conflict. Why? Because generally speaking, if you want to levy a tax that merely transfers purchasing power to the government without also forcing people to bear other hardships, you have to do it by taxing goods with inelastic demand, like energy. People will barely respond to such a tax by buying less of the good, and it is precisely those forgone purchases that would constitute the efficiency loss.
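A quick numerical sketch may help (the demand curves and numbers below are entirely made up for illustration): the same per-unit tax raises more revenue, and destroys fewer trades, when demand is inelastic.

```python
# Toy comparison: the same per-unit tax on an inelastic vs. an elastic good.
# Everything here is invented for illustration; demand is a simple linear curve.

def quantity_demanded(price, intercept, slope):
    """Linear demand: quantity falls by `slope` units per dollar of price."""
    return max(intercept - slope * price, 0)

def tax_outcome(base_price, tax, intercept, slope):
    q_before = quantity_demanded(base_price, intercept, slope)
    q_after = quantity_demanded(base_price + tax, intercept, slope)
    revenue = tax * q_after
    lost_trades = q_before - q_after   # purchases the tax destroyed
    return revenue, lost_trades

# Inelastic good (think energy): buyers barely respond to the price hike.
print(tax_outcome(base_price=10, tax=2, intercept=105, slope=0.5))  # high revenue, almost no lost trades
# Elastic good: buyers flee, so less revenue and many destroyed trades.
print(tax_outcome(base_price=10, tax=2, intercept=200, slope=10))
```

Both goods start out selling the same quantity; the inelastic one is the "efficient" thing to tax precisely because nobody can dodge it.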

But the harder a tax is to avoid, the harder it is to "opt-out" of!

So if you think it's good for people to be able to legally reduce government revenues by abstaining from a product at relatively little cost to themselves, then "economically efficient taxes" are no longer an unvarnished good: they come at the direct expense of the goal of making it easier for people to change their behavior in a way that routes around taxation.

This, I think, is the true conflict between efficiency and liberty, as it doesn't hinge on confusing rights and obligations.

Saturday, November 9, 2013

I explain tax interaction effects (because I think the experts can't)

So it turns out there's a serious argument (HT and text summary: Bob Murphy) that a "green tax shift" may be welfare-worsening rather than welfare-improving. (The green tax shift is where you cut taxes on labor and capital while raising them on environmental "bads" like CO2 emission.)

Huh? How can shifting taxes off of goods and onto bads be welfare worsening? It seems the argument is somewhat subtle; even Bob Murphy dismisses clarification requests in the comments, pleading that "it’s hard to point to 'what’s driving the result' except to say, 'Adding the carbon tax drove the result.'"

Well, it's actually not that hard, but the standard expositions don't make it explicit. After reading another of Murphy's articles, it finally clicked for me, although the better explanations still hid the true mechanism in unstated assumptions. Here's how I explained it in the comments (cleaned up a bit and sourced).
****
I think I have an explanation that conveys the intuition.

Insight 1: the harm of a tax is more-than-proportional to its magnitude. (This is the assumption that the writing on this topic relies on implicitly, and which I wish were made explicit here and in your article.) Mankiw gives the rule of thumb that the deadweight loss of a tax increases with the square of the tax rate. That is why you want to raise a given amount of revenue from as “broad a base” as possible -- to lower the rate each individual tax has to be.

Insight 2 (most important): Because of the above, each increase in tax above the Pigovian level is more harmful than the same increase from zero.

Insight 3: Taxes on anything ultimately trace back to their original land/labor/capital factors. So a carbon tax amounts to a tax on land, labor, and capital, divided up according to their relative supply/demand curve elasticities (slopes).

Given the above, the intuition becomes a lot clearer: a tax on carbon is like an income tax (with different levels for different kinds of income). Levied at the Pigovian rate, it merely cancels out the carbon harms. But if you have an additional (direct) income tax, you get a disproportionate harm for each (potentially) taxed dollar above the Pigovian level (compared to taxing from the first dollar) -- *that* is the tax interaction effect.

Furthermore, since the “green tax shift” tries to raise the same revenue on a smaller base (i.e. only those income sources touching carbon), the tax rates have to be much higher than they would be if they were levied on all income. This then causes major welfare-harming changes in behavior, far out of proportion to the assumed harms from carbon.
****
Problem solved, right?

Well, no; Bob insists that Insight 1 is irrelevant to the argument. But I don't see how this can be; you can only get the bad "tax interaction effects" if the tax's harms are more-than-proportional to ("superlinear in") the tax rate.

If it's merely proportional, the taxes don't "interact" at all -- raising taxes by 1 percentage point (on any kind of income) does just as much additional harm regardless of whether it's on top of an existing 6% tax or on top of nothing. But when it's more than proportional, that extra point of tax is (badly) "interacting" with whatever other taxes got it to that level. This is the key insight: having income taxes in addition to the (implicit income tax resulting from a) carbon tax means those taxes do more harm than they otherwise would.
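To put rough numbers on that, here's a back-of-the-envelope sketch using Mankiw's square rule of thumb with an arbitrary scaling constant (the constant doesn't matter; only the comparison between the two cases does):

```python
# If deadweight loss grows with the square of the tax rate, the harm from one
# extra percentage point depends on how much tax is already stacked on top.
# The constant k is arbitrary; only the ratio between the two cases matters.

def deadweight_loss(rate, k=1.0):
    return k * rate ** 2

def marginal_harm(existing_rate, extra=0.01):
    """Extra deadweight loss from adding `extra` on top of `existing_rate`."""
    return deadweight_loss(existing_rate + extra) - deadweight_loss(existing_rate)

print(marginal_harm(0.00))   # one point starting from nothing:        0.0001
print(marginal_harm(0.06))   # one point on top of a 6% income tax:  ~ 0.0013
```

The same one-point levy does about thirteen times as much damage when it lands on top of a pre-existing 6% tax as it does starting from zero; with a merely proportional harm function, the two printed numbers would be identical.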

Likewise, if the harm (deadweight loss) of a tax were less than proportional to (sublinear in) the rate, then taxes would interact in the opposite way. It would make sense to have as few distinct taxes as possible, on as small a base as possible, with as high a rate as possible -- because in that case, each additional increase in the tax rate hurts less than the previous one. (Obviously, we don't live in that world!)

I note, with some irony, that this point ultimately reduces to the reasoning behind mainstream economists' standard tax advice to "lower the rates, broaden the base", a mentality Bob actually criticized in another context...

Thursday, November 7, 2013

No politician has ever lied

Because gosh, you'd have to be an idiot to believe them in the first place, says Steve Landsburg.

Thursday, July 11, 2013

My discovery of "semantic security"

One interesting thing I forgot to mention in the previous post about homomorphic encryption: the concept of semantic security.

It was actually a major stumbling block for me. When I got to the passage that mentions the concept, the author casually remarks that "semantic security is an expected feature of public key cryptosystems", and then defines the term as follows: a system is semantically secure if, given two plaintexts and the ciphertext of one of them, an attacker cannot do better than chance in guessing which plaintext goes with that ciphertext.

That didn't make sense because I had always assumed that the defining feature of public key cryptography was that the attacker is permitted unlimited chosen-plaintext attacks, which -- I thought -- means that the attacker always gets to know what ciphertext goes with any plaintext. So how can you make it so that the attacker can -- as required for public key encryption -- produce valid ciphertexts from arbitrary plaintext, and yet still have a semantically secure cryptosystem? Couldn't the attacker just encrypt both plaintexts to figure out which one corresponds to the given ciphertext?

What I missed was that you can use a one-to-many cipher: that is, the same plaintext corresponds to many ciphertexts. What's more, it's actually trivial to convert a plain-vanilla one-to-one public key cipher into a one-to-many semantically secure version. Here's how: just before applying the encryption step, generate a random number (a "nonce") and append it to the plaintext, with the proviso that the recipient will look for it in that position and strip it off after decryption.

This way, in order for an attacker to try to guess the plaintext in the game above, it's no longer enough for them to simply encrypt both plaintexts: a random number was inserted in the process. This means that in order to find a match between a plaintext and a ciphertext, the attacker must encrypt each plaintext with every possible nonce, which requires resources that increase exponentially with the size of the nonce used. (That is, an n-bit nonce can have 2^n possible values.)
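To make that concrete, here's a toy sketch in Python. The "encryption" below is just a reversible byte-twiddle standing in for a real deterministic public-key operation, and real systems use carefully designed randomized padding schemes (e.g. RSA-OAEP) rather than literally appending a nonce -- but it shows why the guessing game becomes hopeless once randomness is mixed into each encryption:

```python
import secrets

NONCE_BITS = 128

def toy_encrypt(plaintext: bytes) -> bytes:
    """Stand-in for a deterministic public-key encryption step.
    (A reversible byte-twiddle; a real system would use RSA or similar.)"""
    return bytes(b ^ 0x5A for b in plaintext)

def toy_decrypt(ciphertext: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in ciphertext)

def encrypt_with_nonce(plaintext: bytes) -> bytes:
    """One-to-many version: append a random nonce before encrypting."""
    nonce = secrets.token_bytes(NONCE_BITS // 8)
    return toy_encrypt(plaintext + nonce)

def decrypt_with_nonce(ciphertext: bytes) -> bytes:
    """The recipient knows the nonce sits at the end and strips it off."""
    return toy_decrypt(ciphertext)[:-NONCE_BITS // 8]

# The same plaintext now encrypts to many different ciphertexts...
c1 = encrypt_with_nonce(b"attack at dawn")
c2 = encrypt_with_nonce(b"attack at dawn")
assert c1 != c2
# ...so the attacker can no longer just encrypt both candidate plaintexts and
# compare; they would have to try all 2**128 possible nonces for each one.
assert decrypt_with_nonce(c1) == b"attack at dawn"
```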

The more you know (tm).

Friday, May 24, 2013

"I added your numbers, and I have no idea what they are."

So it turns out there's a thesis arguing that polynomial-time, fully homomorphic encryption is possible. (Link is to the dumbed-down -- but still journal-published -- version that mortals like me are capable of understanding.)

It's hard to overstate the significance of this. It means that it's possible for you to give someone your data in encrypted form, and for them to execute arbitrary operations on it and give it back to you, without ever knowing what the data is. That is, they transform an input ciphertext into an output ciphertext such that when you decrypt the output, you have the answer to your query about the data, but at no point did they decrypt it or learn what was inside.

In other words: "I just calculated the sum of the numbers you gave me, but I have no idea what the sum is, nor what any of the numbers are."

If it sounds impossible, it's not because you misunderstand it, but because that kind of thing shouldn't be possible -- how can you perform arbitrary operations on data without learning something about it? Sure, maybe there are edge cases, but a rich, Turing-complete set?

It would mean that "the cloud" can ensure your privacy, while *also* doing useful operations on your data (as the author, Craig Gentry, goes to great lengths to emphasize).

As best I can tell from the paper, here's the trick, and the intuition why it's possible:

1) The computation must be non-deterministic -- i.e. many encrypted outputs correspond to the correct decrypted output. This is the key part that keeps the computation provider from learning about the data.

2) The output must be fixed size, so you have a sort of built-in restriction of "limit to the first n bytes of the answer".

3) It does require a blowup in the computational resources expended to get the answer. However, as noted above, it's only a polynomial blowup. And thanks to comparative advantage, it can still make sense to offload the computation to someone else, for much the same reason that it makes sense for surgeons to hire secretaries even when the surgeon can do every secretarial task faster. (Generally, when the provider's opportunity cost of performing the task is less than yours.)

4) Finally, to be fully homomorphic -- capable of doing every computation, not just a restricted set of additions and such -- the encrypted computation has to find a way around the buildup of "noise" in the computation, i.e. properties of the output that put it outside the range of what can be decrypted (due to exceeding the modulus of the operation needed to extract the output). And to do that, in turn, its operation set must be sufficient to perform its own decryption.
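For intuition on points 1 and 4, here is a toy sketch of my own (in the spirit of the related "FHE over the integers" scheme of van Dijk, Gentry, Halevi, and Vaikuntanathan, not the lattice-based construction in the thesis). Encryption is randomized, so many ciphertexts correspond to each bit; adding and multiplying ciphertexts XORs and ANDs the hidden bits; and the hidden noise grows with every operation until decryption breaks, which is exactly what the bootstrapping trick has to fix:

```python
import random

# Toy "somewhat homomorphic" symmetric scheme over the integers.
# Secret key: an odd integer p. A bit m encrypts to c = q*p + 2*r + m, with q
# large and random and r a small random "noise" term. Decryption is
# (c mod p) mod 2, which is only guaranteed while the noise stays below p.

P = 10001  # toy secret key; a real scheme would use an enormous odd integer

def encrypt(bit):
    q = random.randrange(1, 10**6)
    r = random.randrange(0, 20)         # small noise -> many ciphertexts per bit
    return q * P + 2 * r + bit

def decrypt(c):
    return (c % P) % 2

a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)
print(decrypt(ca + cb) == (a ^ b))      # adding ciphertexts XORs the bits
print(decrypt(ca * cb) == (a & b))      # multiplying ciphertexts ANDs the bits

# The catch: every operation grows the hidden noise (addition adds the noise
# terms, multiplication multiplies them). Once the noise passes p, decryption
# is no longer guaranteed to be correct.
c = encrypt(1)
noise = c % P                           # for a fresh ciphertext, exactly 2*r + 1
for i in range(1, 6):
    fresh = encrypt(1)
    noise *= fresh % P
    c *= fresh
    print(i, noise < P, decrypt(c))     # decrypt(c) is trustworthy only while noise < P
```

(This toy version is symmetric-key and far too weak to be secure; it's only meant to show how randomized encryption, homomorphic operations, and noise buildup fit together.)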

I'm only about halfway through the paper, but it's been really enlightening to get the intuition behind why this kind of thing can work.

Monday, April 15, 2013

Our latest work on Bitcoin

Bob Murphy and I have collaborated on an attempt to explain Bitcoin -- and its economic implications -- for the masses.

For the three of you still following this blog, check out the link.