Saturday, February 27, 2016

Some of my geeky tech jokes -- with explanations!

I know the line: explaining a joke is like dissecting a frog; you understand it better, but it dies. Still, not everyone will get these, and I figure I might as well have a place where you at least get a chance. So here are some of my own creations, explained.



Girl, you make me feel like a fraudulent prover in a stochastic interactive zero-knowledge proof protocol ... because I really wish I had access to your random private bits!

Explanation: In a stochastic zero-knowledge proof protocol, there is a prover and a verifier, where the former wants to convince the latter of something. But for the proof to work, the verifier must give the prover unpredictable challenges. Think of it like a quiz in school -- it's not much of a quiz if you know the exact questions that will be on it.

The information to predict the challenges is known as the verifier's random private bits. Those with a legit proof don't need this, but a fraudulent prover does. Thus, a fraudulent prover in a stochastic interactive zero-knowledge proof protocol wants access to the verifier's "random private bits".
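The punchline can even be simulated. Below is a toy model (entirely my own illustration, not any real protocol): an honest prover can answer whatever challenge comes, while a fraudulent one can only prepare for a guessed challenge -- unless it can peek at the verifier's private random bits.

```python
import random

def run_protocol(knows_secret, peeks_at_verifier_bits, rounds=20):
    """Simulate a toy interactive proof.

    Each round, the verifier flips a private random bit (the challenge).
    An honest prover (knows_secret=True) can answer either challenge.
    A fraudulent prover can only prepare an answer for one guessed
    challenge -- unless it can read the verifier's private bits early.
    """
    for _ in range(rounds):
        challenge = random.getrandbits(1)        # verifier's private random bit
        if knows_secret:
            prepared_for = challenge             # can handle anything
        elif peeks_at_verifier_bits:
            prepared_for = challenge             # cheats by reading the bit
        else:
            prepared_for = random.getrandbits(1) # must guess blindly
        if prepared_for != challenge:
            return False                         # caught!
    return True
```

A blind cheater survives all 20 rounds with probability 2^-20; a cheater with the verifier's random bits passes every time, which is exactly why the fraudulent prover in the joke wants them so badly.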



A historian, a geologist, and a cryptographer are searching for buried treasure. The historian brings expertise on practices used by treasure hiders, the geologist brings expertise on ideal digging places, and the cryptographer brings expertise on hidden messages.

Shortly after they start working together, the cryptographer announces, "I've found it!!"

The others are delighted: 'Where is it?'

The cryptographer says, "It's underground."

'Okay, but where underground?'

"It's somewhere underground!"

'But where specifically?'

"I don't know, but I know it's underground!"

'Slow down there. If all you know is that it's underground, then in what sense did you "find" anything? We're scarcely better off than when we started!'

"Give me a break! I just gave you an efficiently-computable distinguishing attack that separates the location of the treasure from the output of a random oracle. What more could you want?"

Explanation: In cryptography, an encryption scheme is considered broken if an attacker can find some pattern in the encrypted message -- i.e., they can identify telltale signs that it wasn't generated by an idealized source of perfect randomness, a "random oracle". Such a flaw is called a "distinguishing attack". So in the cryptography world, they don't care whether the attack actually allows you to decrypt the message; they stop as soon as they find non-randomness in the encrypted data. Applied to a treasure hunt, this means they would give up as soon as they conclude that the treasure location is non-random, which the cryptographer here thinks s/he's done simply by concluding that it's "underground".
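To make that concrete, here's a toy sketch (the "cipher" and its flaw are invented for illustration): a cipher that leaks exactly one bit -- its own "it's underground" -- and a distinguisher that detects the leak statistically without decrypting anything.

```python
import os

def leaky_cipher(plaintext: bytes) -> bytes:
    """Toy cipher: XORs with fresh random bytes, but has a flaw -- the low
    bit of the first output byte always equals the plaintext's low bit."""
    key = os.urandom(len(plaintext))
    ct = bytearray(p ^ k for p, k in zip(plaintext, key))
    ct[0] = (ct[0] & 0xFE) | (plaintext[0] & 1)   # the one-bit leak
    return bytes(ct)

def distinguisher(samples) -> bool:
    """Given many ciphertexts of the same plaintext, report True if the
    first bit is biased -- i.e., the output is distinguishable from a
    random oracle's (a roughly 4-sigma test)."""
    n = len(samples)
    ones = sum(s[0] & 1 for s in samples)
    return abs(ones - n / 2) > 2 * n ** 0.5
```

Note that a True result tells you only that the ciphertexts are non-random -- it decrypts nothing, which is precisely the cryptographer's "but I know it's underground!"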



So, 16-year-old Johnnie walked into an Amazon Web Services-run bar...

"Welcome," said the bartender. "What are you drinking?"

Johnnie replied, 'What've you got?'

"Well, we have a selection of wines and the beers you see right here on tap. But if you prefer, we also have club soda and some juices."

Johnnie thought, Wait a second. Why is he telling me about the wines and beers? Does he even realize ... ?

'Okay, I'll take the Guinness.'

"Bottle or draft?"

'Draft.'

"Alright, and how will you be paying?"

Johnnie only had large bills from his summer job and gave the bartender a C-note.

"Sorry, but I gotta check to make sure this is real." The bartender took out a pen and marked it, then counted out the change. Johnnie reached for the beer.

"Hold on a second! Make sure to use a coaster!" The bartender slipped one under the glass. "Okay, now enjoy!"

Johnnie lifted up the glass to drink. Before he was able to sip, the bartender swatted it out of his hand.

"WHAT ARE YOU THINKING!?! Don't you know 16-year-olds can't drink!"

Explanation: On the AWS site, they will gladly let you click the "Launch server" button and go through numerous screens and last-minute checks to configure it, and only at the very last stage does it say, "oops, turns out you don't have permission to do that" -- so it's like a bartender who takes you through an entire transaction, even verifying irrelevant things (like whether the money is real), while knowing the whole time he can't sell to you.



How is a Mongo replica set like an Iowa voter?

In primary elections, they only vote for candidates they think are electable!

Explanation: Databases can have "replica sets", where multiple servers try to hold the same data; secondary servers depend on an agreed-upon "primary" to be the "real" source of data. Sometimes the primary server goes down, so the members have to decide on a new primary, in what's known as a "primary election". But there are restrictions on whom they will vote for -- if a member has, e.g., reason to believe that a candidate can't be seen by the other members, it will regard that candidate as unelectable. So you can get funny messages like "server42 won't vote for server45 in primary election because it doesn't think it's electable".
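As a sketch of the idea (a toy model of my own, not MongoDB's actual election algorithm -- the fields and rules are simplified assumptions), a member's vote might be gated like this:

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    reachable: bool   # can the voting node currently see this member?
    priority: int     # configured priority; 0 means "never primary"
    optime: int       # how far this member's replication has progressed

def would_vote_for(candidate: Member, replica_set: list) -> bool:
    """A voter only backs candidates it considers electable: reachable,
    allowed to be primary, and at least as up to date as any reachable peer."""
    freshest = max(m.optime for m in replica_set if m.reachable)
    return (candidate.reachable
            and candidate.priority > 0
            and candidate.optime >= freshest)
```

So a server42 that can't see server45 simply declines to vote for it -- it doesn't think server45 is electable.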

Saturday, February 20, 2016

More funny and insightful Slashdot posts I remember

If you liked the last post on this, here are some more you might like. This time, I think they're more insightful than funny, but oftentimes, insight is funny!

Also, I thought I'd point out the value of forums: in all of these exchanges, it seems to take three people to get the "aha" moment, not just one or two.



Topic: some new superstrong carbon nanotube material is discovered.

A: So it seems like they're planning to use this stuff for armor. But what about weapons? It seems that anything strong enough to make good armor would also have value as a weapon.

B: That doesn't follow at all! They're completely different use-cases. For example, leather is known to make good armor, but you never see a leather sword!

C: Sure you do -- it's called a "whip" and we dig them up all the time!




Topic: Some dating site is blocked from emailing University of Texas students (at their school email) because of too much spam.

A: Well, their defense is that they're complying with the CAN-SPAM Act, in that they have an unsubscribe link, use their real name, etc., so UT can't just block them wholesale.

B: What? That doesn't matter; that's just the *minimum* requirement for sending such emails. Obviously, any domain owner can impose whatever extra restrictions they want!

C: Yeah, it's like saying, "I have a valid driver's license. I am the legal owner of this vehicle. I have paid the appropriate taxes and registered it. I hold the required liability insurance, and the vehicle is in good working order. I have complied with all applicable traffic laws and operated it in a safe manner. Therefore, I have the right to park on your lawn."



I actually use C's last example when explaining to people the difference between authentication (proving who you are) and authorization (what that identity is allowed to do).

If the minimum wage prohibitions are so easily circumvented ...

The recent Talia Jane story just made me realize we have a possible inconsistency in policy. To get you up to speed: Jane took a low-wage job in the San Francisco Bay Area, hoping to work her way up to her passion of being a social media manager for a major company. But because of rental prices, she paid 85% of her post-tax pay just for rent (!), complained about her employer paying so little, and then was fired.

But as for the inconsistency:

Illegal: paying someone below $X/hour.

Legal: paying someone ($X + $Y)/hour (Y positive) to work in a place where their discretionary income would place them in extreme poverty (e.g. 85% of post-tax on rent).

And yes, that's just an (arguably trivial) corollary of "minimum wage (and tax brackets for that matter) is not automatically cost-of-living-adjusted". But if the goal is to stop people from being taken advantage of with low job offers that hold them in poverty, that seems like a pretty big loophole.

And it's not just that -- let's say someone moves farther out to be able to afford to live there. Then they're traveling an extra N hours just to make each shift, which should rightly count against their effective hourly wage.

So, food for thought: what are we really trying to optimize for here? What would the law have to look like to not just avoid these loopholes, but "carve reality at the joints" such that it's fundamentally impossible to scalably circumvent such a law?

If you keep raising the minimum wage for a locality, and people keep commuting greater distances to get that income, what have you accomplished?

Thursday, January 7, 2016

Funny Slashdot exchanges, before they're lost to time

In the time that I was a regular reader of Slashdot, I saw a few exchanges that stayed in my mind. I later went back to find them, but was never able to. So that they're not lost to time, I figured I'd post all the ones I remember. What follows is from memory, and prettied up a bit. (Not trying to plagiarize, if you can find the original post for any of these, let me know.)

Enjoy.



[Story: Armadillo Aerospace has a failed rocket launch.]

A: Well, I think we can close the books on Carmack's little project.
B: Come on, now. Private space travel is still in its infancy. There are growing pains. Not everything works the first time. But what's important, is that we're learning from these events. Armadillo is learning. They'll adapt. And the next voyage will be better and safer!
C: You mean, even safer than a big orange fireball?



A: [long rant] So that's the problem with this ban on incandescent light bulbs.
B: Whoa whoa whoa, slow down. There is no "ban" on incandescent light bulbs. It's just that the government passed new efficiency standards, and incandescents don't meet them.
C: Oh, that's clever! I should try that some time: "See, I'm not breaking up with you! I'm just raising my standards to the point where you no longer qualify."



[Story: a pedophile was caught because he took pictures of his acts and tried to blur out the victims' faces, but police analysts were able to unblur them.]

A: Hah! What an amateur! Everyone knows you have to do a true Gaussian blur to destroy the information content of the picture!
B: Yeah, or entropize it by blacking out the whole face.
C: Right. Or, you know, you could just ... not molest children.

(IIRC, C was heavily voted down and criticized for assuming guilt.)



[Story: police used "big data" analytics techniques and discovered that most robberies occur on paydays near check-cashing places, which allowed them to ramp up arrests.]

A: I don't know, this seems kind of big-brothery...
B: Not at all! This is the kind of police work we should applaud! Working only off publicly available, non-private data, they found real, actionable correlations. It wasn't just some bigoted cop working off his gut: "Oh, this must be where the thugs go ..." No, they based it on real data. What's more, it let them avoid the trap of guessing the wrong paydays, which can actually vary! Some people get paid weekly, some biweekly, some of the 1st and 15th. For example, I get paid on the 7th and 21st.
C: So, uh ... where do you cash your checks, by chance?

Wednesday, December 30, 2015

What every programmer *really* should know about ...

So there's a common kind of article that comes up a lot, and annoys me. It's the "What every programmer should know" or "needs to know" about this or that. About time, or names, or memory, or solid state drives or floating point arithmetic, or file systems, or databases.

Here's a good sampling on Hacker News of the kinds of things someone out there is insisting you absolutely have to know before you can knock out "Hello, World", or otherwise meet some higher standard for being a "real programmer".

I hate these articles.

For any one of them, you can find that about 99% of working programmers don't already know something from the article, and yet they somehow manage to be productive enough that someone is paying them to produce a product of value. To me, these articles come off as, "I want to be important, so I want to make it seem like my esoteric knowledge is much more important than it really is." If taken seriously, they would heavily skew your priorities.

So here's my standard for when you're justified in saying "what every programmer needs to know about X":

Think about the tenacious, eager kid from your neighborhood. He picks up on things really quickly. Once he learned to program, he was showing off all kinds of impressive projects. But he also makes a lot of rookie mistakes that don't necessarily affect the output but would be a nightmare if he tried to scale up the app or extend its functionality. Things you, in your experience, with your hard-earned theoretical knowledge, would never do. When you explain it to him, he understands what you mean quickly enough, and (just barely) patches that up in future versions.

What are those shortcomings? What are those insights, that this kid is lacking? What of this kid's mistakes would you want to give advice about to head off as soon as possible? That's what every programmer needs to know.

The rest? Well, it can wait until your problem needs it.

Sunday, December 6, 2015

The Scylla-Charybdis Heuristic: if more X is good, why not infinite X?

(Charybdis = care-ib-dis)

A Scott Alexander post really resonated with me when he talked about one "development milestone" (#4): the ability to understand and speak in terms of tradeoffs, rather than stubbornly insist that there are no downsides to your preferred course of action. Such "development milestones" indicate a certain maturity in one's thought process, and greatly change how you discuss ideas.

When a Hacker News thread discussed Alexander's post, I remembered that I had since started checking for this milestone whenever I engaged with someone's advocacy. I named my spot-check the "Scylla-Charybdis Heuristic", from the metaphor of having to steer between two dangers, where veering away from one brings you closer to the other. There are several ways to phrase the core idea (beyond the one in the title):

Any model that implies X is too low should also be capable of detecting when X is too high.

Or, from the metaphor,

Don't steer further from Scylla unless you know where -- and how bad -- Charybdis is.

It is, in my opinion, a remarkably powerful challenge to whether you are thinking about an issue correctly: are you modeling the downsides of this course of action? Would you be capable of noticing them? Does your worldview have a method for weighing advantages against downsides? (Note: that's not just utilitarian cost/benefit analysis ups and downs, but relative moral weight of doing one bad thing vs another.) And it neatly extends to any issue:

- If raising the minimum wage to $15/hour is a good idea, why not $100?

- If lifting restrictions on immigration is good, why not allow the entire Chinese army to cross the border?

- If there's nothing wrong with ever-increased punishments for convicts, why not the death penalty for everything?

One indicator that you have not reached the "tradeoff milestone" is that you will focus on the absurdity of the counter-proposal, or how you didn't advocate for it: "Hey, no one's advocating that." "That's not what we're talking about." "Well, that just seems so extreme." (Extra penalty points for calling such a challenge a "straw man".)

On the other hand, if you have reached this milestone, then your response will look more like, "Well, any time you increase X, you also end up increasing Y. That Y has the effect/implication of Z. With enough Z, the supposed benefits of X disappear. I advocate that X be moved to 3.6 because it's enough to help with Q, but not so much that it forces the effects of Z." (I emphasize again that this method does not assume a material, utilitarian, "tally up the costs" approach; all of those "effects" can include "violates moral code" type effects that don't directly correspond to a material cost.)

I've been surprised to catch some very intelligent, respected speakers failing this check on their core arguments.

What about you? Do you make it a habit to identify the tradeoffs of the decisions you make? Do you recognize the costs and downsides of the policies you advocate? Do you have a mechanism for weighing the ups and downs?

Wednesday, December 10, 2014

Bitcoin mining pools attacking each other for profit?

So after posting our new Bitcoin book, my buddy Tim Swanson alerted me to the problem of Bitcoin mining pools attacking each other.

Reminds me of the recent Tax Interaction Effect post, where I had to unearth the core of a counterintuitive result: in this case, the claim that not redeeming some of your solutions can increase your return.

I don't think I'm entirely there, but I at least have an analogy to understand the mechanics of the attack.

Bitcoin mining model: As you might know, mining is like buying a bunch of (positive-sum) lottery tickets. A fixed reward is given out every hour, then divided equally among the winning tickets. Some people join into pools, where they buy tickets with the proviso that *if* it's a winning ticket, they share the winnings equally among all the tickets in their pool.

The attack: You use some of your money to buy tickets for someone else's pool (call it the "attacked pool"), but hide and destroy the winning tickets for that pool.

The effect: There are fewer total wins per period. Each (non-destroyed) winning ticket gets a larger fraction of the hourly reward. The attacked pool gets a smaller fraction of the reward.

My response/confusion: This increases the return to all winning tickets, not just those of the attacking pool, so the attacking pool effectively subsidizes all the others, and dilutes the value of its own tickets across the set of all players.

But maybe I'm missing something here.
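Working through the ticket analogy numerically hints at what might be missing: the attacker also collects a member's share of the attacked pool's remaining (honest) winnings. Here's a sketch of the expected payoffs under the post's model -- the power shares and the proportional-payout rule are my own simplifying assumptions:

```python
def attacker_revenue(a, p, x, reward=1.0):
    """Expected per-period revenue for an attacker with hash share `a` who
    diverts fraction `x` of it into a victim pool of share `p`, destroying
    every winning ticket it finds there.

    - Destroyed tickets shrink the set of winners, so each surviving
      winning ticket earns reward / (1 - a*x).
    - The victim pool still pays the infiltrator a per-ticket cut of the
      pool's own (non-destroyed) winnings.
    """
    surviving = 1.0 - a * x                       # total non-destroyed winners
    solo = a * (1 - x) / surviving * reward       # attacker's honest mining
    pool_income = p / surviving * reward          # victim pool's winnings
    infiltrator_cut = (a * x) / (p + a * x)       # attacker's share of pool payout
    return solo + infiltrator_cut * pool_income
```

With, say, a 30% attacker and a 30% victim pool, diverting 10% of the attacker's power beats honest mining's 0.30, while diverting most of it backfires -- consistent with the worry that the attack subsidizes everyone, yet it can still pay at small scale.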

Tuesday, December 9, 2014

Our new Bitcoin eBook is up!

Phew, been a while, eh? Well, Bob Murphy and I have a new free eBook up about the economics and mechanics of Bitcoin! Check the site for it, or, if you're too lazy, just go straight to the book itself.

Sunday, March 16, 2014

Tax interaction effects, and the libertarian rejection of user fees

Phew! Been a while, hasn't it?

I want to come back to the tax interaction effect (TIE) issue from previous posts, and go over what I think has been bothering me about the TIE-based argument against the carbon tax shift.

So, a high-speed review of why a carbon tax shift (CTS) is inefficient. The CTS, remember, involves a revenue-neutral reduction of taxes on capital (including land) and labor, replaced by a tax on carbon emissions -- specifically, those fuels that, when used, release carbon dioxide, in proportion to how much CO2 they release per unit.

Review of the argument


And why could it be inefficient? Well, the harm of a tax increases faster than its rate. To have a revenue-neutral CTS, you have to "focus" the tax -- i.e. raise the same revenue from a smaller class of goods. This necessarily means a higher tax rate on the "focused" goods, and therefore higher induced inefficiencies (compared to the broader tax). When you further note that these taxes will, in effect, "stack on" to the existing labor and capital taxes, the inefficiencies are even higher -- that's the TIE -- and could even swamp the environmental benefit from the emissions reduction.

But hold on. Those very same steps are a case against any correspondence between "who uses" and "who pays", whether or not the payment is a tax! That's because you can always point out how "concentrating costs" leads to disproportionate inefficiencies, even and especially for textbook "private goods".

That is, you could likewise say, "if people have to -- gasp! -- pay for their own cell phones, at $300 each, then that scares away all the people who can't pay $300 (after paying labor taxes, remember!), so you get an efficiency loss there. Plus, anyone who can steal the phone has a $300 incentive too, so people invest in ways to steal them, and you have to pay for countermeasures. Those go up quickly with the price of the good.

"Therefore, the government should just tax everyone to cover the cost, and then hand out the cell phones for free."

Wait, that doesn't sound right ...


What's wrong with that argument? Well, a lot. So much that you probably already know the answer. It's for the very same reasons that many advocate user fees for any good that's excludable. Generally, whoever benefits should be the one to pay. ("Cuius lubido, eius sumptum." -- "Whose desire, his expense.")

As with those reasons in favor of user fees, you can make the exact same argument regarding the purported inefficiency of a CTS:

"Yes, you get inefficiencies every time you concentrate costs like that. And yes, they disproportionately stack with whatever taxes you already had. But you need the fee structure to work that way in order to align incentives. The one who uses the scarce resource -- whether a cell phone, or atmospheric dumping capacity -- should be the one to pay for it, as this leads them to economize on the use of that resource, and if possible, route around it. That remains doubly so when exempting them from the expense would lead to further penalization of every other class of socially-useful activity."

And that, I think, goes to the core of my original balking at the CTS/TIE argument.

Saturday, November 23, 2013

Liberty vs efficiency: The real conflict

Liberty: Being free of constraints

Efficiency: Raising the state of the world as high as possible on everyone's preference ranking (or some aggregate measure thereof)

You might have heard of Amartya Sen's Liberal paradox, which purports to show that the two necessarily conflict. Of course, as I said a while back, it does no such thing; it only shows a problem with preventing people from waiving their liberties when they find it preferable to do so.

However, there is a real sense in which those two conflict, and it becomes most apparent in discussions of taxation, and how to make it better.

The conventional economist's view is that "The ideal tax system is the one that hurts efficiency the least."

But there's another view, exemplified by the Murphy article that I linked in my last post: "The ideal tax system is the one that's easiest to opt out of."

Naturally, these really do conflict. Why? Because generally speaking, if you want to levy a tax that merely transfers purchasing power to the government without also forcing people to bear other hardships, you have to do it by taxing goods with inelastic demand, like energy, as people will not respond to the tax by buying less of the good, which would indicate a reduction in efficiency.

But the harder a tax is to avoid, the harder it is to "opt-out" of!

So if you think it's good for people to be able to legally reduce government revenues by abstaining from a product at relatively little cost to themselves, then "economically efficient taxes" are no longer an unvarnished good, as they come at the direct expense of the goal of making it easier for people to change their behavior in a way that routes around taxation.

This, I think, is the true conflict between efficiency and liberty, as it doesn't hinge on confusing rights and obligations.

Saturday, November 9, 2013

I explain tax interaction effects (because I think the experts can't)

So it turns out there's a serious argument (HT and text summary: Bob Murphy) that a "green tax shift" may be welfare-worsening rather than welfare-improving. (The green tax shift is where you cut taxes on labor and capital while raising them on environmental "bads" like CO2 emission.)

Huh? How can a tax shift off of bads and onto goods be welfare worsening? It seems the argument is somewhat subtle; even Bob Murphy dismisses clarification requests in the comments, pleading that "it’s hard to point to 'what’s driving the result' except to say, 'Adding the carbon tax drove the result.'"

Well, it's actually not that hard, but the standard expositions don't make it explicit. After reading another of Murphy's articles, it finally clicked for me, although the better explanations still hid the true mechanism in unstated assumptions. Here's how I explained it in the comments (cleaned up a bit and sourced).
****
I think I have an explanation that conveys the intuition.

Insight 1: the harm of a tax is more-than-proportional to its magnitude. (This is the assumption that the writing on this topic seems to take for granted, and which I wish were made explicit here and in your article.) Mankiw gives the rule of thumb that the deadweight loss of a tax increases with the square of the tax rate. That's why you want to raise a given amount of revenue from as "broad a base" as possible -- to lower the rate each tax has to be.

Insight 2 (most important): Because of the above, each increase in tax above the Pigovian level is more harmful than the same increase from zero.

Insight 3: Taxes on anything chase back to their original land/labor/capital factors. So a carbon tax amounts to a tax on land, labor, and capital, divided up per their relative supply/demand curve elasticities (slopes).

Given the above, the intuition becomes a lot clearer: a tax on carbon is like an income tax (with different levels for different kinds of income). Levied at the Pigovian rate, it merely cancels out the carbon harms. But if you have an additional (direct) income tax, you get a disproportionate harm for each (potentially) taxed dollar above the Pigovian level (compare to taxing from the first dollar) — *that* is the tax interaction effect.

Furthermore, since the “green tax trade” tries to raise the same revenue on a smaller base (i.e. only those income sources touching carbon), the tax rates have to be much higher than they would be if they were on all income. This then causes major welfare-harming changes in behavior, far out of proportion to the assumed harms from carbon.
****
Problem solved, right?

Well, no; Bob insists that Insight 1 is irrelevant to the argument. But I don't see how this can be; you can only get the bad "tax interaction effects" if the tax's harms are more-than-proportional to ("superlinear in") the tax rate.

If it's merely proportional, the taxes don't "interact" at all -- raising taxes by 1 percentage point (on any kind of income) does just as much additional harm, regardless of whether it's on top of a 6% existing tax, or a zero. But when it's more than proportional, then that extra point of tax is (badly) "interacting" with whatever other taxes got it to that level. This is the key insight: that having income taxes in addition to the (implicit income tax resulting from a) carbon tax means those taxes are doing more harm than they otherwise would.

Likewise, if the harm (deadweight loss) of a tax were less than proportional to (sublinear in) the rate, then they would interact in the opposite way. It would make sense to have as few distinct taxes as possible, on as small a base as possible, with as high a rate as possible -- because in that case, each additional increase in the tax rate hurts less than the previous one. (Obviously, we don't live in that world!)
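The square rule makes the interaction easy to check with a few lines of arithmetic (the rates and base sizes below are arbitrary illustrations):

```python
def deadweight_loss(rate: float) -> float:
    """Mankiw's rule of thumb: harm grows with the square of the tax rate."""
    return rate ** 2

# The marginal harm of one extra percentage point depends on what's
# already being taxed -- this is the "interaction":
from_zero = deadweight_loss(0.01) - deadweight_loss(0.00)    # on untaxed income
on_top_of_6 = deadweight_loss(0.07) - deadweight_loss(0.06)  # stacked on a 6% tax

# Raising the same revenue (2 units) from a broad vs. a narrow base:
broad_base_harm = 100 * deadweight_loss(0.02)   # 2% on a base of 100
narrow_base_harm = 20 * deadweight_loss(0.10)   # 10% on a base of 20
```

The stacked point does 13x the harm of the first point, and the narrow base does 5x the harm of the broad one -- the whole TIE argument in miniature.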

I note, with some irony, that this point ultimately reduces to the reasoning behind standard mainstream economist's tax advice to "lower the rates, broaden the base", a mentality Bob actually criticized in another context...

Thursday, November 7, 2013

No politician has ever lied

Because gosh, you'd have to be an idiot to believe them in the first place, says Steve Landsburg.

Thursday, July 11, 2013

My discovery of "semantic security"

One interesting thing I forgot to mention in the previous post about homomorphic encryption: the concept of semantic security.

It was actually a major stumbling block for me. When I got to the passage that mentions the concept, the author casually remarks that "semantic security is an expected feature of public key cryptosystems", and then defines the term as follows: a system is semantically secure if, given two plaintexts and the ciphertext of one of them, an attacker cannot do better than chance in guessing which plaintext goes with that ciphertext.

That didn't make sense because I had always assumed that the defining feature of public key cryptography was that the attacker is permitted unlimited chosen-plaintext attacks, which -- I thought -- means that the attacker always gets to know what ciphertext goes with any plaintext. So how can you make it so that the attacker can -- as required for public key encryption -- produce valid ciphertexts from arbitrary plaintext, and yet still have a semantically secure cryptosystem? Couldn't the attacker just encrypt both plaintexts to figure out which one corresponds to the given ciphertext?

What I missed was that you can use a one-to-many cipher: that is, the same plaintext corresponds to many ciphertexts. What's more, it's actually trivial to convert a plain-vanilla one-to-one public key cipher into a one-to-many semantically secure version. Here's how: just before applying the encryption step, generate a random number (a "nonce") and append it to the plaintext, with the proviso that the recipient will look for it in that position and strip it off after decryption.

This way, in order for an attacker to try to guess the plaintext in the game above, it's no longer enough for them to simply encrypt both plaintexts: a random number was inserted in the process. This means that in order to find a match between a plaintext and a ciphertext, the attacker must encrypt each plaintext with every possible nonce, which requires resources that increase exponentially with the size of the nonce used. (That is, an n-bit nonce can have 2^n possible values.)
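Here's a minimal sketch of that conversion. The "cipher" below is just a toy byte permutation standing in for a real public-key operation -- every name is illustrative, not a real cryptosystem:

```python
import secrets

INV7 = 183  # multiplicative inverse: 7 * 183 ≡ 1 (mod 256)

def det_encrypt(plaintext: bytes) -> bytes:
    """Deterministic 'textbook' cipher (a stand-in for e.g. textbook RSA)."""
    return bytes((7 * b + 3) % 256 for b in plaintext)

def sem_encrypt(plaintext: bytes, nonce_len: int = 16) -> bytes:
    """One-to-many version: append a random nonce before encrypting, so one
    plaintext maps to 2**(8*nonce_len) possible ciphertexts."""
    return det_encrypt(plaintext + secrets.token_bytes(nonce_len))

def sem_decrypt(ciphertext: bytes, nonce_len: int = 16) -> bytes:
    padded = bytes((INV7 * (b - 3)) % 256 for b in ciphertext)
    return padded[:-nonce_len]   # strip the nonce from the agreed position
```

The deterministic cipher loses the guessing game instantly (just encrypt both candidate plaintexts and compare), while the nonce version makes every encryption of the same message look different.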

The more you know (tm).

Friday, May 24, 2013

"I added your numbers, and I have no idea what they are."

So it turns out there's a thesis arguing that polynomial-time, fully homomorphic encryption is possible. (Link is to the dumbed-down -- but still journal-published -- version that mortals like me are capable of understanding.)

It's hard to overstate the significance of this. It means that it's possible for you to give someone your data in encrypted form, and for them to execute arbitrary operations on it and give it back to you, without ever knowing what the data is. That is, they transform an input ciphertext into an output ciphertext such that when you decrypt the output, you have the answer to your query about the data, but at no point did they decrypt it or learn what was inside.

In other words: "I just calculated the sum of the numbers you gave me, but I have no idea what the sum is, nor what any of the numbers are."

If it sounds impossible, it's not because you misunderstand it, but because that kind of thing shouldn't be possible -- how can you perform arbitrary operations on data without learning something about it? Sure, maybe there are edge cases, but a rich, Turing-complete set?

It would mean that "the cloud" can ensure your privacy, while *also* doing useful operations on your data (as the author, Craig Gentry, goes at great length to emphasize).

As best I can tell from the paper, here's the trick, and the intuition why it's possible:

1) The computation must be non-deterministic -- i.e. many encrypted outputs correspond to the correct decrypted output. This is the key part that keeps the computation provider from learning about the data.

2) The output must be fixed size, so you have a sort of built-in restriction of "limit to the first n bytes of the answer".

3) It does require a blowup in the computational resources expended to get the answer. However, as noted above, it's only a polynomial blowup. And thanks to comparative advantage, it can still make sense to offload the computation to someone else, for much the same reason that it makes sense for surgeons to hire secretaries even when the surgeon can do every secretarial task faster. (Generally, when the provider's opportunity cost of performing the task is less than yours.)

4) Finally, to be fully homomorphic -- capable of doing every computation, not just a restricted set of additions and such -- the encrypted computation has to find a way around the buildup of "noise" in the computation, i.e. properties of the output that put it outside the range of what can be decrypted (due to exceeding the modulus of the operation needed to extract the output). And to do that, in turn, its operation set must be sufficient to perform its own decryption.
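To make points 1 and 4 concrete, here's a toy sketch in Python, in the spirit of the integers-based scheme from the van Dijk-Gentry-Halevi-Vaikuntanathan follow-up work. To be clear: this is my own illustrative reconstruction with tiny, utterly insecure parameters, not the construction from the paper. It encrypts single bits, and addition/multiplication of ciphertexts act as XOR/AND on the hidden bits:

```python
import random

# Toy bit-level homomorphic scheme over the integers (DGHV-flavored).
# Parameters are tiny and totally insecure -- illustration only.

def keygen():
    # Secret key: a random odd integer p.
    return random.randrange(10**5, 10**6) | 1

def encrypt(p, bit):
    # Ciphertext = bit + small even noise + random multiple of p.
    # The random r is why many ciphertexts encode the same bit (point 1).
    q = random.randrange(10**4, 10**5)
    r = random.randrange(1, 10)
    return bit + 2 * r + p * q

def decrypt(p, c):
    # Reduce mod p to strip the q*p term, then mod 2 to strip the noise.
    # Only works while the accumulated noise stays below p/2 (point 4).
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)

# Adding ciphertexts XORs the plaintext bits; multiplying ANDs them.
assert decrypt(p, c0 + c1) == 1   # 0 XOR 1 = 1
assert decrypt(p, c1 + c1) == 0   # 1 XOR 1 = 0
assert decrypt(p, c0 * c1) == 0   # 0 AND 1 = 0
```

Each homomorphic operation grows the "2r" noise term (multiplication especially fast); once it exceeds p/2, decryption fails. That's exactly the problem that point 4's trick -- having the scheme evaluate its own decryption to "refresh" a noisy ciphertext -- exists to solve.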

I'm only about halfway through the paper, but it's been really enlightening to get the intuition behind why this kind of thing can work.

Monday, April 15, 2013

Our latest work on Bitcoin

Bob Murphy and I have collaborated on an attempt to explain Bitcoin -- and its economic implications -- for the masses.

For the three of you still following this blog, check out the link.

Thursday, November 1, 2012

Disaster Keynesianism -- Say something responsive for once!

Last day in Budapest for now, leaving in a few hours. But it looks like the topic of the day is the economics of Hurricane Sandy, and, as with any discussion of economics during a natural disaster, whether it will be "good for the economy".

Needless to say, this is a discussion that has happened several times already. Still, engagement with the other side's arguments is always good -- as long as you're actually, well, engaging, rather than extending and reinforcing a non-responsive (or no-longer-responsive) point.

Which brings us to pseudo-contrarian Steve Landsburg's latest pseudo-contribution to the matter. He thinks he has an even more devastating critique of the "hurricanes can be good for the economy" claim, posing this:

ask your opponent whether it’s “good for the ants” when you put a stick down their anthill, wiggle it around and destroy their infrastructure. Go ahead and acknowledge that this can sure put a lot of ants to work.

Or, for that matter….

Ask if spilling ink on the living room rug is “good for your household’s economy” because of all the cleanup work you’ll do.

Of course, this doesn't actually address the Keynesian's central point, because their claim is that normally such acts are destructive, but need not be so when there are idle resources (found after a quick search).

To make absolutely sure I'm not misunderstood, please read these caveats if you plan on responding:

- I don't agree with the Keynesian "idle resources" argument, and have said as much before.

- I realize that Keynesians (and their critics) acknowledge that there are always better ways to do economic stimulus than a natural disaster -- just employ those otherwise-would-be-disaster-response-resources to do something that's not completely wasteful.

And yet there's no mention of the relevance of idle resources in Landsburg's post, or from the army of back-slappers and hangers-on who dominate the beginning of the discussion. When the subject finally does come up, it's from critics who offer surprisingly good analogies, like commenter "Brian", who compares a stagnant economy to laziness ("akrasia") in an individual:

Suppose Billy Joe has been in bed for years. He’s overweight and unmotivated. His life appears to continue to spiral out of control as he watches reruns of every horrible show made from the 1970′s on. But when that ink falls on the floor, this finally gave him a reason to get out of bed and clean up the mess, and the mere activity of it kick started him into action of doing thins again, and even being motivated [sic]

And the defenders of the post (I guess *not* surprisingly) miss the point: of course making new windows is better than fixing broken ones, but that's not an option here. Landsburg himself does so in this comment:

... this is ridiculous, on Keynesian grounds or any other. If you believe it’s important to hire idle resources in order to “stimulate the economy”, then you don’t have to wait for a hurricane — you can hire people to build *new* bridges instead of having them rebuild old ones. The hurricane does not in any way expand your set of policy options; it only destroys stuff.

Except, of course, that it does expand your options, since by supposition, policy makers won't allocate funds for public works projects that build new windows, but will gladly fund projects to restore the windows that were broken in the disaster. (To reiterate: I disagree that such public works funding -- whether for building or fixing -- is a good idea for "helping the economy"; this is simply about appreciating one's opponent's arguments and being responsive thereto.)

***
My point here is that if you want a really hot one-line zinger for why the "hurricanes good for economy" meme (in its most intelligent form) is wrong, you're going to have to do a lot more than just say that destruction is bad. No -- you're going to have to show why destruction is not "better than nothing" if its effect is to put (only) idle resources to use, thus giving people the dignity of a job and practice of their skills, when you don't have the option (for e.g. political reasons) of simply employing those idle resources to build on top of existing wealth.

What's that, you say? It's hard to give a concise, fun explanation of why that thinking is wrong? Well, it should be. Two-sided political debates tend to be like that. My shortest debunking is at least this long.

Can you do better? Perhaps. But it won't be by invoking the ten millionth permutation of "destruction is bad, m'k?".

Wednesday, September 19, 2012

What's up with further vs farther?

Yeah, I know it's been a while since I posted, and I've kinda let this blog die off, but I figure, better late than never, right?

So, to start with just a random thought: why is it that "farther" seems to be the only word we can't use figuratively? The standard rule is that "farther" is only used for literal distance, while "further" can be used in a figurative sense.

What's up with that? What other word do we have this rule for? As I understand it, you get to use any word you want in a figurative or metaphorical sense. Why not "farther" as well? (I guess the one other example would be "literal", which I oppose the figurative use of, since, ya know, it's the one word that's supposed to actually distinguish the two cases, and without which we can't even speak of the difference.)

It just doesn't make a whole lot of sense.

[Insert usual remark about how I composed this entire post, including navigating to and copying the link, without the mouse or trackpad.]

Saturday, May 12, 2012

Setting naming conventions for international audiences straight.

So the place in Budapest where I'm staying is called the K9 Residence.  What I need to say next depends on your native language.  Please skip to the subheading that best describes you.

Native English Speakers


No, the place doesn't have anything to do with dogs, nor can one jokingly say that they "treat you like one".  The name comes from how it's number 9 on the street Karoly Korut, and no one ever alerted them, apparently, that K9 is a common shorthand for "dog" in English.  (Or perhaps they did learn that much, but deemed it too late to change.)

Non-Native English Speakers


Hey, did you know that in English, K9 is a common symbol or abbreviation for "dog"?  Yeah, it comes from how it's pronounced like "canine", the adjective for dog based on its Latin root canis.  Remember that movie K-9?  Yeah, kinda like that.

****
Anyway, I don't expect all of this internationalization to go perfectly for everyone, but, well, y'all could have saved me from having to explain stuff to a lot of people of different native tongues...

Saturday, May 5, 2012

I'm going to Hungary tomorrow.

That is all.

Monday, April 23, 2012

Taking the anti-mouse campaign ... to the next level

Wow, did you guys know that you can buy domain names now?

And that no one had yet taken "tyrannyofthemouse.com"?

Well, I snapped it up, and it will include a showcase of my programming projects soon.

I also managed to move this blog to the subdomain blog.tyrannyofthemouse.com (as you might have already noticed).  Don't worry, all the old links will still work!