Saturday, December 31, 2011

Broken Windows, Part I: The Pain of Hard Choices

This will be the first in a series where I spell out an underappreciated concept in economics and how it leads many economists astray in proposing solutions to economic problems. I figured I better get a start on it before the New Year.

Recently, I've gained some insight into the economic debates between the various camps that claim to have a solution to our current problems. In addition to tying up some loose ends regarding a century-old debate, this insight gave me a good explanation of why standard dismissals of the so-called recalculation story (in explaining recessions like the current one) are making a subtle error.

First, a high-speed recap: Way back in the 1800s, Bastiat described what is known as the "Broken Window Fallacy" to refute the prevailing economic wisdom of the age. Many believed that a vandal who broke a window could be doing the economy a favor, reasoning that the owner would have to hire a glazier to fix the window, who would have new money he could use to buy new shoes, which would give the shoemaker the chance to buy something he wanted, and so on. (Note the early shades of the "multiplier effect" argument.)

Bastiat replied, basically, that no, this doesn't quite work, because you have to account for the "unseen" loss to the window owner, who would have engaged in the exact same economic stimulation as the glazier, had the window not broken, because he would have been able to buy something he wanted -- and we'd get to keep the window, to boot!

The Broken Window Fallacy is often brought up in response to proposed Keynesian solutions (involving government stimulus spending): opponents say such proposals make the same error, neglecting the unseen economic activity that would go on in the absence of the government's spending.

Keynesians, in turn, reply that the Broken Window Fallacy only applies at "full employment", where there is no "crowding out" (i.e. forgone projects due to the government's use of resources for different ones). In a depressed economy, they argue, the alternative to a metaphorical broken window (along with its fixing) is not "the window owner buys something else", but rather, "the window owner hoards that money", providing no economic benefit. Therefore, breaking a window in such a case would not have an economic opportunity cost, and so could indeed be good for the economy -- though Keynesians of course admit there are much better ways to increase employment than breaking a window.

The back-and-forth goes on, of course, with each side claiming that the other's position implies or relies on an absurdity. Keynesians accuse the free-market/"Austrian" types of thinking the economy is always optimally using resources, while Austrians accuse the Keynesians of calling a hurricane "God's gift to depressions".

But here, I think, I've noticed something that tremendously clarifies the debate, and gives us insight into why economic activity does or doesn't happen, and why certain events are or aren't good. So, here goes.


Let's go back to the original Bastiat thought experiment about the broken window. Ask yourself this: Why are we assuming the window will be fixed at all?

Don't misunderstand me: it's a reasonable assumption. But we have to be careful that this assumption isn't fundamentally ignoring relevant economic factors, thereby baking in a desired conclusion from the very beginning. And here, I think we have good reason to believe that's exactly what's going on.

So let's start simple: under what circumstances would it not be reasonable to assume that the window will be fixed (i.e. that the owner will choose to pay someone to fix it), even during a depression? That's easy: if the neighborhood (along with that building) is run-down to begin with, already littered with broken windows. A lone broken window merits a quick repair, but if it's yet-another-broken-window, why bother? (Note here the substantive similarity to the homonymous "broken windows" effect!)

So here we see the crucial, unappreciated factor: the obviousness of certain production decisions. What these thought experiments -- carefully constructed to make a different point -- actually prove is the importance of being able to confidently decide what is the best use of resources. And we can step back and see the same dynamic in very different contexts.

For example, say an unemployed guy, Joe, is trying all different kinds of things to find a job, and nothing is working. Then, while driving one day, he makes a wrong turn and steers his car off a bridge into the river below. Not good. But there is one teensy-weensy good part: it's a lot easier to prioritize! Previously, Joe didn't know what he should do to make optimal use of his time. Now, he knows exactly what he needs to work on: avoiding death from falling into a river!

And we can step back even more and generalize further: what we are seeing is but a special case of the law of diminishing returns. Abstractly, each additional unit of satisfaction requires a greater input of factors: land, labor, capital ... and thought (sometimes called "entrepreneurial ability"). Generally, the higher the fruit you've already picked, the harder it is to reach the next branch up, in terms of any factor of production, including and especially thought. Conversely, if you suddenly face a sharp drop in satisfaction by being deprived of more fundamental necessities, it becomes easier to decide what to do: replace those necessities!


That should give you a taste of what I think is missing from discussions of the economic impact of natural disasters and inability to reach full employment. In the next entry, I'll go further to illustrate how deeply this oversight impacts the ability to perform good economic analysis.

Sunday, December 11, 2011

EXCLUSIVE: Silas's bitcoin mining rig tell-all!

As part of an application I filled out recently (more on that in the future), I explained everything I went through to get my bitcoin mining rig up and running. But why bury that story in a place only one person will read it? Nay, my readers ought to hear about it as well! So, here's the story, with a photo album for each stage at the end.


In February 2011, I learned about Bitcoin and the feasibility of building a "mining rig" (machine that quickly computes lots of SHA256 hashes) to make money by generating bitcoins, which trade against dollars at a varying rate. Though I hadn't built a custom box before, the idea of setting up a mining rig excited me.
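For the curious, here's a minimal Python sketch of the kind of search a mining rig performs: hash a candidate over and over, varying a nonce, until the result falls under a target. The toy header, the nonce encoding, and the very easy difficulty are my own illustrative assumptions; real Bitcoin mining hashes an 80-byte block header against a vastly harder target.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin applies SHA256 twice to block headers."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int = 12) -> int:
    """Try nonces until the double-SHA256 hash falls below the target.
    With 12 difficulty bits, this takes a few thousand tries on average."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = double_sha256(header + nonce.to_bytes(8, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

winning_nonce = mine(b"toy block header")
print(winning_nonce)
```

A GPU earns its keep here because each nonce can be checked independently, so thousands of these hash computations can run in parallel.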

I looked over some rig designs in the wiki and, based on those, designed a custom setup that I figured would achieve the highest rate of return (4 GPUs connected to a motherboard in a large case) and ordered the parts. Some graphics cards (the Radeon 5970) had already been bid up to unprofitability by other rig builders, so I picked a slightly slower one (the Radeon 5870) that was several times cheaper.

Over the course of putting it together I ran into a number of problems, any one of which would have shut down my plans, but kept trying different things until I overcame them.

First, since I hadn't built a computer from (near) scratch before, I had to learn what parts (motherboard, SSD, CPU, RAM, PSU) went where, and how to optimally route the wires. Then, on bootup, I found the BIOS didn't see the hard drive, and traced the problem to a part of the case's internal wiring that wasn't passing the SSD's SATA connection through, so I bypassed it and plugged the drive directly into the motherboard.

Then, after installing Ubuntu, I had to download the exact set of ATI drivers required for the mining code to work. It turned out the latest drivers interfered with the mining code, so I had to get an earlier version that AMD no longer promoted (or even acknowledged the existence of). From the forums I learned that you had to enter the URL manually, since nothing linked to it anymore. That let me mine with the first GPU, exploiting its ability to do hash calculations in parallel.

After configuring the GPU to send its computations to a mining pool (group of miners that combines computations to get a more predictable solution [hash inversion] rate), I opened up the box again to add the second GPU. (I had decided early on to add them one by one to make sure I understood what was going on at each stage.) Getting them both to work together introduced another problem, as they would somehow keep adjusting their hashing rate downward to the level of only one GPU. This required another trip back to the forums to learn new software to install, which still didn't work after numerous configurations, so I wrote down the whole process up to that point and re-installed the operating system. (I ended up doing this several times as a last resort at different stages.)

Once I got all 4 GPUs and a hardware monitor installed, I was able to get excellent hashing performance, but soon noticed that, with four high-power GPUs packed so closely together, they heated up to unacceptably high temperatures, so I took two out. That solved the temperature problem, but I still wanted all four to be able to run, so I looked into better cooling solutions. (For a short while I ran three cards safely by having one side of the case open and pointing a box fan at the cards, though this was obviously very inconvenient and wouldn't permit safe overclocking.)

It turned out that liquid cooling was my only option, which I had also never set up before. Nevertheless, I went forward and found a cooling block model (i.e., something that replaces the OEM GPU heat sink) that would fit my cards, as well as a cooling kit (pump, radiator, reservoir, tubing). I also ordered some parts that would directly connect different blocks together and so minimize the need for tubes.

When I got the cooling blocks, it turned out they didn't fit, because the particular variant of the Radeon card I was using had a non-standard PCB design (which I didn't realize was possible before). So I sent back the cooling blocks, found ones sure to match this specific design, and ordered those. Finally I was able to attach a block to each of the 4 GPUs. I then ran into another problem with the block-to-block connectors, which came with ambiguous directions, had to go in a tight spot, and which I couldn't figure out how to install, so I asked a more home-improvement-savvy friend how they worked.

I eventually got the connectors to install, but ran into another problem: because of space constraints, the tubing would require bends that were too sharp. I figured I needed a 90-degree angle fitting, but I couldn't get one at a local hardware shop because PC cooling parts all use British piping threads, which are incompatible with those carried in American stores. After finding a compatible one online, I realized that each day without the rig running was costing me money, and this was the only part holding it up, so I had it express-shipped to arrive the next day, allowing me to finish setting up the liquid cooling system.

I had to make a few choices then about which way to point the radiator air flow and otherwise optimize cooling capacity. I eventually settled on a design that had the radiator take air from the room and dump the exhaust into the case; I partially mitigated the resulting heat buildup by flipping one of the case fans to bring more external air in rather than taking it out.

At this point, there were fewer setbacks, but I was hesitant about circulating coolant inside the system if I couldn't first ensure there were no leaks. So, before closing the loop and adding the coolant, I set up some leakage tests: I would fill the system with distilled water, leave an open tube at the top, seal the other end, and put plenty of towels around the potential leak points.

With this test configuration, I blew into the tube. I figured that if it could withstand this pressurization without leaking, I could be more confident about actual fluid circulation. Fortunately, none of the tests showed a problem, and I got the "production" liquid cooling system running.

Finally, I was able to have all four GPUs running overclocked, generating bitcoins for me through a mining pool, and staying at temperatures significantly below what I got before. I further optimized performance by using new mining software, experimenting with settings, and then saving a file with the commands to get the rig running optimally.

There were still some other kinks to sort out, like what to do about the immense heat it generated for the rest of my place, and how to monitor the mining pool status, but that about covers everything. Now, for the pictures:

Final result

The various stages (no captions, sorry):
Initial set-up
Putting four graphics cards in
Replacing OEM heat sinks on graphics cards with waterblocks
Installing the liquid cooling system

Saturday, November 12, 2011

Another explanation of hash functions

I think I've found a new way to explain hash functions and convey the intuition behind them.

First, think back to the classic problem from grade school: a farmer raises cows and chickens. His animals have a total of, say, 10 heads and 22 feet. How many cows and chickens does he have?

If you know algebra, you probably laugh at how easy the problem is, though it's still interesting for kids, who generally have fun with it. Now, you can either solve it the kid way, or the algebraic way. The kid way is to "guess and check": that is, guess a number of chickens and of cows, find the corresponding number of heads and feet, and check if it matches that given in the problem. The algebraic way is to let x be the number of chickens, y the number of cows, and write:

x + y = 10
2x + 4y = 22

And then solve for x and y the standard way. (In case it needs to be said, chickens have one head and two feet while cows have one head and four feet.)
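Both approaches are easy to write out in code. Here's a short Python sketch of the "kid way" and the "algebraic way" side by side (the function names are mine):

```python
# The "kid way": guess and check every possible split of the heads.
def guess_and_check(heads, feet):
    for chickens in range(heads + 1):
        cows = heads - chickens
        if 2 * chickens + 4 * cows == feet:
            return chickens, cows

# The "algebraic way": substitute y = heads - x into 2x + 4y = feet
# and solve directly for y.
def algebraic(heads, feet):
    cows = (feet - 2 * heads) // 2
    return heads - cows, cows

print(guess_and_check(10, 22))  # (9, 1): nine chickens, one cow
print(algebraic(10, 22))        # (9, 1), with no searching at all
```

Note the asymmetry: the guess-and-check way does work that grows with the size of the problem, while the algebraic way is a single calculation. That asymmetry is exactly what the next section exploits.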

What does this all have to do with hash functions, though? Glad you asked.

A cryptographic hash function for 10,000 BC

Recall the requirements for a cryptographic hash function: it must be easy to compute the output (digest) for any input (preimage), but hard to compute the input for any output (other than by trying every preimage). In other words, a one-way function (or trapdoor one-way function without the trapdoor).

So, here's a hash function for a mathematically backward world: First, it takes two integers as input. You let the first be the number of chickens, and the second be the number of cows. Then, output the total number of heads and feet (perhaps with a simple separator, like "10:22" for the above example).
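Written out as code (the function name is my own), the whole hash function is just this:

```python
def farmyard_hash(chickens: int, cows: int) -> str:
    """Map a (chickens, cows) pair to a 'heads:feet' digest."""
    heads = chickens + cows
    feet = 2 * chickens + 4 * cows
    return f"{heads}:{feet}"

print(farmyard_hash(9, 1))  # "10:22", as in the example above
```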

It works as a hash function for the people of 10,000 BC precisely because of their (relatively) poor mathematical understanding. Since they don't know to express it as a system of linear equations, or otherwise derive a simple general solution, someone trying to "crack" a given hash digest (output) would have no choice but to guess and check a bunch of chicken/cow possibilities. This method is referred to as "brute force" in cryptography, and as long as a hash function hasn't yet been "broken" (by a better understanding of the math theory involved), it's the only option available.
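To make the brute-force attacker's position concrete, here's a sketch of the only attack available to someone who can't do algebra: re-hash every candidate pair until one matches. (The hash function and search limit here are my illustrative choices.)

```python
def farmyard_hash(chickens, cows):
    return f"{chickens + cows}:{2 * chickens + 4 * cows}"

def brute_force(digest, limit=1000):
    """Invert a digest the hard way: try every (chickens, cows)
    pair up to some limit and check each one."""
    for chickens in range(limit):
        for cows in range(limit):
            if farmyard_hash(chickens, cows) == digest:
                return chickens, cows
    return None

print(brute_force("450:1600"))  # (100, 350), after many thousands of tries
```

Computing the digest takes one pass through the function; inverting it this way takes up to a million. That gap between "easy forward" and "hard backward" is the whole game.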

Though this situation may seem contrived to us now, the same dynamic is at play in modern, military-grade cryptographic hash functions: so long as people lack sufficient mathematical understanding, it is impossible to invert a hash digest except by brute force. The only difference between now and then is that the math and computations are harder.

In the previous post, I mentioned how hash functions can be used to protect stored passwords while still allowing password-based authentication. Let's go over how this would work.

An encrypted password system for 10,000 BC

Let's say Mike the Merchant wants to set up a service of warehousing people's valuables. He'll deal with many customers and doesn't want to count on remembering their faces when they come to reclaim their stuff. So, he'll give each customer a unique password they must use to get in. This will be a stronger authentication system than just trying to remember every customer.

Like modern website owners, Mike also wants to keep his password records from being stolen or misused (say, to steal his customers' stuff). If he simply kept the passwords in a book, someone who gained access to it could copy them and then come one by one to illicitly claim the goods in the warehouse. (Though of course, Mike can always use common sense security measures, like noticing something is fishy when the same customer one day claims to know all the passwords!)

So, rather than store the passwords, he converts them into two integers (if the password isn't already in that form) and stores the digest from putting them through the hash function described above. So, if Harry the Hoarder's password were 100:350, Mike would store Harry's password as 450:1600. (That is, interpret the number before the colon in the password as chickens and the latter as cows, and then store the number of heads [100 + 350 = 450] and number of feet [2*100 + 4*350 = 1600], separated by a colon.)

Then, when someone comes in claiming to be Harry, Mike asks for the password. Next, instead of comparing Harry's password (100:350) to an entry in his book, he first computes the hash digest (450:1600), and compares that number with his entry for Harry. So, he still has a working system to authenticate people by password.
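Mike's bookkeeping can be sketched in a few lines of Python. The names and the colon-separated password format are my own illustrative choices; the point is just that the book stores digests, and authentication re-hashes whatever the claimant offers:

```python
def farmyard_hash(password: str) -> str:
    """Digest a 'chickens:cows' password as 'heads:feet'."""
    chickens, cows = map(int, password.split(":"))
    return f"{chickens + cows}:{2 * chickens + 4 * cows}"

book = {}  # Mike's book holds only digests, never passwords.

def register(name, password):
    book[name] = farmyard_hash(password)

def authenticate(name, claimed_password):
    # Hash the claimed password and compare against the stored digest.
    return book.get(name) == farmyard_hash(claimed_password)

register("Harry", "100:350")
print(book["Harry"])                      # "450:1600" -- the stored digest
print(authenticate("Harry", "100:350"))   # True
print(authenticate("Harry", "450:1600"))  # False
```

Note the last line: presenting the stolen digest itself fails, because Mike hashes whatever you hand him before comparing, and "450:1600" doesn't hash to "450:1600".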

What protection does this offer Mike if someone gains access to his password book and copies the entries? Well, remember, like with modern systems, all they get are the digests that result from putting the passwords through the hash function, not the passwords themselves. And knowing a digest won't convince Mike that you're the account holder: remember, he's checking for a pair of numbers that hash to your entry, not the entry itself.

Could the attacker infer the passwords from the digests listed in Mike's book? Yes, but it would take infeasibly long to do so -- that's the role of the hash function. Going from cows/chickens to heads/feet is easy, but going the other way is hard (for a mathematically backward society, at least). To make any use out of Mike's hashed-password book, an attacker would have to guess a huge number of passwords and see if their digests match any in the book. As long as the password space is big enough, and the society remains mathematically backward enough, it's just not feasible for an attacker to guess and check enough passwords to find a match in the book.

And like in the previous section, this is the same position we are in with respect to hashed password storage today: with a good enough mathematical breakthrough, it might become feasible to quickly invert a hashed password, but as it stands now, the hash functions used are enough to render such databases useless to attackers (though obviously some attacks still get through, such as when a website continues to use a long-broken hash function). The only difference is the complexity of the math.

Of course, Mike still has to make sure his customers don't give out their passwords or store them insecurely ... a problem we still grapple with, twelve thousand years later.

Sunday, November 6, 2011

Setting professor fashion straight

How good are you at distinguishing professors from the homeless? Test your prof/hobo classifier here. I only did slightly better than chance -- 6/10. (Though obviously it doesn't sample uniformly over all professors.)

HT: Jerry Coyne

Saturday, November 5, 2011

Setting unilateral disarmament (obligations) straight

Apparently a Steve Landsburg post from a few months ago has been rediscovered and sparked a new blogosphere debate. The question: "If you favor higher taxes on a class of people that includes yourself, are you obligated, in the absence of the higher taxes, to make voluntary contributions to the government so as to push the world closer to your preferred one?"

The debate also considers (in greater depth) a weaker claim: "If you favor higher taxes on a class of people that includes yourself, you have a greater moral obligation to voluntarily pay (part of) such taxes regardless of whether they are enacted."

Bryan Caplan, Tyler Cowen, and Bob Murphy, all libertarians, weigh in and agree with that weaker claim. (They are listed in approximate decreasing order of confidence in the claim.) Karl Smith, one of those who does want higher taxes, disagrees.

My take is that, despite superficial dissimilarities, the question reduces to that of unilateral disarmament (UD). That is, if everyone (else) would be better off for each person who (metaphorically) disarms, but you would be much worse off if only you disarmed, should you disarm? I say you do not have such an obligation, either morally or for logical consistency, though it would certainly be a noble act. So, I think Karl Smith is basically right (about the implied obligation -- obviously, not about taxes!).

Just as in UD/public goods/free rider cases, the decision to UD will, for lack of a better term, "weed out the meme pool" of people like you, effectively rewarding those who favor opposite policies (which you, by stipulation, regard as pernicious).

It is for the same reason that you should not pay Coasean extortioners: though ostensibly, it works toward your goals, it undermines them by rewarding the wrong people.

I think Douglas Hofstadter made the point very well in his Tale of Happiton, which discusses this dynamic, but in the (less relevant, IMHO) context of nuclear disarmament. He's set up a public goods type situation in which "writing postcards" (i.e. to advocate nuclear disarmament) benefits everyone, but has its costs paid purely by whoever writes them. Watch how he subtly describes the dynamics of what happens when one person takes it upon himself or herself to do the postcard writing. (Here, it's a girl named Andrea.)

Andrea’s older sister’s boyfriend, Wayne, was a star halfback at Happiton High. One evening he was over and teased Andrea about her postcards. She asked him, “Why don’t you write any, Wayne?”.

“I’m out lifeguardin’ every day, and the rest of the time I got scrimmages – for the fall season.”

“But you could take some time out - just 15 minutes a day - and write a few postcards!” she argued. He just laughed and looked a little fidgety. “I don’t know, Andrea”, he said. “Anyway, me ‘n Ellen have got better things to do-huh, Ellen?” Ellen giggled and blushed a little. Then they ran out of the house and jumped into Wayne’s sports car to go bowling at the Happi-Bowl.

Naturally, Hofstadter wrote this piece to encourage people to UD ("write postcards") in such a situation. But I think he's just as well shown that, in the absence of a collective agreement, your decision to unilaterally disarm is, well, spitting in the wind.

(A version of this post was made as a comment on Bob Murphy's blog.)

Thursday, November 3, 2011

Setting universal debt paydown straight

Not that this will get the article any more hits, but I strongly recommend Bob Murphy's takedown of the all-too-common belief that it's somehow impossible or damaging for everyone to reduce or eliminate their debts. It was in response to the latest articulation of the idea by Paul Krugman.

Really, folks, any economy that relies on a certain level of indebtedness is not an economy I care to defend, as it rests on a poor foundation. The purpose of an economy is to provide people with the best consumption/leisure/labor bundle possible, not to goose the hippest new econometric.

Wednesday, November 2, 2011

Virtualization is a riot!

Isn't it neat how computers can completely simulate other computers purely via software? (Background). Well, right now I'm reading a book that comes with a Linux Live CD containing relevant source code -- basically, a CD that you can boot from to see what it's like to use the operating system without interfering with the one you currently have. (Not to mention the benefit of having the exact same environment as the author so you can make sure It Works.) And that's how Live CDs are typically run: from bootup.

But with emulation, you don't need to reboot your whole computer just to get that Linux (or whatever OS) experience! You can set up a "sandbox" environment, or virtual machine that "pretends to be a computer". The software allocates a portion of your computing resources that you specify (disk space, RAM, etc.), and you just run the Live CD through that "pretend computer", freely switching from the window containing that virtual machine, to your web browser and whatnot. Again, it's without the hassle of rebooting every time you want to switch between that and the cute cat video you were watching.

So there I am -- I've got the virtual machine running a "sample" of an operating system off a Live CD, no need to reboot my "real" machine. But it gets better! I can go one step further and tell my pretend computer, "You know what? Let's go all the way. I want the full operating system -- not just the "sample" -- installed on your pretend hardware!" And then it dutifully runs through all the screens you would normally see when installing a Linux distro as your operating system, seizing control of the sandboxed software-that-thinks-it's-hardware, for a full wipe of the, um, non-OS you had there before.

Sorry, I don't know why ... I just find this all so hilarious ... virtualizing the use of a "sample" operating system before I install it on its virtual hardware.

(For those who are curious, the software I'm using is Oracle VM VirtualBox, available free for (IIUC) personal non-commercial use. I learned about it from one site's failed attempts to teach others how to play with its code.)

Sunday, October 30, 2011

Happy birthday to me...

Today is my big 30th birthday. Check out the cake my friends made for me! (Click to enlarge.)

Friday, October 21, 2011

You know you're an economist when ...

... you find yourself needing to cite “Buchanan 1973″ when claiming that gangs want to do “too many” drive-by shootings.

In addition, the Mexican Mafia regulates drive-by shootings…because any particular street gang only suffers a portion of the increased attention of law enforcement from drive-by shootings, each street gang has an incentive to do too-many (Buchanan 1973).

Monday, October 17, 2011

Now this is why we have "Do Not Kill" Lists...

Though it's not exactly the same thing as the list that I'm making up, it turns out that some people actually don't want to opt out of being killed by the state.

You see? When we have it all formalized with a list, governments can know who doesn't and who does want to be killed!

Thursday, October 13, 2011

"Do Not Kill List" FAQ

To respond to some of the burning questions you all have about my new list, here are some answers:

What does being on the Do Not Kill List (DNKL) mean?

It means you have expressed your desire not to be subject to targeted killings by the US government or any of its contractors (i.e. that you've "opted out" of such programs, like the CIA one detailed by Reuters that is not subject to judicial review), preferring instead that, if the government believes you have committed a capital crime, it should apprehend you and pursue a criminal case in full accordance with your Constitutional rights, such as "habeas corpus", "due process", "right to counsel", "right against self-incrimination" and all other kinds of obscure stuff you might not have heard of.

Wait, is this official? Is the government actually going to abide by this?

Probably not. The idea is just to make sure some list of this type exists so that the government won't be able to pull a trick like, "oh, well, see, Mr. Doe didn't actually invoke his right against secret assassination, so we can assume he waived it".

Isn't this dangerous? Can't someone, like, break out the Uzi and start mowing down people, and then be like, "oh, oh, oh, look, my name's on the DNKL, you can't touch me!"?

No. This "opt-out" doesn't prevent the government from arresting you and prosecuting you to the full extent of the law, even if it were to abide by the list.

I was convicted of a grisly triple murder and sentenced to death. I was accorded full due process and have exhausted all appeals. I'm now set to be executed soon. Optimistically, will being on this list protect me from being executed?

No. The DNKL administration takes no position on current laws or punishments in the US, only on people's right to "opt-out" of government-sanctioned extrajudicial killings.

Setting the signaling model of education straight(er?)

Note: free business suggestion below.

You might have heard about the so-called "signaling model of education", promoted by Bryan Caplan at GMU (among others!), and it's something I find plausible.

First, some background: The problem is to explain why people who get a college education are more able to get jobs, and better-paying ones. The traditional explanation is that colleges provide you with knowledge and skills that allow you to be more productive. (This has always seemed suspicious to those of us who have remarked, throughout our education, that "I'm never gonna use this stuff" ... and been mostly right.)

The signaling model, in contrast, says that completion of college simply reveals your possession of good traits for hiring that you already had before, but could not convincingly claim to have until you completed college, since a college degree indicates some combination of intelligence, willingness to do boring stuff that doesn't make sense, and capacity to be indoctrinated into and conform with a group (I'm simplifying a bit). These things are hard to test in a job interview, or, in the case of intelligence, usually illegal to test for.

A few years ago, I pointed out (HT: Bob Murphy [1]) that one usefully testable implication of the signaling model is that you should be able to earn big profits by running a business that provides high school graduates with the same "signals of good qualities" that a college provides, but at significantly lower (monetary) cost to them, simply by "cutting out the fat" -- all the stuff that doesn't help to signal the student's ability. You would just set up some school that filters students by IQ, and then puts them through hell, gives them difficult assignments, poor living conditions, etc. No way an unemployable person could survive through that kind of regimen, right?

So there's your idea: you make students just as employable, but they don't have to take on nearly as much debt.

Interesting caveat: in one discussion of my idea, someone mentioned that this business model is already in widespread use: specifically, the military! Let's go through the checklist:

- Cheaper than college? Check. (Heck, in terms of money, they pay you!)
- Enforces indoctrination and unquestioning following of direction? Check! [2]
- Selects for people who are willing to give a lot to a big organization? Check.
- Employers regard service therein as equivalent to college experience? Check (usually).
- Gives experience doing boring tasks because you were told to? Check.
- Generally puts you through hell? Check.

Wait, this can't be right, can it? This comparison fails in that the military doesn't filter people based on an IQ test! Hah!

Not so fast -- they've got that one covered: in the US, it's called the ASVAB, which determines whether you can get in, and then which branch, role, or officer status you're eligible for. (My mom used to pass on her dad's remark that "the army'll take anyone who can crawl there, but not the Coast Guard!" An exaggeration, of course, though the branches do have different score cutoffs.) The ASVAB is, in content, an IQ test.

Now, if you can provide a better value than the military (say, to people who don't want to possibly be put in harm's way), here's your business idea!

[1] Yes, a hat tip for pointing me to my own post.
[2] Note: this isn't always a bad thing. As Eliezer Yudkowsky put it in that article:

Let's say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Tuesday, October 11, 2011

Time for a "Do Not Kill" list?

In the wake of US citizen and al-Qaeda booster al-Awlaki's killing by drone, some have raised alarm about further targeted killings of US citizens (including Glenn Greenwald and my close, personal friend, Bob Murphy). And indeed, Reuters is now reporting that there are secret CIA kill lists of US citizens not subject to judicial review.

At this point, many people are wondering if they can have themselves removed from such a list (like al-Awlaki's father unsuccessfully tried to do). Well, a thought occurred to me: we have the Do Not Call list, right? Why not a "Do Not Kill" list, so that you can let your government know you want to opt out of its targeted killing program, and would prefer that, if suspected of a capital crime, you be given a public trial instead, and then only killed if found guilty or something.

Since no one else seems to have started this yet, I figured I'd give it a go. Personally, I want to opt out of being killed (I'm a big fan of trials), so I'll be first on the list. While I'm not sure about the legal niceties, I don't want my family killed either (though it depends on the day in the case of my brother ...), so I'm going to add them to the list too. Here's the Do Not Kill List as it stands now:

Do Not Kill List (opt-out for CIA targeted killing program)

1. Silas Barta
2. Silas Barta's family
3. John Salvatier (aka jsalvati), added 10/11/11
4. Robert P. Murphy (yes, this RPM), added 10/11/11 (woo-hoo! up to six people on the list!)
5. Aurini's family, added 10/12/11
6. commenter "Rob", added 10/16/11
7. Jayson Virissimo, added 10/16/11
8. Matthew Graves
9. Joseph Fetz
10. Doctor Squirrel
11. Carlos M. Rivera, added 11/8/11

Doesn't take a lot of memory to store as of this moment. I'm thinking of expanding it in the future so that it has more specific identification of persons -- say, pairing a given person with their Social Security number and/or email so that you won't have some goofy "Sarah Connor/Terminator 1" moment where you kill the wrong person because they have the same name. (Don't worry, the entries will be hashed to prevent abuse.)
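
For the curious, "hashed" here just means the standard trick of storing a digest instead of the raw identifiers. A minimal sketch (the name and email below are made up, and a real version would want salting and more careful identity handling):

```python
import hashlib

def hash_entry(name, email):
    # Store only a digest of the identifying info, not the info itself.
    data = (name + ":" + email).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

# Hypothetical entry -- not anyone on the actual list:
digest = hash_entry("Jane Doe", "jane@example.com")
print(digest)  # 64 hex characters; checking someone against the list just
               # means re-hashing their info and comparing digests
```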

Please contact me, either by email or in the comments, if you want to be on the list.

UPDATE: An FAQ on this is now up.

Saturday, July 16, 2011

Bitcoin overview: proofs and common knowledge

In previous posts, I gave an explanation of the cryptographic building blocks of Bitcoin. Now I'll give a more "big picture" overview of how the overall system works. As before, I expect this to be easier to follow than the explanations I had to read to get to my current level of understanding.

Let's start from the general problems that a decentralized, anonymous (or pseudonymous) currency system has to solve. The most fundamental is achieving "common knowledge" of currency ownership. Everyone has to know not only who the valid owner of any currency unit is (so as to prevent double-spends); they must also know that everyone else knows the same answer, and that everyone knows that they know (and so on). (This level of knowledge is what the literature calls common knowledge -- though take the definition I just gave, not the conventional one.)

In other words, it's not enough that I know the current ownership status of any coin; I must count on others agreeing with me and knowing I agree with them. If you could accomplish this, you could get everyone to use and depend on the same record, thereby resolving disagreements about who currently owns what -- without trusting any one person. It is this problem that required the "key" innovation behind Bitcoin, as it had historically needed a trusted authority to solve.

So what is this key innovation for solving that problem? The first insight is that it's possible to prove how many computing cycles were spent working on something. With a system that implements such a "proof protocol", you can have a transaction record that provably has a certain number of past computing cycles spent on it. Then, you just need most of the users of the system to agree that they'll "go along with" whatever transaction record has the most computing cycles spent on it. At that point, you know what the "real" global ledger is -- and you can trust that everyone else is using it too! (And they can trust that you're using it, etc.)

And there you have it: proof of ownership, without a central authority.

With that problem and solution in mind, a lot of the complexity of Bitcoin starts to make sense.

Remember how I had previously mentioned that bitcoins are initially doled out based on who can solve a complex mathematical problem? Well, that math problem doesn't just exist to get initial bitcoins widely distributed -- that's not even its most important function. The main purpose, rather, is to prove that the largest number of computing cycles was spent on a given transaction record. You see, if you start from the last known solution (which itself contains the transaction record up to a point in time), you are starting from the record with, so far, the largest number of cycles spent on it. (The Bitcoin protocol specifies that you should start from the biggest one, though it's in your own interest anyway, as you will see.)

If you publish an "update" -- the previous ledger plus more recent transactions -- with the next solution, then the other users know that your purported ledger has all the cycles you spent on it plus all the cumulative cycles spent up to the last solution. Therefore, if you want to claim credit for the latest solution (entitling you to the 50 BTC bounty), you should start from the ledger in the latest solution.
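
The "go along with the most-worked-on record" rule is simple enough to sketch in a few lines (the ledger structures and work counts here are invented for illustration; real nodes verify the work rather than trusting a claimed number):

```python
# Toy illustration: each candidate ledger carries the cumulative computing
# cycles provably spent on it; every user adopts the one with the most.
candidate_ledgers = [
    {"transactions": ["A->B: 5 BTC"], "cumulative_work": 1000},
    {"transactions": ["A->B: 5 BTC", "B->C: 2 BTC"], "cumulative_work": 1500},
    {"transactions": ["A->C: 5 BTC"], "cumulative_work": 900},  # attacker's fork, behind on work
]

def pick_definitive(ledgers):
    # Everyone agrees to treat the most-worked-on ledger as "the" record.
    return max(ledgers, key=lambda ledger: ledger["cumulative_work"])

best = pick_definitive(candidate_ledgers)
print(best["transactions"])  # ['A->B: 5 BTC', 'B->C: 2 BTC']
```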

So, let's step back and summarize. Here is a simplified version of what goes on in the Bitcoin network:

1) Whenever users want to transfer their bitcoins over to someone else, they broadcast a message describing the transfer and sign it with their private key.

2) Whenever a user receives a message indicating a transfer, they first check that the signature is valid (see previous post on digital signatures), and that the address doesn't spend more than the latest "confirmed" ledger shows it as having. If it checks out, they keep the message and propagate it to others.

3) All users wishing to claim the reward for a solution (aka "miners") bundle up all transactions they know of (i.e., new ones plus those in the latest confirmed ledger), and convert that bundle into a math problem unique to that transaction set. They then work on solving that problem.

4) When someone finds a solution, they broadcast it, with their bundle of known transactions (new latest ledger), to all other users. Like with individual transactions, anyone who receives one of these checks it, and if valid, broadcasts it to others.

5) Miners who receive a new valid solution quit their current search for a solution, then take the latest ledger as definitive. Again, as in 3), they bundle up new transactions they hear of, add them to this new ledger, and try to solve a new math problem unique to the new transaction set, and the process begins anew.

In practice, sometimes different users will simultaneously find a solution, or solutions will propagate through different parts of the network at different speeds. So miners will typically hold on to the last 4-5 candidate ledgers, in case one of them is extended and becomes definitive. Users, for their part, will wait for several new ledger solutions before accepting that their transaction is firmly in the network.

Oh, and as for the relevant jargon? A new solution, with its bundle of old and new transactions, is called a block. The complete transaction record, with each solution along the way, showing how they build off of each other, is called the block chain -- because each block "chains" off a previous ledger.
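
The chaining itself can be sketched in a few lines: each block commits to its predecessor by including the predecessor's hash. (This toy structure is mine, not the actual Bitcoin block format, and it omits signatures, difficulty, and rewards.)

```python
import hashlib
import json

def make_block(prev_hash, transactions):
    # A toy block: it commits to its predecessor by embedding prev_hash,
    # so its own hash depends on the entire history before it.
    block = {"prev_hash": prev_hash, "transactions": transactions}
    serialized = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block("0" * 64, ["coinbase -> Alice: 50 BTC"])
block1 = make_block(genesis["hash"], ["Alice -> Bob: 10 BTC"])
block2 = make_block(block1["hash"], ["Bob -> Carol: 3 BTC"])

# Rewriting an old block would change its hash and break every later link,
# which is why tampering means redoing all the work that came after it.
assert block2["prev_hash"] == block1["hash"]
```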

Now, I'm leaving out a lot of details, but I hope that explains the overall system and the different roles played. In the future, I'll go into more detail on:

- How you prove you spent X computing cycles on something.
- How you prevent situations where miners constantly find solutions at the same time.
- How you minimize storage requirements for the transaction record.
- How overlapping solutions get resolved.
- And much more.

Monday, June 27, 2011

Setting professional Bitcoin traders straight

It's bothered me how a lot of the people posting criticisms of Bitcoin manage to get their facts wrong. But apparently, even people with a giant financial incentive to get them right ... still get them wrong.

At this point, I think it's only fair to post disclosures: I hold a portfolio that is long Bitcoin.

Anyway, I saw an (unintentionally) funny post on the blog at the Financial Times's Alphaville.

According to the post, a trader found out about Bitcoin and, based on technical analysis (chart-reading), he judged that Bitcoin was in a bubble and wanted to short. Okay, fair enough, we have someone entering the marketplace and tendering his judgment through the price system. So, you would think he would do his diligence and have some clue about what he was trading before trying to make a big bet on it, right?

Well ... I'll just quote him:

I've done some research, read through the concept [of Bitcoin] and quickly got to the point where I felt that the only reasonable position would be to short such a bull market. [...]

... so I tried to contact Adam at to ask if they intended to implement a possibility to short the BTC. Due to the overload in mails they must have had, I never got an answer on my inquiry.

See the rookie mistake there? (If you don't, that's okay. After all, you weren't about to bet $50,000 on your incomplete understanding.) Bitcoin is an open source project that implements a currency via a protocol. That's all it does: make sure that the ability to use bitcoins, per its own published protocols, works. The people at -- the development team and volunteers updating the wiki -- don't run exchanges (like Mt. Gox) where you can convert bitcoins into dollars. Those are independently run by people who use Bitcoin.


So, this trader just did the equivalent of "trying to contact" the U.S. Mint to "ask if they intended to implement a possibility to short the US dollar", and then speculating that they must have been unable to answer his inquiry "due to the overload in mails they must have had" in this oh-so-heated market.

No, bright guy, they probably just didn't have time to talk to someone who didn't even understand the difference between a Bitcoin exchange (like Mt. Gox) and the Bitcoin project. Just like, I suppose, the U.S. Mint doesn't respond to inquiries from misdirected people who ask when they can short the dollar. (Note: it's not shorting the US dollar that's necessarily misdirected, but asking the U.S. Mint about it.)

The blogger, Tracy Alloway, didn't seem to do any better. She added:

We like the currency trader’s rather more nuanced take ...

Nuanced? Yikes. I just hope traders -- and financial journalists -- have a better understanding of their normal playground than they do about Bitcoin.

Thursday, June 16, 2011

Explaining Bitcoin and Cryptography, Part 2

UPDATE: This was actually posted ~8:15 am CST, 6/25/11. For some reason, the date shown is that of an earlier draft. Blame blogger/blogspot.

Now that you've gotten your feet wet with my masterful explanations of some of the cryptographic pre-requisites of Bitcoin, you're ready for a more detailed explanation that removes some of the simplifications I used last time. But I will focus more on the cryptography here, telling it as I wish someone had told me when I was learning. So without further ado...

"Bitcoin really uses no encryption at all?"

The protocol itself does not involve encrypted messages, as many news outlets mistakenly report. Rather, the protocol is based on everyone seeing every message, unencrypted. However, some consider hashing a text to be encrypting it. And the address you use to send and receive is actually a hash of your public key rather than the public key itself (the signature protocol used only requires the verifier to have a hash of the public key). So, in that sense, there is encryption.
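
To make "your address is a hash of your public key" concrete: the real derivation runs the key through SHA-256 and then RIPEMD-160, plus a checksum and Base58 encoding, but the core idea is just hashing. A sketch with plain SHA-256 and a made-up key:

```python
import hashlib

# Stand-in for a real 65-byte uncompressed public key (made up here).
public_key = b"\x04" + b"\x11" * 64

# Real Bitcoin addresses use SHA-256 followed by RIPEMD-160, then a
# checksum and Base58 encoding; this keeps only the "hash the key" idea.
address_ish = hashlib.sha256(public_key).hexdigest()
print(address_ish)  # you publish this digest, not the public key itself
```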

Also, as an optional (but recommended) technique, you can encrypt the "wallet file" that stores your private (and public) keys so that if someone gets control of your computer, they can't use your private keys to sign away your bitcoins.

So be careful: just because a protocol uses "cryptography" ("In cryptography we trust" being an unofficial motto of Bitcoin), doesn't mean it's actually encrypting anything, just that it's using a technique studied in the field of cryptography.

You don't usually sign an entire message in public key signatures.

I simplified: normally you just need to sign a hash of the message. Given the properties of hash functions, this is just as good as signing the message: it doesn't introduce a new weakest link, and signing a hash is computationally easier than signing the full message.

Now, you might argue that, "But there are infinitely many messages (preimages) that hash to the same digest! You said so yourself! How could I not be introducing a weakness by only signing the message digest? That allows someone to claim that I signed every preimage that hashes to that digest! I don't want to take responsibility for signing all those unknown messages!"

Calm down. For one thing, those second pre-images are, by design, very difficult to find, despite there being infinitely many of them (remember first and second pre-image resistance?). Don't let the infinite count deceive you. If the digest is 256 bits long (as with the hash function Bitcoin uses, SHA-256), then only 1 in 2^256 (about 10^77) of all messages will "collide" with yours. That means an attacker has to grind through on the order of 2^256 candidate messages to find one that collides with your specific digest. That's a lot of work! (The "birthday paradox" cuts the search down to the square root of the digest space -- sqrt(2^256) = 2^128, about 3*10^38 -- but only for finding *some* colliding pair of messages, not a collision with the particular message you signed.)
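
If you want to sanity-check those magnitudes, Python's arbitrary-precision integers make it a one-liner each:

```python
# Sizes quoted above, computed exactly and shown in scientific notation.
digest_space = 2 ** 256   # possible 256-bit digests: tries needed to hit one *specific* digest
birthday_work = 2 ** 128  # tries needed to find *some* colliding pair (birthday bound)

print(f"{digest_space:.2e}")   # 1.16e+77
print(f"{birthday_work:.2e}")  # 3.40e+38
```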

And remember, cryptographic hash functions "look random" -- meaning there's no simple relationship between two preimages that collide. So let's say that your message is, "I hereby transfer $10 to Bob", and you sign the SHA-256 digest of that message. And let's even assume that an attacker did a lot of work and found their first collision, entitling them to claim you signed a different message, since it hashes to the same digest. Danger! Well, no, no danger. Because of the pseudo-randomness of hash functions, that "colliding message" won't be something neat and useful for the attacker, like "I hereby transfer $1 million to Bob."

Rather, in all likelihood, their second pre-image (i.e. purported alternate message) will look something like, "n02nS+TH/4dXcuPasQQn4". Doesn't seem to get the attacker very far, does it? All it lets them do is say, "Hey, I have proof that Silas sent the message 'n02nS+TH/4dXcuPasQQn4', and yes, I durn well do have the signature, derived from Silas's public/private keypair, which matches the hash of that message. Checkmate!"

See the problem? "Um, excuse me Mr. Mallory, but what does 'n02nS+TH/4dXcuPasQQn4' actually mean? What is Silas transferring to you with that statement? It just looks like garbled text. I doubt Silas actually signed something like that ... hey, it looks like he *did* sign the hash of this other message, which actually makes sense. You can buzz off now, Mallory."

(Note: this may be a moot point, as I don't know if the Bitcoin protocol requires you to sign a hash or the original message, since the latter is already short.)

"But how do public key signature algorithms actually work?"

Those of you with a scientific or rational mindset will rightly object that I didn't actually tell you how to digitally sign a message. I really just gave you the vocabulary for discussing public key signatures and asked you to take on faith my claim that the relationships hold (i.e. which parts of the protocol are "hard" and which are "easy"). I certainly didn't tell you enough to go out and create your own digital signature scheme (be it weak or strong), and this probably bothered some readers.

Well, I still won't! But I invite you to read about RSA, a commonly-used public key algorithm (with both an encryption and a signature protocol). It's fairly easy to understand, and will shed some light on how such algorithms introduce the critical asymmetries -- such as how the private key can be difficult to infer from the public key, making it hard for anyone but the private key holder to generate a signature.

"And what do trapdoor functions have to do with public key signatures, again?"

When I mentioned the use of trapdoor one-way functions (TOWF) as underlying public key algorithms, I didn't make it clear how you turn a TOWF into a public key signature method. In the comment section of the last post, Boxo spelled out the mapping. I'll phrase it in a slightly different way. Remember that a TOWF is a function meeting the following criteria:

1) Given x, it's easy to compute f(x).

2) Given a value V equal to f(x1), it's hard to infer x1 (or any other x such that f(x) = V).

3) But if you have some "trapdoor knowledge", it's easy to find that x1 given V.

So if you have a TOWF, here's how you can sign a message. First you pick a particular instance, f1(x), of the function class to which your TOWF belongs. The information that identifies f1(x) out of the function class is your public key. The trapdoor information is your private key.

Once you generate a message M, you let that M (or some hash of M) take the role of V in item 2) of the description above. Because you have the "trapdoor knowledge" (item 3), you can easily find x1 such that f1(x1) = M. Then x1 is your signature, and you attach it to the message.

Others can verify your signature by checking that f1(x1) really does equal M (or the hash of M). This is the "mathematical relationship for verifying a signature" that I kept mentioning in the last post. Per item 1, this computation is easy.
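
A worked toy example may help. Textbook RSA is exactly such a trapdoor one-way function: f1(x) = x^e mod n is the easy public direction, and the exponent d (computable only from n's secret factorization) is the trapdoor. The numbers below are the classic tiny textbook values; real keys are thousands of bits, and real schemes add padding and hashing:

```python
# Textbook RSA as a trapdoor one-way function (toy numbers, no padding,
# no security -- purely to show the sign/verify relationship).
p, q = 61, 53        # the secret factorization: the trapdoor knowledge
n = p * q            # 3233; (n, e) below identifies f1(x) = x^e mod n
e = 17               # public exponent -- together with n, the public key
d = 2753             # private exponent, derived from p and q

def sign(m, d, n):
    # With the trapdoor (d), finding x1 such that f1(x1) = m is easy.
    return pow(m, d, n)

def verify(m, sig, e, n):
    # Anyone can check f1(sig) == m using only the public key.
    return pow(sig, e, n) == m

m = 42                               # stand-in for a (hashed) message, reduced mod n
sig = sign(m, d, n)
assert verify(m, sig, e, n)          # the genuine signature checks out
assert not verify(m + 1, sig, e, n)  # it won't transfer to a different message
```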

Hope you found this helpful!

Friday, June 10, 2011

Explaining – not setting – Bitcoin straight

Okay, I had some spare time last night, so I figured I’d sit down and write up an explanation of some of Bitcoin’s workings. The chief problem in explaining this to the layman is that, as a prerequisite, you need to understand the basics of public key cryptography (aka asymmetric cryptography), which, for the average person, is quite a tall order in itself. But since I’m the master at this kind of thing, here’s how I would put it:

First, to get something out of the way: nothing in Bitcoin is actually encrypted. Rather, it works, and works robustly, without centralization, specifically because all transactions are visible. The privacy comes in how the entities trading the coins are referred to in this transaction database, purely by their Bitcoin address (a string of numbers and letters, like 1mVQtx6rn…), which is like one of those supposed Swiss bank accounts you hear about that are only known by a number. (So yes, if you publicly and believably reveal that, "Hey, I own address 152zpfu5b20gh29...!", then people can see what you do via the address 152zpfu...) So rather than anonymous, Bitcoin is best described as pseudonymous (sue-DONN-i-MUS).

The reason that you need to know the basics of public key cryptography, rather, is that a lot of its "primitives" (building blocks) are used in Bitcoin, and the protocols used are heavily studied by professional cryptographers.

First primitive: public key-based digital signatures

How do you accomplish signatures in a digital world, where anyone can put any data on any storage medium? Like a physical signature, a digital one needs to meet the following characteristics:

A) Proof of identity: only you can produce your signature, so seeing your signature is proof that you endorse what you signed.
B) Non-repudiation: after giving your signature, you can't plausibly deny having signed it.
C) Non-transferability: your signature on Document1 can't be "moved" to a different Document2, implying your endorsement of the latter.

Quite surprisingly, you can accomplish these goals with a kind of signature in the digital world. Here's the trick: you generate a keypair -- a "public key" and a corresponding "private key". You keep the private key secret, and tell everyone in the world your public key. You then use a "public key algorithm" (PKA) that takes as an input:

1) the message, M1, that you want to sign
2) your private key, SK1

and outputs a signature, SIG1. PKAs are designed so that computing this algorithm and generating this signature is quick and easy.

Then, if someone wants to verify that you really did sign message M1, they just verify that a certain mathematical relationship (corresponding to the particular PKA used) holds among your public key (which, remember, they know), your message M1, and your signature SIG1. Again, this process is designed to be quick and easy for the verifier.

So, how does this provide the desired qualities A through C above? A and B are satisfied by the fact that it is extremely difficult and time-consuming to produce SIG1 *unless* you know the private key SK1. (Inferring the private key from the public key is likewise too time-consuming to be finished anytime in the next few centuries.) So, the fact that you were able to (quickly) compute SIG1 is proof that you hold the private key corresponding to the public key, AND that (with a few caveats) you chose to use that key to generate the signature for M1.

This protocol satisfies criterion C (non-transferability) because, as you recall, SIG1 is partly a function of the message itself. This means that your signature will be different for each message you could conceivably want to sign. So someone can't take SIG1 and cite it as proof that you signed a different message M2 -- because the protocol's specified mathematical relationship will *not* hold for {M2, SIG1, your public key} -- it will only hold for {M1, SIG1, your public key}. To "forge" a signature, they would need to produce {M2, SIG2, your public key}. But like I said above, it's way too hard for them to figure out what SIG2 would be unless they know your private key.

I'm deliberately leaving off the specific algorithms used for such systems so that this does not become unbearably long. Suffice it to say, there are algorithms that accomplish this, and they mainly rely on modular arithmetic and prime numbers. I will only add that the class of function needed to produce such a PKA is known as a "trapdoor one-way function". That is, any function f(x) such that:

- Given x, it's easy to compute f(x).
- Given a value V equal to f(x) for some unknown x, it's hard to find an x such that f(x) = V. (i.e., it's hard to invert f)
- But, if you know a specific piece of information particular to f, called the "trapdoor knowledge" (in the exposition above, this is the part played by the private key), it is *easy* to invert f

What role do public key signatures play in Bitcoin? They are used to prove to the network that the owner of address A1 (A1 also functioning as a public key!) really did authorize the transfer of certain coins to the next address. Other nodes in the network, in turn, are able to easily verify that the owner of A1 signed off on the transfer by checking that the mathematical relationship I mentioned above holds among the A1 public key, the message indicating the transfer, and the signature on the transfer. And if this relationship doesn't hold, the other nodes (per the Bitcoin protocol) ignore the purported transfer, acting like it didn't exist, and refuse to tell other nodes about it.

Second primitive: (cryptographically secure) hash functions

A hash function (in cryptography) is a function that takes an input of any length and deterministically computes a fixed-length output from it, such that the relationship between input and output "seems random", and there's no quicker way to compute the output, or to learn *anything* about what it will look like, than to churn through the hash function itself. Let me make that a bit more concrete. For simplicity, call the input to a hash function its "preimage", and the output its "digest" (the output is also referred to as the checksum or the [digital] fingerprint).

An example of a (weak) hash function most people are familiar with is the kids' game where you find out your "Star Wars" name or your "stage name" by doing something like, "Take the first syllable of the street where you grew up, and add on the last syllable of your middle name, plus the first syllable of where you were born." This name is a hash of all that data about yourself.

However, cryptographically-secure hash functions have to meet more stringent requirements. Like I said above, it must be really hard to make inferences about the relationships between classes of inputs and classes of outputs without actually grinding through the function for each input in the class. So, for example, you can't have a hash function where "small changes in the input (preimage) lead to small changes in the output (digest)". Rather, they are designed so that a tiny change in the preimage will *significantly* change the digest. More formally, cryptographically secure hash functions must meet the following characteristics:

- Given a digest, it's hard to find a preimage that hashes to that digest. This is called "[first] preimage resistance". (Note: because preimages can be any length and the hash length is fixed, there are an infinite number of preimages that hash to any given digest.)

- Given a preimage, it's hard to find another preimage that hashes to the same digest. This is called "second preimage resistance".

- It's hard to find *any* two preimages at all (whether or not one is given) that hash to the same digest. Such instances are known as "collisions", and this trait is called, obviously, "collision resistance".

(Exercise for the reader: how does the Star Wars name game described above fail all of these?)
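
You can watch the "tiny change, wildly different digest" property directly with Python's standard library (any two near-identical inputs will do):

```python
import hashlib

# Two messages differing by a single character:
d1 = hashlib.sha256(b"I hereby transfer $10 to Bob").hexdigest()
d2 = hashlib.sha256(b"I hereby transfer $11 to Bob").hexdigest()

print(d1)
print(d2)

# Count agreeing hex positions: for a good hash this hovers around
# 64/16 = 4, i.e. no better than chance.
matches = sum(a == b for a, b in zip(d1, d2))
print(matches, "of 64 hex positions match")
```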

The function of hashes: in everyday data security, they serve to obscure data in a way that limits its malicious uses. For example, websites don't actually store your password (if they know anything about security whatsoever). Rather, they store a *hash* of your password. That way, they can still verify you by password (check: does the hash of the password given match the hash we have on record?), but if someone breaks into their database, all they get are the hashes. Because the hash function has first preimage resistance (see above), the list is much less useful to the attacker, who now has to accomplish the difficult task of finding preimages for the hashes they found.
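
In code, the password scheme just described looks something like this (a real site should also add a per-user salt and a deliberately slow hash such as bcrypt or PBKDF2; plain SHA-256 here just illustrates the verify-by-rehashing idea):

```python
import hashlib

def store_password(password):
    # The site records only the digest, never the password itself.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def check_password(attempt, stored_digest):
    # Verification = re-hash the attempt and compare digests.
    return hashlib.sha256(attempt.encode("utf-8")).hexdigest() == stored_digest

record = store_password("hunter2")            # what ends up in the database
assert check_password("hunter2", record)      # correct password verifies
assert not check_password("hunter3", record)  # wrong one doesn't
```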

Hashes are where the "miners" come into play: bitcoins were initially generated and allocated (and still are) based on who can solve a mathematical problem. That problem is similar to breaking a hash function's (first) preimage resistance. But rather than having to find a preimage with a *specific* digest, the problem is to find a preimage whose hash is a *partial* match (for some specific number of digits) with a target digest string. So it's like an easier version of breaking preimage resistance, though it still requires the ability to do lots of (parallel) calculations -- because there is, by design, no shortcut to solving it but to try as many preimages as you can.
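
Here's a toy version of the miners' problem, at a far easier difficulty than the real network's: find a nonce so that the hash of the ledger data plus that nonce starts with a given number of zero hex digits. Note the shape of the search: nothing smarter than trying candidates one after another.

```python
import hashlib

def mine(data, difficulty=4):
    # Brute-force a nonce whose digest has `difficulty` leading zero hex
    # digits. By design there's no shortcut: just try nonce after nonce.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode("utf-8")).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("ledger up to block N", difficulty=4)
print(nonce, digest)  # digest begins with "0000"; expect ~16**4 = 65536 tries
```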

Anyway, that's about all for now, something for you to chew on and get some understanding of the whole thing. There’s still a lot left, but that should cover the pre-requisites.

Monday, May 23, 2011

Bitcoin mining rig is up!

Picture of my liquid-cooled box of 4 Radeon HD5870 cards, before closing up the case and actually getting it to work. It computes about 1.3 Gigahashes per second. (Click to enlarge.)

The side panel that closes it off (not shown) adds another large fan, which I inverted so it's sucking the hot radiator exhaust air out.

UPDATE 5/24/11: I've switched to the Phoenix miner, which somehow gets more hashes out of your card, so I'm now computing about 1.5 Ghash/sec.

Monday, May 9, 2011

Setting Inflation Straight, Part III (at least)

You ever noticed how inflation seems a lot worse than the official numbers indicate?

Via Yahoo, Fox Business reports on the change in prices for a sample of everyday grocery items. It shows quite a shocking increase over the past year, far more than you might suspect from the "tame" inflation numbers you hear about.

I've reproduced the prices from that article in the table below, showing the current, March 2010, and March 2006 values. (I couldn't find the numbers in the source cited, but will operate on the assumption they all refer to March of that year, even though it suggests they average over 12 months in the previous year; this would mean the results I calculate actually understate inflation.)

Yikes! The average 1-year price increase for this sample is over 8%!

So what is the official food price increase? The BLS CPI report on page 2 gives their aggregate 1-year food price increase as only 2.9%!!! And if you think 1 year is too short because of volatility, then look at the five-year food inflation numbers, a time period that covers the "massive" price collapse and "deflation" following the 2008 crisis onset: 4.7% per year.
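
As an aside, the annualization arithmetic is worth spelling out, since one-year and multi-year rates get conflated constantly. With made-up prices (the article's actual table isn't reproduced here):

```python
# Hypothetical grocery item: the prices below are invented for illustration.
p_2006, p_2010, p_2011 = 2.50, 3.00, 3.25  # March 2006, 2010, 2011

one_year = p_2011 / p_2010 - 1                           # simple 1-year increase
five_year_annualized = (p_2011 / p_2006) ** (1 / 5) - 1  # compound annual rate

print(f"1-year increase: {one_year:.1%}")
print(f"5-year annualized: {five_year_annualized:.1%}")
```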

And this is still:

- ignoring all quality debasements, and
- in an environment where banks are holding on to their massive reserves, suppressing price increases!

I've listed the corresponding prices for gold (though they're all relative to the present instead of March of any year), using an ETF (ticker symbol GLD) that tracks it. Looks like it works well (if a bit too well) as a barometer of dollar debasement. Hope you stocked up back then! (By a great coincidence, a financial advisor in April 2006 looked at me like I was insane for suggesting putting any money in gold.)

But don't worry, your iPad holding more memory will make up for this, I'm sure...

Tuesday, April 19, 2011

Friday, April 1, 2011

Setting CPI silliness straight ... again

Sorry for the long lull in posting, and I'm a bit late on this story too, but it's very telling. We have on our hands a new modern day Marie Antoinette (or at least the popular image of her): William Dudley of the New York Fed deigned to talk to a working-class audience in Queens, New York on March 11.

He tried to promote the tired line about inflation being low, a story this crowd, well, didn't find plausible:

"When was the last time, sir, that you went grocery shopping?" one audience member asked.

Then, for his "Let them eat cake" moment, Dudley brilliantly replied to these concerns of higher grocery prices with,

"Today you can buy an iPad 2 that costs the same as an iPad 1 [sic] that is twice as powerful," he said referring to Apple Inc's latest handheld tablet computer hitting stories on Friday.


No, Mr. Dudley. An equal-price, technologically-better iPad really doesn't cancel out my more expensive food, energy, tuition, rent, and health care costs. It just doesn't.

Now, some folks have tried sheepish defenses of this line: "Sure, that might not be the best way to say it, but he's ultimately right that you have to look at all prices, and not just narrowly focus on stuff you'd actually buy."

But even saying that much would be wrong. Remember, when central bankers want to promote the idea of how dreadful deflation is, they dismiss that pesky trend of computer hardware getting cheaper, a trend most people, for some reason, regard as a good thing -- not with the rabid hatred they're supposed to hold for deflation.

But central bank acolytes will always trivialize this phenomenon, saying that, no, that's not the kind of inflation we're worried about -- we only want to count the kind that's affected by money supply, money velocity, liquidity preference, that kind of thing -- not these technology-driven improvements!

And in a way, it makes sense. But at the same time, it certainly means you don't get to turn right around, abandoning the long history of deeming cheaper computer hardware irrelevant to inflation, and count higher iPad performance as somehow canceling out the inflation you do care about. It doesn't work that way. If technology-driven hardware performance isn't relevant to measuring inflation for purposes of monetary policy, you don't get to selectively invoke it at the specific times when you "need the numbers to be lower".

It's good to see some people calling the Fed on this.