Wednesday, December 10, 2014

Bitcoin mining pools attacking each other for profit?

So, after we posted our new Bitcoin book, my buddy Tim Swanson alerted me to the problem of Bitcoin mining pools attacking each other.

Reminds me of the recent Tax Interaction Effect post, where I had to unearth the core of a counterintuitive result: in this case, the claim that not redeeming some of your solutions can increase your return.

I don't think I'm entirely there, but I at least have an analogy to understand the mechanics of the attack.

Bitcoin mining model: As you might know, mining is like buying a bunch of (positive-sum) lottery tickets. A fixed reward is given out every hour, then divided equally among the winning tickets. Some people join into pools, where they buy tickets with the proviso that *if* it's a winning ticket, they share the winnings equally among all the tickets in their pool.

The attack: You use some of your money to buy tickets for someone else's pool (call it the "attacked pool"), but hide and destroy the winning tickets you bought for that pool.

The effect: There are fewer total wins per period. Each (non-destroyed) ticket gets a larger fraction of the hourly reward. The attacked pool gets a smaller fraction of the reward.

My response/confusion: This increases the return to all winning tickets, not just those of the attacking pool, so the attacking pool effectively subsidizes all the others, and dilutes the value of its own tickets across the set of all players.

But maybe I'm missing something here.
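For what it's worth, the expected-value arithmetic of the lottery model can be checked directly. Here's a toy Python sketch; the hashrate and pool-size numbers are made up, and everything is normalized so total honest ticket-buying sums to 1:

```python
def attacker_revenue(a, p, x):
    """Expected revenue share of an attacker with hashrate `a` who
    diverts `x` of it into a victim pool of size `p`, destroying
    (withholding) any winning tickets bought there.

    Destroyed tickets shrink the set of winners, so every surviving
    ticket's share is scaled up by 1/(1 - x).  The attacker earns:
      - (a - x) mining honestly for itself, and
      - a fraction x/(p + x) of the victim pool's payout p/(1 - x).
    """
    solo = (a - x) / (1 - x)
    infiltration = (x / (p + x)) * (p / (1 - x))
    return solo + infiltration

honest = attacker_revenue(0.2, 0.3, 0.0)   # no attack: just 0.2
attack = attacker_revenue(0.2, 0.3, 0.02)  # divert 2% of total hashrate
```

Under these made-up numbers the attacker's expected share rises from 0.200 to about 0.203: its infiltrating tickets collect a full member's cut of the victim pool's (inflated) payout, which can outweigh the subsidy leaking to everyone else. That, at least, is the flavor of the published "miner's dilemma" analyses, though the real models are more involved.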

Tuesday, December 9, 2014

Our new Bitcoin eBook is up!

Phew, been a while, eh? Well, Bob Murphy and I have a new free eBook up about the economics and mechanics of Bitcoin! Check the site for it, or, if you're too lazy, just go straight to the book itself.

Sunday, March 16, 2014

Tax interaction effects, and the libertarian rejection of user fees

Phew! Been a while, hasn't it?

I want to come back to the tax interaction effect (TIE) issue from previous posts, and go over what I think has been bothering me about the TIE-based argument against the carbon tax shift.

So, a high-speed review of why a carbon tax shift (CTS) is inefficient. The CTS, remember, involves a revenue-neutral reduction of taxes on capital (including land) and labor, replaced by a tax on carbon emissions -- specifically, those fuels that, when used, release carbon dioxide, in proportion to how much CO2 they release per unit.

Review of the argument


And why could it be inefficient? Well, the harm of a tax increases faster than its rate. To have a revenue-neutral CTS, you have to "focus" the tax -- i.e. raise the same revenue from a smaller class of goods. This necessarily means a higher tax rate on the "focused" goods, and therefore higher induced inefficiencies (compared to the broader tax). When you further note that these taxes will, in effect, "stack on" to the existing labor and capital taxes, then the inefficiencies are even higher -- that's the TIE -- and could even swamp the environmental benefit from the emissions reduction.
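The "focusing" step can be put in numbers. Using the standard rule of thumb that deadweight loss grows with the square of the tax rate, raising the same revenue from one-fifth of the base quintuples the loss. A toy sketch (the bases, revenue target, and constant are all made-up units):

```python
def deadweight_loss(revenue, base, k=1.0):
    """Rule of thumb: DWL ~ k * rate^2 * base, with rate = revenue / base.
    Holding revenue fixed, DWL is inversely proportional to the base."""
    rate = revenue / base
    return k * rate**2 * base

broad  = deadweight_loss(revenue=10, base=100)  # 10% rate on a broad base
narrow = deadweight_loss(revenue=10, base=20)   # 50% rate on a narrow base
# Same revenue, one-fifth the base: five times the deadweight loss.
```

And this is before the "stacking" onto existing labor and capital taxes, which makes the comparison worse still.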

But hold on. Those very same steps are a case against any correspondence between "who uses" and "who pays", whether or not the payment is a tax! That's because you can always point out how "concentrating costs" leads to disproportionate inefficiencies, even and especially for textbook "private goods".

That is, you could likewise say, "if people have to -- gasp! -- pay for their own cell phones, at $300 each, then that scares away all the people who can't pay $300 (after paying labor taxes, remember!), so you get an efficiency loss there. Plus, anyone who can steal the phone has a $300 incentive too, so people invest in ways to steal them, and you have to pay for countermeasures. Those go up quickly with the price of the good.

"Therefore, the government should just tax everyone to cover the cost, and then hand out the cell phones for free."

Wait, that doesn't sound right ...


What's wrong with that argument? Well, a lot. So much that you probably already know the answer. It's for the very same reasons that many advocate user fees for any good that's excludable. Generally, whoever benefits should be the one to pay. ("Cuius lubido, eius sumptum." -- "Whose desire, his expense.")

As with those reasons in favor of user fees, you can make the exact same argument regarding the purported inefficiency of a CTS:

"Yes, you get inefficiencies every time you concentrate costs like that. And yes, they disproportionately stack with whatever taxes you already had. But you need the fee structure to work that way in order to align incentives. The one who uses the scarce resource -- whether a cell phone, or atmospheric dumping capacity -- should be the one to pay for it, as this leads them to economize on the use of that resource, and if possible, route around it. That remains doubly so when exempting them from the expense would lead to further penalization of every other class of socially-useful activity."

And that, I think, goes to the core of my original balking at the CTS/TIE argument.

Saturday, November 23, 2013

Liberty vs efficiency: The real conflict

Liberty: Being free of constraints

Efficiency: Raising the state of the world as high as possible on everyone's preference ranking (or some aggregate measure thereof)

You might have heard of Amartya Sen's Liberal paradox, which purports to show that the two necessarily conflict. Of course, as I said a while back, it does no such thing; it only shows a problem with preventing people from waiving their liberties when they find it preferable to do so.

However, there is a real sense in which those two conflict, and it becomes most apparent in discussions of taxation, and how to make it better.

The conventional economist's view is that "The ideal tax system is the one that hurts efficiency the least."

But there's another view, exemplified by the Murphy article that I linked in my last post: "The ideal tax system is the one that's easiest to opt out of."

Naturally, these really do conflict. Why? Because generally speaking, if you want to levy a tax that merely transfers purchasing power to the government, without also forcing people to bear other hardships, you have to do it by taxing goods with inelastic demand, like energy: people will not respond to such a tax by buying less of the good, and it is precisely that kind of induced change in behavior that would indicate a reduction in efficiency.

But the harder a tax is to avoid, the harder it is to "opt-out" of!

So if you think it's good for people to be able to legally reduce government revenues by abstaining from a product at relatively little cost to themselves, then "economically efficient taxes" are no longer an unvarnished good, as they come at the direct expense of the goal of making it easier for people to change their behavior in a way that routes around taxation.
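The tension can be sketched with the textbook Harberger-style approximation, where deadweight loss is roughly half the elasticity times the square of the rate times the tax base; the elasticity numbers below are invented for illustration:

```python
def harberger(rate, elasticity, base=100.0):
    """Approximate deadweight loss of a tax, plus the quantity
    reduction (how much people 'opt out' by buying less)."""
    dwl = 0.5 * elasticity * rate**2 * base
    quantity_drop = elasticity * rate  # fraction of purchases avoided
    return dwl, quantity_drop

# Inelastic good (e.g. energy) vs. an elastic good, same 20% rate.
dwl_inelastic, escape_inelastic = harberger(0.20, elasticity=0.1)
dwl_elastic,   escape_elastic   = harberger(0.20, elasticity=1.5)
# The efficient tax (low DWL) is exactly the one that's hard to escape.
```

The two goals pull in opposite directions by construction: both the deadweight loss and the ability to opt out scale with the same elasticity.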

This, I think, is the true conflict between efficiency and liberty, as it doesn't hinge on confusing rights and obligations.

Saturday, November 9, 2013

I explain tax interaction effects (because I think the experts can't)

So it turns out there's a serious argument (HT and text summary: Bob Murphy) that a "green tax shift" may be welfare-worsening rather than welfare-improving. (The green tax shift is where you cut taxes on labor and capital while raising them on environmental "bads" like CO2 emission.)

Huh? How can a tax shift off of bads and onto goods be welfare worsening? It seems the argument is somewhat subtle; even Bob Murphy dismisses clarification requests in the comments, pleading that "it’s hard to point to 'what’s driving the result' except to say, 'Adding the carbon tax drove the result.'"

Well, it's actually not that hard, but the standard expositions don't make it explicit. After reading another of Murphy's articles, it finally clicked for me, although the better explanations still hid the true mechanism in unstated assumptions. Here's how I explained it in the comments (cleaned up a bit and sourced).
****
I think I have an explanation that conveys the intuition.

Insight 1: the harm of a tax is more-than-proportional to its magnitude. (This is the assumption that the writing on this topic seems to take for granted, and which I wish were made explicit here and in your article.) Mankiw gives the rule of thumb that the deadweight loss of a tax increases with the square of the tax rate. That's why you want to raise a given amount of revenue from as “broad a base” as possible -- to lower the rate each tax has to be.

Insight 2 (most important): Because of the above, each increase in tax above the Pigovian level is more harmful than the same increase from zero.

Insight 3: Taxes on anything chase back to their original land/labor/capital factors. So a carbon tax amounts to a tax on land, labor, and capital, divided up per their relative supply/demand curve elasticities (slopes).

Given the above, the intuition becomes a lot clearer: a tax on carbon is like an income tax (with different levels for different kinds of income). Levied at the Pigovian rate, it merely cancels out the carbon harms. But if you have an additional (direct) income tax, you get a disproportionate harm for each (potentially) taxed dollar above the Pigovian level (compare to taxing from the first dollar) — *that* is the tax interaction effect.

Furthermore, since the “green tax trade” tries to raise the same revenue on a smaller base (i.e. only those income sources touching carbon), the tax rates have to be much higher than they would be if they were on all income. This then causes major welfare-harming changes in behavior, far out of proportion to the assumed harms from carbon.
****
Problem solved, right?

Well, no; Bob insists that Insight 1 is irrelevant to the argument. But I don't see how this can be; you can only get the bad "tax interaction effects" if the tax's harms are more-than-proportional to ("superlinear in") the tax rate.

If it's merely proportional, the taxes don't "interact" at all -- raising taxes by 1 percentage point (on any kind of income) does just as much additional harm whether it's on top of an existing 6% tax or of no tax at all. But when it's more than proportional, then that extra point of tax is (badly) "interacting" with whatever other taxes got it to that level. This is the key insight: that having income taxes in addition to the (implicit income tax resulting from a) carbon tax means those taxes are doing more harm than they otherwise would.
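A quick arithmetic check of that paragraph, using the quadratic rule of thumb from Insight 1 (units are arbitrary):

```python
def dwl(rate, k=1.0):
    """Mankiw's rule of thumb: deadweight loss grows with the
    square of the tax rate."""
    return k * rate**2

# Marginal harm of one extra percentage point of tax:
from_zero   = dwl(1) - dwl(0)  # 1 unit of harm
on_top_of_6 = dwl(7) - dwl(6)  # 13 units: the extra point "interacts"
# If dwl were linear in the rate, both differences would be identical,
# and the taxes could not interact at all.
```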

Likewise, if the harm (deadweight loss) of a tax were less than proportional to (sublinear in) the rate, then they would interact in the opposite way. It would make sense to have as few distinct taxes as possible, on as small a base as possible, with as high a rate as possible -- because in that case, each additional increase in the tax rate hurts less than the previous one. (Obviously, we don't live in that world!)

I note, with some irony, that this point ultimately reduces to the reasoning behind standard mainstream economists' tax advice to "lower the rates, broaden the base", a mentality Bob actually criticized in another context...

Thursday, November 7, 2013

No politician has ever lied

Because gosh, you'd have to be an idiot to believe them in the first place, says Steve Landsburg.

Thursday, July 11, 2013

My discovery of "semantic security"

One interesting thing I forgot to mention in the previous post about homomorphic encryption: the concept of semantic security.

It was actually a major stumbling block for me. When I got to the passage that mentions the concept, the author casually remarks that "semantic security is an expected feature of public key cryptosystems", and then defines the term as follows: a system is semantically secure if, given two plaintexts and the ciphertext of one of them, an attacker cannot do better than chance in guessing which plaintext goes with that ciphertext.

That didn't make sense because I had always assumed that the defining feature of public key cryptography was that the attacker is permitted unlimited chosen-plaintext attacks, which -- I thought -- means that the attacker always gets to know what ciphertext goes with any plaintext. So how can you make it so that the attacker can -- as required for public key encryption -- produce valid ciphertexts from arbitrary plaintext, and yet still have a semantically secure cryptosystem? Couldn't the attacker just encrypt both plaintexts to figure out which one corresponds to the given ciphertext?

What I missed was that you can use a one-to-many cipher: that is, the same plaintext corresponds to many ciphertexts. What's more, it's actually trivial to convert a plain-vanilla one-to-one public key cipher into a one-to-many semantically secure version. Here's how: just before applying the encryption step, generate a random number (a "nonce") and append it to the plaintext, with the proviso that the recipient will look for it in that position and strip it off after decryption.

This way, in order for an attacker to try to guess the plaintext in the game above, it's no longer enough for them to simply encrypt both plaintexts: a random number was inserted in the process. This means that in order to find a match between a plaintext and a ciphertext, the attacker must encrypt each plaintext with every possible nonce, which requires resources that increase exponentially with the size of the nonce used. (That is, an n-bit nonce can have 2^n possible values.)
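Here's the guessing game in miniature. The "encryption" below is just a keyed hash standing in for the encryption step of a real public-key cipher (it isn't decryptable, but the distinguishing game only requires the attacker to *encrypt*), so all the names are illustrative:

```python
import hashlib
import os

def toy_encrypt(public_key: bytes, plaintext: bytes, nonce: bytes = b"") -> bytes:
    """Stand-in for a public-key encryption step: deterministic when
    the nonce is empty, one-to-many when a random nonce is appended."""
    return hashlib.sha256(public_key + plaintext + nonce).digest()

pk = b"everyone-knows-this-key"
m0, m1 = b"attack at dawn", b"retreat at dusk"

# Deterministic cipher: the attacker wins every time, simply by
# re-encrypting both candidate plaintexts and comparing.
challenge = toy_encrypt(pk, m1)
assert toy_encrypt(pk, m1) == challenge  # match found; attacker wins

# With a random nonce appended, re-encryption no longer matches:
challenge = toy_encrypt(pk, m1, nonce=os.urandom(16))
assert toy_encrypt(pk, m1) != challenge  # naive re-encryption fails
# Matching now requires trying all 2^128 possible 16-byte nonces.
```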

The more you know (tm).

Friday, May 24, 2013

"I added your numbers, and I have no idea what they are."

So it turns out there's a thesis arguing that polynomial-time, fully homomorphic encryption is possible. (Link is to the dumbed-down -- but still journal-published -- version that mortals like me are capable of understanding.)

It's hard to overstate the significance of this. This means that it's possible for you to give someone your data in encrypted form, and for them to execute arbitrary operations and give it back to you, without ever knowing what the data is. That is, they transform an input ciphertext to an output ciphertext such that when you decrypt the output, you have the answer to your query about the data, but at no point did they decrypt it or learn what was inside.

In other words: "I just calculated the sum of the numbers you gave me, but I have no idea what the sum is, nor what any of the numbers are."

If it sounds impossible, it's not because you misunderstand it, but because that kind of thing shouldn't be possible -- how can you perform arbitrary operations on data without learning something about it? Sure, maybe there are edge cases, but a rich, Turing-complete set?
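Addition alone is, in fact, one of those easy edge cases: a one-time pad over the integers mod N is already additively homomorphic. The toy sketch below (nothing like Gentry's actual construction) lets a server sum numbers it never sees:

```python
import random

N = 2**32  # work modulo a public N

def encrypt(m, key):
    return (m + key) % N

def decrypt(c, key):
    return (c - key) % N

# Client: encrypt each number under a fresh one-time key.
data = [17, 4, 21]
keys = [random.randrange(N) for _ in data]
ciphertexts = [encrypt(m, k) for m, k in zip(data, keys)]

# Server: adds the ciphertexts without learning anything about the data.
encrypted_sum = sum(ciphertexts) % N

# Client: decrypting under the sum of the keys recovers the true sum.
assert decrypt(encrypted_sum, sum(keys) % N) == sum(data)
```

Of course, this handles only addition, with single-use keys; the hard part Gentry solved is supporting both addition and multiplication, indefinitely, under one key -- that's what "fully" buys you.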

It would mean that "the cloud" can ensure your privacy, while *also* doing useful operations on your data (as the author, Craig Gentry, goes to great lengths to emphasize).

As best I can tell from the paper, here's the trick, and the intuition why it's possible:

1) The computation must be non-deterministic -- i.e. many encrypted outputs correspond to the correct decrypted output. This is the key part that keeps the computation provider from learning about the data.

2) The output must be fixed size, so you have a sort of built-in restriction of "limit to the first n bytes of the answer".

3) It does require a blowup in the computational resources expended to get the answer. However, as noted above, it's only a polynomial blowup. And thanks to comparative advantage, it can still make sense to offload the computation to someone else, for much the same reason that it makes sense for surgeons to hire secretaries even when the surgeon can do every secretarial task faster. (Generally, when the provider's opportunity cost of performing the task is less than yours.)

4) Finally, to be fully homomorphic -- capable of doing every computation, not just a restricted set of additions and such -- the encrypted computation has to find a way around the buildup of "noise" in the computation, i.e. properties of the output that put it outside the range of what can be decrypted (due to exceeding the modulus of the operation needed to extract the output). And to do that, in turn, its operation set must be sufficient to perform its own decryption.

I'm only about halfway through the paper, but it's been really enlightening to get the intuition behind why this kind of thing can work.

Monday, April 15, 2013

Our latest work on Bitcoin

Bob Murphy and I have collaborated on an attempt to explain Bitcoin -- and its economic implications -- for the masses.

For the three of you still following this blog, check out the link.

Thursday, November 1, 2012

Disaster Keynesianism -- Say something responsive for once!

Last day in Budapest for now, leaving in a few hours. But it looks like the topic of the day is the economics of Hurricane Sandy, and, as with any discussion of economics during a natural disaster, whether it will be "good for the economy".

Needless to say, this is a discussion that has happened several times already. Still, engagement with the other side's arguments is always good -- as long as you're actually, well, engaging, rather than extending and reinforcing a non-responsive (or no-longer-responsive) point.

Which brings us to pseudo-contrarian Steve Landsburg's latest pseudo-contribution to the matter. He thinks he has an even more devastating critique of the "hurricanes can be good for the economy" claim by posing this:

ask your opponent whether it’s “good for the ants” when you put a stick down their anthill, wiggle it around and destroy their infrastructure. Go ahead and acknowledge that this can sure put a lot of ants to work.

Or, for that matter….

Ask if spilling ink on the living room rug is “good for your household’s economy” because of all the cleanup work you’ll do.

Of course, this doesn't actually address the Keynesian's central point, because their claim is that normally such acts are destructive, but need not be so when there are idle resources (found after a quick search).

To make absolutely sure I'm not misunderstood, please read these caveats if you plan on responding:

- I don't agree with the Keynesian "idle resources" argument, and have said as much before.

- I realize that Keynesians (and their critics) acknowledge that there are always better ways to do economic stimulus than a natural disaster -- just employ those otherwise-would-be-disaster-response-resources to do something that's not completely wasteful.

And yet there's no mention of the relevance of idle resources in Landsburg's post, nor from the army of back-slappers and hangers-on who dominate the beginning of the discussion. When it finally does come up, it's from critics who offer surprisingly good analogies, like commenter "Brian", who compares a stagnant economy to laziness ("akrasia") in an individual:

Suppose Billy Joe has been in bed for years. He’s overweight and unmotivated. His life appears to continue to spiral out of control as he watches reruns of every horrible show made from the 1970′s on. But when that ink falls on the floor, this finally gave him a reason to get out of bed and clean up the mess, and the mere activity of it kick started him into action of doing thins again, and even being motivated [sic]

And the defenders of the post (I guess *not* surprisingly) miss the point: of course making new windows is better than fixing broken ones, but that's not an option here. Landsburg himself does that in this comment:

... this is ridiculous, on Keynesian grounds or any other. If you believe it’s important to hire idle resources in order to “stimulate the economy”, then you don’t have to wait for a hurricane — you can hire people to build *new* bridges instead of having them rebuild old ones. The hurricane does not in any way expand your set of policy options; it only destroys stuff.

Except, of course, that it does expand your options, since by supposition, policy makers won't allocate funds for public works projects that build new windows, but will gladly fund projects to restore the windows that were broken in the disaster. (To reiterate: I disagree that such public works funding -- whether for building or fixing -- is a good idea for "helping the economy"; this is simply about appreciation of one's opponent's arguments and responsiveness thereto.)

***
My point here is that if you want a really hot one-line zinger for why the "hurricanes good for economy" meme (in its most intelligent form) is wrong, you're going to have to do a lot more than just say that destruction is bad. No -- you're going to have to show why destruction is not "better than nothing" if its effect is to put (only) idle resources to use, thus giving people the dignity of a job and practice of their skills, when you don't have the option (for e.g. political reasons) of simply employing those idle resources to build on top of existing wealth.

What's that, you say? It's hard to give a concise, fun explanation of why that thinking is wrong? Well, it should be. Two-sided political debates tend to be like that. My shortest debunking is at least this long.

Can you do better? Perhaps. But it won't be by invoking the ten millionth permutation of "destruction is bad, m'k?".

Wednesday, September 19, 2012

What's up with further vs farther?

Yeah, I know it's been a while since I posted, and I've kinda let this blog die off, but I figure, better late than never, right?

So, to start with just a random thought: Why is it that "farther" seems to be the only word we can't use figuratively? The standard explanation is that "farther" is only used for literal distance, while "further" can be used in a figurative sense.

What's up with that? What other word do we have this rule for? As I understand it, you get to use any word you want in a figurative or metaphorical sense. Why not "farther" as well? (I guess the one other example would be "literal", which I oppose the figurative use of, since, ya know, it's the one word that's supposed to actually distinguish the two cases, and without which we can't even speak of the difference.)

It just doesn't make a whole lot of sense.

[Insert usual remark about how I composed this entire post, including navigating to and copying the link, without the mouse or trackpad.]

Saturday, May 12, 2012

Setting naming conventions for international audiences straight.

So the place in Budapest where I'm staying is called the K9 Residence.  What I need to say next depends on your native language.  Please skip to the subheading that best describes you.

Native English Speakers


No, the place doesn't have anything to do with dogs, nor can one jokingly say that they "treat you like one".  The name comes from how it's number 9 on the street Karoly Korut, and no one ever alerted them, apparently, that K9 is a common shorthand for "dog" in English.  (Or perhaps they did learn that much, but deemed it too late to change.)

Non-Native English Speakers


Hey, did you know that in English, K9 is a common symbol or abbreviation for "dog"?  Yeah, it comes from how it's pronounced like "canine", the adjective for dog based on its Latin root canis.  Remember that movie K-9?  Yeah, kinda like that.

****
Anyway, I don't expect all of this internationalization to go perfectly for everyone, but, well, y'all could have saved me from having to explain stuff to a lot of people of different native tongues...

Saturday, May 5, 2012

I'm going to Hungary tomorrow.

That is all.

Monday, April 23, 2012

Taking the anti-mouse campaign ... to the next level

Wow, did you guys know that you can buy domain names now?

And that no one had yet taken "tyrannyofthemouse.com"?

Well, I snapped it up, and it will include a showcase of my programming projects soon.

I also managed to move this blog to the subdomain blog.tyrannyofthemouse.com (as you might have already noticed).  Don't worry, all the old links will still work!

Sunday, February 26, 2012

Ending the tyranny of the mouse -- in web browsing

Since I plan to program professionally, I've ramped up my efforts to get by without a mouse, and I thought I'd share some key tools I've used to accomplish this.

For web browsing, the key is Pentadactyl, a Firefox extension that lets you do the things you want from the keyboard. (I would say all the things, but some websites are written so as to be unfriendly to it.) For example, if you want to click on a link, you hit f, and it pops up a keyboard code over every link, and you type the code to "click" on it. Here's what it looks like when you use it:



Other useful features are:
back/forward = shift+H / shift+L
page down/up = space bar / shift+space bar (these work without Pentadactyl)
half page down/up = ctrl + d / ctrl + u
search = / (yes, the slash key), then enter, then n/shift+N to search down/up
go to URL = o, space, [page url] (if you've entered something similar before, you can tab through the cached options)
go to URL in new tab = replace "o" in the above with "t"
open link in new tab: ;t , then it pops up hints as if you had pressed f but opens in a new window

However, you need to configure it a bit in order to get the most out of it. For example, as initially installed, it will remove your ability to use the familiar ctrl+c/v/a (for copy/paste/select all) due to its being based on the text editor Vim. Also, the hint keys (the letters used to build the code you type to "click" a link) are set by default to draw from the numbers 0-9, which are less comfortable to type every time you click on a link, and they are displayed too small to read.

To set your configurations, you need to create/edit a file called ".pentadactylrc" in your home directory. Here are the contents of mine, which fix the above problems:

"1.0rc1

loadplugins '.(js|penta)$'
group user
highlight Hint font: bold 10px "Droid Sans Mono", monospace !important; margin: -.2ex; padding: 0 0 0 1px; outline: 1px solid rgba(0, 0, 0, .5); background: rgba(255, 248, 231, .8); color: black; font-size: 14pt !important;
map -count -modes=i,n,v <C-c> <count><Pass>
map -count -modes=i,n,v <C-v> <count><Pass>
map -count -modes=i,n,v <C-a> <count><Pass>
map -count -modes=i <C-a> <Pass>
map -count -modes=i <C-x> <Pass>
set guioptions=bCrsmT
set hintkeys=asdfwervcxtgq
set hinttimeout=500

" vim: set ft=pentadactyl:


Enjoy breaking the tyranny of the mouse! (And yes, I composed this entire post, including creating the link, without using the mouse. I may have mentioned that a few times before.)

Wednesday, February 1, 2012

The secret unleashed: I will become a software developer!

I know I've kept my readers kind of in the dark, but I just left my day job to attempt a career change into software development and otherwise make sense of all the projects out there that I could improve. I'll be flying to San Francisco today for it, where I'll stay for at least 2-3 months, residing on the luxurious (s)Nob Hill.

Here is the site for the "Developer bootcamp" program, for which I'll be in the Feb/March cohort.

Flight leaves at 1:30 pm, wish me luck!

Wednesday, January 18, 2012

I'm such a twit...

I didn't think I'd ever see the day, but because of that "thing I applied for" (more on that later) I had to get a Twitter account, and as luck would have it, @SilasBarta was available and I took it.

So you can be reassured that @SilasBarta isn't some dastardly soul trying to impersonate me.

(I was going to make another remark about having made this post purely through the keyboard, but Twitter ruined it ... some of their clickable buttons can't be recognized by Pentadactyl, the Firefox extension that tries to let you do everything from the keyboard but gets stymied by poor web design.)

Wednesday, January 11, 2012

Mr. Ford, meet Boeing

You know how it's become a sort of cliche/folk-economics to say that "You should pay your workers enough so that they can buy the product you sell?" It's supposed to be what gave Henry Ford I his tremendous success with the Model T, and has become a staple of union bargaining.

For a recent example of this line of thought, here's none other than (former Secretary of Labor) Robert Reich arguing it, complete with reference to the Model T story.

Well, it recently occurred to me how underpaid I am. My employer modifies and sells large aircraft. No way can I afford that!!!

Did somebody say "raise"?

(This post made entirely without use of the mouse -- including for looking up and copying over links -- thanks to the use of the Firefox Pentadactyl extension. Give it a whirl!)

Addendum: To clarify, Boeing is not my employer, just a synecdoche for large aircraft manufacturers in general.

Saturday, December 31, 2011

Broken Windows, Part I: The Pain of Hard Choices

This will be the first in a series where I spell out an underappreciated concept in economics and how it leads many economists astray in proposing solutions to economic problems. I figured I better get a start on it before the New Year.

Recently, I've gained some insight into the economic debates between the various camps that claim to have a solution to our current problems. In addition to tying up some loose ends regarding a century-old debate, this insight gave me a good explanation of why standard dismissals of the so-called recalculation story (in explaining recessions like the current one) are making a subtle error.

First, a high-speed recap: Way back in the 1800s, Bastiat described what is known as the "Broken Window Fallacy" to refute the prevailing economic wisdom of the age. Many believed that a vandal who broke a window could be doing the economy a favor, reasoning that the owner would have to hire a glazier to fix the window, who would have new money he could use to buy new shoes, which would give the shoemaker the chance to buy something he wanted, and so on. (Note the early shades of the "multiplier effect" argument.)

Bastiat replied, basically, that no, this doesn't quite work, because you have to account for the "unseen" loss to the window owner, who would have engaged in the exact same economic stimulation as the glazier, had the window not broken, because he would have been able to buy something he wanted -- and we'd get to keep the window, to boot!

This mention of the Broken Window Fallacy is often brought up in response to proposed Keynesian solutions (involving government stimulus spending), where their opponents say that such spending makes the same error, neglecting the unseen economic activity that would go on in the absence of the government's spending.

Keynesians, in turn, reply that the Broken Window Fallacy only applies at "full employment", where there is no "crowding out" (i.e. forgone projects due to the government's use of resources for different ones). In a depressed economy, they argue, the alternative to a metaphorical broken window (along with its fixing) is not "the window owner buys something else", but rather, "the window owner hoards that money", providing no economic benefit. Therefore, breaking a window in such a case would not have an economic opportunity cost, and so could indeed be good for the economy -- though Keynesians of course admit there are much better ways to increase employment than breaking a window.

The back-and-forth goes on, of course, with each side claiming that the other's position implies or relies on an absurdity. Keynesians accuse the free-market/"Austrian" types of thinking the economy is always optimally using resources, while Austrians accuse the Keynesians of calling a hurricane "God's gift to depressions".

But here, I think, I've noticed something that tremendously clarifies the debate, and gives us insight into why economic activity does or doesn't happen, and why certain events are or aren't good. So, here goes.

*******

Let's go back to the original Bastiat thought experiment about the broken window. Ask yourself this: Why are we assuming the window will be fixed at all?

Don't misunderstand me: it's a reasonable assumption. But we have to be careful that this assumption isn't fundamentally ignoring relevant economic factors, thereby baking in a desired conclusion from the very beginning. And here, I think we have good reason to believe that's exactly what's going on.

So let's start simple: under what circumstances would it not be reasonable to assume that the window will be fixed (i.e., that the owner will choose to pay someone to fix it), even during a depression? That's easy: if the neighborhood (along with that building) is run-down to begin with, already littered with broken windows. A lone broken window merits a quick repair, but if it's yet-another-broken-window, why bother? (Note here the substantive similarity to criminology's homonymous "broken windows" effect!)

So here we see the crucial, unappreciated factor: the obviousness of certain production decisions. What these thought experiments -- carefully constructed to make a different point -- actually prove is the importance of being able to confidently decide what is the best use of resources. And we can step back and see the same dynamic in very different contexts.

For example, say an unemployed guy, Joe, is trying all different kinds of things to find a job, and nothing is working. Then, while driving one day, he makes a wrong turn and steers his car off a bridge into the river below. Not good. But there is one teensy-weensy good part: it's a lot easier to prioritize! Previously, Joe didn't know what he should do to make optimal use of his time. Now, he knows exactly what he needs to work on: avoiding death from falling into a river!

And we can step back even more and generalize further: what we are seeing is but a special case of the law of diminishing returns. Abstractly, each additional unit of satisfaction requires a greater input of factors: land, labor, capital ... and thought (sometimes called "entrepreneurial ability"). Generally, the higher up the tree you've already picked, the harder it is to reach the next branch, in terms of any factor of production, including and especially thought. Conversely, if you suddenly face a sharp drop in satisfaction by being deprived of more fundamental necessities, it becomes easier to decide what to do: replace those necessities!

***

That should give you a taste of what I think is missing from discussions of the economic impact of natural disasters and of the inability to reach full employment. In the next entry, I'll go further, illustrating how deeply this oversight impacts the ability to do good economic analysis.

Sunday, December 11, 2011

EXCLUSIVE: Silas's bitcoin mining rig tell-all!

As part of an application I filled out recently (more on that in the future), I explained everything I went through to get my bitcoin mining rig up and running. But why bury that story somewhere only one person will read it? Nay, my readers ought to hear it as well! So, here's the story, with a photo album for each stage at the end.

***

In February 2011, I learned about Bitcoin and the feasibility of building a "mining rig" (a machine that quickly computes lots of SHA-256 hashes) to make money by generating bitcoins, which trade against dollars at a varying rate. Though I had never built a custom box before, the idea of setting up a mining rig excited me.
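To make "computes lots of SHA-256 hashes" concrete, here's a toy Python sketch of the search a rig performs: repeatedly hash a candidate block header (twice, per Bitcoin's convention) until the result falls below a difficulty target. The header bytes and target here are made up and vastly easier than Bitcoin's real difficulty; this illustrates the mechanism, not working miner code.

```python
import hashlib

def double_sha256(data):
    """Bitcoin hashes block headers with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix, target, max_nonce=2**20):
    """Try successive nonces until the hash, read as an integer,
    falls below the difficulty target; return the winning nonce."""
    for nonce in range(max_nonce):
        candidate = header_prefix + nonce.to_bytes(4, "little")
        # The hash is conventionally compared byte-reversed.
        if int.from_bytes(double_sha256(candidate)[::-1], "big") < target:
            return nonce
    return None  # no solution in range; real miners would vary the header

# A deliberately easy target (top 8 bits must be zero), found quickly.
nonce = mine(b"toy-block-header", target=2**248)
```

A GPU does exactly this, except billions of times per second and in parallel, which is why mining rigs are built around graphics cards rather than CPUs.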

I looked over some rig designs in the Bitcoin.org wiki and, based on those, designed a custom setup (4 GPUs connected to a single motherboard in a large case) that I figured would achieve the highest rate of return, and ordered the parts. Some graphics cards (the Radeon 5970) had already been bid up to unprofitability by other rig builders, so I picked a slightly slower one (the Radeon 5870) that cost a fraction as much.

Over the course of putting it together I ran into a number of problems, any one of which could have shut down my plans, but I kept trying different things until I overcame them.

First, since I hadn't built a computer from (near) scratch before, I had to learn which parts (motherboard, SSD, CPU, RAM, PSU) went where, and how to route the wires cleanly. Then, on bootup, the BIOS didn't see the hard drive; I traced the problem to a part of the case's internal wiring that wasn't passing the SSD's SATA connection through, so I bypassed it and plugged the drive directly into the motherboard.



Then, after installing Ubuntu, I had to download the exact set of ATI drivers the mining code required. It turned out the latest drivers interfered with the mining code, so I had to track down an earlier version that AMD no longer promoted (or even really acknowledged); per the forums, you had to enter the URL manually, since nothing linked to it anymore. That finally let me mine with the first GPU, exploiting its ability to do hash calculations in parallel.

After configuring the GPU to send its computations to a mining pool (a group of miners that combines computations to get a more predictable solution [hash inversion] rate), I opened up the box again to add the second GPU. (I had decided early on to add them one by one, to make sure I understood what was going on at each stage.) Getting both to work together introduced another problem: they would somehow keep adjusting their combined hashing rate down to the level of a single GPU. This meant another trip to the forums to learn of new software to install, which still didn't work after numerous configurations, so I wrote down the whole process up to that point and re-installed the operating system. (I ended up doing this several times, as a last resort, at different stages.)
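As an aside on how pools divide earnings: each miner submits "shares" (near-solutions that prove work was done), and when the pool finds a block, the reward is split in proportion to shares submitted. Here's a minimal sketch of that proportional scheme, with made-up miner names and share counts (pools of the era used this and several fancier payout variants):

```python
def proportional_payout(block_reward, shares):
    """Split a found block's reward in proportion to each miner's
    submitted shares (near-solutions proving work was done)."""
    total = sum(shares.values())
    return {miner: block_reward * count / total
            for miner, count in shares.items()}

# Toy numbers: the 50 BTC block subsidy of 2011, split three ways.
payouts = proportional_payout(50.0, {"me": 200, "alice": 500, "bob": 300})
# payouts["me"] == 10.0
```

The point of joining a pool is variance reduction, not higher expected earnings: a solo miner with a small rig might go months between blocks, while pool shares pay out a steady trickle.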



Once I got all 4 GPUs and a hardware monitor installed, I was able to get excellent hashing performance, but I soon noticed that four high-power GPUs packed so closely together heated up to unacceptable temperatures, so I took two out. That solved the temperature problem, but I still wanted all four running, so I looked into better cooling solutions. (For a short while I ran three cards safely by leaving one side of the case open and pointing a box fan at the cards, though this was obviously very inconvenient and wouldn't permit safe overclocking.)

It turned out that liquid cooling was my only option, which I had also never set up before. Nevertheless, I went forward and found a cooling-block model (i.e., a replacement for the OEM GPU heat sink) that would fit my cards, as well as a cooling kit (pump, radiator, reservoir, tubing). I also ordered some fittings that would connect the blocks directly to each other, minimizing the need for tubing.

When the cooling blocks arrived, it turned out they didn't fit: the particular variant of the Radeon card I was using had a non-standard PCB design (which I hadn't realized was possible). So I sent the blocks back, found ones guaranteed to match this specific design, and ordered those. Finally I was able to attach a block to each of the 4 GPUs. I then hit another problem with the block-to-block connectors: their directions were ambiguous, they had to go into a tight spot, and I couldn't figure out how to install them, so I asked a more home-improvement-savvy friend how they worked.



I eventually got the connectors installed, but ran into another problem: because of space constraints, the tubing would require bends that were too sharp. I figured I needed a 90-degree angle fitting, but I couldn't get one at a local hardware store, because PC cooling parts use British pipe threads, which are incompatible with the fittings American stores carry. After finding a compatible one online, I realized that every day the rig wasn't running was costing me money, and this was the only part holding it up, so I had it express-shipped to arrive the next day, which let me finish setting up the liquid cooling system.

I then had to make a few choices about which way to point the radiator airflow and otherwise maximize cooling capacity. I eventually settled on a design in which the radiator drew air from the room and dumped its exhaust into the case, a downside I partially mitigated by flipping one of the case fans to bring external air in rather than blow it out.

At this point there were fewer setbacks, but I was hesitant to circulate coolant inside the system before ensuring there were no leaks. So, before closing the loop and adding the coolant, I set up some leak tests: I filled the system with distilled water, left an open tube at the top, sealed the other end, and put plenty of towels around the potential leak points.



With this test configuration, I blew into the open tube, figuring that if the loop could withstand this pressurization without leaking, I could be more confident about actual fluid circulation. Fortunately, none of the tests showed a problem, and I got the "production" liquid cooling system running.



Finally, I had all four GPUs running overclocked, generating bitcoins for me through a mining pool, and staying at temperatures significantly below what I had gotten before. I further optimized performance by switching to new mining software, experimenting with settings, and then saving a file with the commands that brought the rig up optimally.

There were still some other kinks to sort out, like what to do about the immense heat the rig put out into the rest of my place, and how to monitor the mining pool status, but that about covers everything. Now, for the pictures:

Final result

The various stages (no captions, sorry):
Initial set-up
Putting four graphics cards in
Replacing OEM heat sinks on graphics cards with waterblocks
Installing the liquid cooling system