This will be the first in a series where I spell out an underappreciated concept in economics and how it leads many economists astray in proposing solutions to economic problems. I figured I better get a start on it before the New Year.
Recently, I've gained some insight into the economic debates between the various camps that claim to have a solution to our current problems. In addition to tying up some loose ends regarding a century-old debate, this insight gave me a good explanation of why standard dismissals of the so-called recalculation story (in explaining recessions like the current one) are making a subtle error.
First, a high-speed recap: Way back in the 1800s, Bastiat described what is known as the "Broken Window Fallacy" to refute the prevailing economic wisdom of the age. Many believed that a vandal who broke a window could be doing the economy a favor, reasoning that the owner would have to hire a glazier to fix the window, who would have new money he could use to buy new shoes, which would give the shoemaker the chance to buy something he wanted, and so on. (Note the early shades of the "multiplier effect" argument.)
Bastiat replied, basically, that no, this doesn't quite work, because you have to account for the "unseen" loss to the window owner, who would have engaged in the exact same economic stimulation as the glazier, had the window not broken, because he would have been able to buy something he wanted -- and we'd get to keep the window, to boot!
The Broken Window Fallacy is often brought up in response to proposed Keynesian solutions (involving government stimulus spending): opponents argue that such stimulus makes the same error, neglecting the unseen economic activity that would go on in the absence of the government's spending.
Keynesians, in turn, reply that the Broken Window Fallacy only applies at "full employment", where there is no "crowding out" (i.e. forgone projects due to the government's use of resources for different ones). In a depressed economy, they argue, the alternative to a metaphorical broken window (along with its fixing) is not "the window owner buys something else", but rather, "the window owner hoards that money", providing no economic benefit. Therefore, breaking a window in such a case would not have an economic opportunity cost, and so could indeed be good for the economy -- though Keynesians of course admit there are much better ways to increase employment than breaking a window.
The back-and-forth goes on, of course, with each side claiming that the other's position implies or relies on an absurdity. Keynesians accuse the free-market/"Austrian" types of thinking the economy is always optimally using resources, while Austrians accuse the Keynesians of calling a hurricane "God's gift to depressions".
But here, I think, I've noticed something that tremendously clarifies the debate, and gives us insight into why economic activity does or doesn't happen, and why certain events are or aren't good. So, here goes.
*******
Let's go back to the original Bastiat thought experiment about the broken window. Ask yourself this: Why are we assuming the window will be fixed at all?
Don't misunderstand me: it's a reasonable assumption. But we have to be careful that this assumption isn't fundamentally ignoring relevant economic factors, thereby baking in a desired conclusion from the very beginning. And here, I think we have good reason to believe that's exactly what's going on.
So let's start simple: under what circumstances would it not be reasonable to assume that the window will be fixed (i.e., that the owner will choose to pay someone to fix it), even during a depression? That's easy: if the neighborhood (along with that building) is run-down to begin with, already littered with broken windows. A lone broken window merits a quick repair, but if it's yet-another-broken-window, why bother? (Note here the substantive similarity to the identically named "broken windows" effect!)
So here we see the crucial, unappreciated factor: the obviousness of certain production decisions. What these thought experiments -- carefully constructed to make a different point -- actually prove is the importance of being able to confidently decide what is the best use of resources. And we can step back and see the same dynamic in very different contexts.
For example, say an unemployed guy, Joe, is trying all different kinds of things to find a job, and nothing is working. Then one day, while driving, he makes a wrong turn and steers his car off a bridge into the river below. Not good. But there is one teensy-weensy good part: it's a lot easier to prioritize! Previously, Joe didn't know what he should do to make optimal use of his time. Now, he knows exactly what he needs to work on: avoiding death from falling into a river!
And we can step back even more and generalize further: what we are seeing is but a special case of the law of diminishing returns. Abstractly, each additional unit of satisfaction requires a greater input of factors: land, labor, capital ... and thought (sometimes called "entrepreneurial ability"). Generally, the higher up the tree you have already picked the fruit, the harder it is to reach the next branch up, in terms of any factor of production, including and especially thought. Conversely, if you suddenly face a sharp drop in satisfaction by being deprived of more fundamental necessities, it becomes easier to decide what to do: replace those necessities!
***
That should give you a taste of what I think is missing from discussions of the economic impact of natural disasters and of the inability to reach full employment. In the next entry, I'll go further to illustrate how deeply this oversight impacts the ability to perform good economic analysis.
Sunday, December 11, 2011
EXCLUSIVE: Silas's bitcoin mining rig tell-all!
As part of an application I filled out recently (more on that in the future), I explained everything I went through to get my bitcoin mining rig up and running. But why bury that story in a place only one person will read it? Nay, my readers ought to hear about it as well! So, here's the story, with a photo album for each stage at the end.
***
In February 2011, I learned about Bitcoin and the feasibility of building a "mining rig" (machine that quickly computes lots of SHA256 hashes) to make money by generating bitcoins, which trade against dollars at a varying rate. Though I hadn't built a custom box before, the idea of setting up a mining rig excited me.
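(For anyone wondering what a rig actually computes: a miner repeatedly hashes a candidate block header, twice with SHA256, while varying a nonce, looking for a digest that falls below a difficulty-derived target. Here's a minimal Python sketch of that inner loop; the header bytes and target below are toy placeholder values, not real blockchain data.)

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes everything with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Placeholder 76-byte header prefix (version, previous block hash, merkle root,
# timestamp, difficulty bits) -- a real miner gets this from the network or pool.
header_prefix = b"\x01" * 76
target = 2 ** 240  # toy target; the real one is derived from the difficulty bits

def mine(header_prefix: bytes, target: int, max_nonce: int = 2 ** 20):
    """Scan nonces until a header hash falls below the target (or we give up)."""
    for nonce in range(max_nonce):
        header = header_prefix + struct.pack("<I", nonce)
        # The hash is compared as a 256-bit little-endian integer.
        digest = int.from_bytes(double_sha256(header)[::-1], "big")
        if digest < target:
            return nonce, digest
    return None

print(mine(header_prefix, target))
```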
I looked over some rig designs on the Bitcoin.org wiki and, based on those, designed a custom setup that I figured would achieve the highest rate of return (4 GPUs connected to one motherboard in a large case), then ordered the parts. Some graphics cards (the Radeon 5970) had already been bid up to unprofitability by other rig builders, so I picked a slightly slower one (the Radeon 5870) that was several times cheaper.
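(To give a flavor of the rate-of-return arithmetic that drove the design, here's a back-of-the-envelope sketch. The numbers are purely illustrative, not my actual figures; the key relationships are that expected bitcoins per day scale with your hash rate and inversely with network difficulty, and electricity is the main recurring cost.)

```python
# Rough mining-profitability arithmetic (illustrative numbers only).

def btc_per_day(hashrate_hps: float, difficulty: float, block_reward: float = 50.0) -> float:
    # On average it takes difficulty * 2**32 hashes to find a block.
    blocks_per_sec = hashrate_hps / (difficulty * 2 ** 32)
    return blocks_per_sec * 86400 * block_reward

def daily_profit(hashrate_hps, difficulty, btc_price, watts, kwh_price):
    revenue = btc_per_day(hashrate_hps, difficulty) * btc_price
    power_cost = watts / 1000 * 24 * kwh_price
    return revenue - power_cost

# e.g. four ~400 MH/s cards, with made-up early-2011-ish difficulty and price:
print(daily_profit(hashrate_hps=4 * 400e6, difficulty=100_000,
                   btc_price=1.0, watts=900, kwh_price=0.12))
```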
Over the course of putting it together, I ran into a number of problems, any one of which could have shut down my plans, but I kept trying different things until I overcame them.
First, since I hadn't built a computer from (near) scratch before, I had to learn which parts (motherboard, SSD, CPU, RAM, PSU) went where and how best to route the wires. Then, on bootup, I found the BIOS didn't see the hard drive; I traced the problem to part of the case's internal wiring that wasn't passing the SSD's SATA connection through, so I bypassed it and plugged the drive directly into the motherboard.
Then, after installing Ubuntu, I had to download the exact set of ATI drivers the mining code needed to work. It turned out the latest drivers interfered with the mining code, so I had to get an earlier version that AMD no longer promoted (or even really acknowledged the existence of). From the forums I learned that you had to enter the download URL manually, since nothing linked to it anymore. With that in place, I could mine with the first GPU, exploiting its ability to run hash calculations massively in parallel.
After configuring the GPU to send its computations to a mining pool (a group of miners who combine their computations to get a more predictable rate of solutions, i.e., hash inversions), I opened up the box again to add the second GPU. (I had decided early on to add them one by one, to make sure I understood what was going on at each stage.) Getting the two to work together introduced another problem: they would somehow keep throttling their combined hashing rate down to the level of a single GPU. This required another trip back to the forums to learn what new software to install, which still didn't work after numerous configurations, so I wrote down the whole process up to that point and re-installed the operating system. (I ended up doing this several times, as a last resort, at different stages.)
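(Why bother with a pool at all? Because a single rig finding whole blocks on its own is a rare, random event, whereas a proportional pool pays you your share of every block the pool finds, so income arrives in small but frequent pieces. A rough sketch of the difference, again with made-up numbers:)

```python
# Why pool mining smooths out income (illustrative numbers only).

def expected_days_per_block(hashrate_hps: float, difficulty: float) -> float:
    # Solo mining: block finds are rare, random events; this is just the mean wait.
    return (difficulty * 2 ** 32) / hashrate_hps / 86400

rig_hashrate = 1.6e9    # ~four 400 MH/s GPUs
pool_hashrate = 300e9   # hypothetical pool total
difficulty = 100_000
reward = 50.0           # BTC per block in 2011

solo_wait = expected_days_per_block(rig_hashrate, difficulty)
pool_wait = expected_days_per_block(pool_hashrate, difficulty)

# In a proportional pool you earn your share of each block the pool finds,
# so payouts arrive far more often, just in much smaller pieces.
print(f"solo: ~{reward} BTC every {solo_wait:.1f} days (highly variable)")
print(f"pool: ~{reward * rig_hashrate / pool_hashrate:.3f} BTC every {pool_wait:.4f} days")
```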
Once I got all 4 GPUs and a hardware monitor installed, I was able to get excellent hashing performance, but I soon noticed that, with four high-power GPUs packed so closely together, they heated up to unacceptably high temperatures, so I took two out. That solved the temperature problem, but I still wanted to be able to run all four, so I looked into better cooling solutions. (For a short while I ran three cards safely by leaving one side of the case open and pointing a box fan at the cards, though this was obviously very inconvenient and wouldn't permit safe overclocking.)
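(If you want to babysit temperatures the way I did, something like the following watchdog works. Caveat: it assumes the Catalyst-era aticonfig tool with its --odgt and --adapter=all flags, which I'm recalling from memory, and guesses at the output format, so treat it as a sketch rather than a drop-in script.)

```python
# A tiny GPU-temperature watchdog sketch.
# Assumption: `aticonfig --odgt --adapter=all` is available and prints one
# "NN.NN C"-style reading per GPU (flag names and format from memory).
import re
import subprocess
import time

TEMP_LIMIT_C = 85.0

def gpu_temps():
    out = subprocess.run(["aticonfig", "--odgt", "--adapter=all"],
                         capture_output=True, text=True).stdout
    # Pull every "NN.NN C"-style temperature reading out of the tool's output.
    return [float(t) for t in re.findall(r"(\d+\.\d+) C", out)]

while True:
    temps = gpu_temps()
    print(temps)
    if any(t > TEMP_LIMIT_C for t in temps):
        print("Too hot -- throttle, improve cooling, or shut a card down!")
    time.sleep(30)
```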
It turned out that liquid cooling was my only option, and that was something else I had never set up before. Nevertheless, I went forward and found a cooling block model (i.e., a part that replaces the OEM GPU heat sink) that would fit my cards, as well as a cooling kit (pump, radiator, reservoir, tubing). I also ordered some fittings that would connect the blocks directly to one another and so minimize the need for tubing.
When I got the cooling blocks, it turned out they didn't fit, because the particular variant of the Radeon card I was using had a non-standard PCB design (something I hadn't realized was possible before). So I sent back the cooling blocks, found ones guaranteed to match this specific design, and ordered those. Finally I was able to attach a block to each of the 4 GPUs. I then ran into another problem with the block-to-block connectors: their directions were ambiguous, I couldn't figure out how to install them, and they had to go into a tight spot, so I asked a more home-improvement-savvy friend how they worked.
I eventually got the connectors installed, but ran into another problem: because of space constraints, the tubing would require bends that were too sharp. I figured I needed a 90-degree angle fitting, but I couldn't get one at a local hardware shop because PC cooling parts all use British standard pipe threads, which are incompatible with the fittings carried in American stores. After finding a compatible one online, I realized that each day without the rig running was costing me money, and this was the only part holding it up, so I had it express-shipped to arrive the next day, which let me finish setting up the liquid cooling system.
I then had to make a few choices about which way to point the radiator's air flow and how otherwise to maximize cooling capacity. I eventually settled on a design that had the radiator draw air from the room and dump its exhaust into the case; I partially mitigated the resulting heat buildup by flipping one of the case fans to bring external air in rather than blow it out.
At this point there were fewer setbacks, but I was hesitant to circulate coolant through the system before making sure there were no leaks. So, before closing the loop and adding the coolant, I set up some leak tests: I filled the system with distilled water, left one tube open at the top, sealed the other end, and put plenty of towels around the potential leak points.
With this test configuration, I blew into the open tube to pressurize the loop. I figured that if it could withstand that pressure without leaking, I could be more confident about circulating actual coolant. Fortunately, none of the tests showed a problem, and I got the "production" liquid cooling system running.
Finally, I had all four GPUs running overclocked, generating bitcoins for me through a mining pool, and staying at temperatures significantly below what I had seen before. I further optimized performance by switching to newer mining software, experimenting with its settings, and saving a file with the commands needed to bring the rig up in its optimal configuration.
There were still some other kinks to sort out, like what to do about the immense heat it generated for the rest of my place, and how to monitor the mining pool status, but that about covers everything. Now, for the pictures:
Final result
The various stages (no captions, sorry):
Initial set-up
Putting four graphics cards in
Replacing OEM heat sinks on graphics cards with waterblocks
Installing the liquid cooling system