So there's a common kind of article that comes up a lot and annoys me: the "What every programmer should know" (or "needs to know") piece about this or that. About time, or names, or memory, or solid state drives, or floating point arithmetic, or file systems, or databases.
Here's a good sampling from Hacker News of the kinds of things someone out there insists you absolutely have to know before you can knock out "Hello, World", or otherwise meet some higher standard for being a "real programmer".
I hate these articles.
For any one of them, probably 99% of programmers don't already know something in the article, and yet they somehow manage to be productive enough that someone is paying them to produce a product of value. To me, these articles come off as, "I want to be important, so I want to make it seem like my esoteric knowledge is much more important than it really is." If taken seriously, they would heavily skew your priorities.
So here's my standard for when you're justified in saying "what every programmer needs to know about X":
Think about the tenacious, eager kid from your neighborhood. He picks things up really quickly. Once he learned to program, he started showing off all kinds of impressive projects. But he also makes a lot of rookie mistakes: ones that don't necessarily affect the output, but that would be a nightmare if he tried to scale the app up or extend its functionality. Things that you, with your experience and your hard-earned theoretical knowledge, would never do. When you explain one of them to him, he understands what you mean quickly enough, and (just barely) patches it up in future versions.
What are those shortcomings? What are the insights this kid is lacking? Which of his mistakes would you want to head off with advice as soon as possible? That's what every programmer needs to know.
The rest? Well, it can wait until your problem needs it.
Sunday, December 6, 2015
The Scylla-Charybdis Heuristic: if more X is good, why not infinite X?
(Charybdis = care-ib-dis)
A Scott Alexander post really resonated with me when he talked about one "development milestone" (#4): the ability to understand and speak in terms of tradeoffs, rather than stubbornly insisting that there are no downsides to your preferred course of action. Such "development milestones" indicate a certain maturity in one's thought process, and they greatly change how you discuss ideas.
When a Hacker News thread discussed Alexander's post, I remembered that I had since started checking for this milestone whenever I engaged with someone's advocacy. I named my spot-check the "Scylla-Charybdis Heuristic", after the metaphor of having to steer between two dangers, where veering too far from one pushes you into the other. There are several ways to phrase the core idea (beyond the one in the title):
Any model that implies X is too low should also be capable of detecting when X is too high.
Or, from the metaphor,
Don't steer further from Scylla unless you know where -- and how bad -- Charybdis is.
It is, in my opinion, a remarkably powerful test of whether you are thinking about an issue correctly: are you modeling the downsides of this course of action? Would you be capable of noticing them? Does your worldview have a method for weighing advantages against downsides? (Note: that's not just a utilitarian cost/benefit tally; it includes the relative moral weight of doing one bad thing versus another.) And it neatly extends to any issue:
- If raising the minimum wage to $15/hour is a good idea, why not $100?
- If lifting restrictions on immigration is good, why not allow the entire Chinese army to cross the border?
- If there's nothing wrong with ever-increased punishments for convicts, why not the death penalty for everything?
One indicator that you have not reached the "tradeoff milestone" is that you will focus on the absurdity of the counter-proposal, or how you didn't advocate for it: "Hey, no one's advocating that." "That's not what we're talking about." "Well, that just seems so extreme." (Extra penalty points for calling such a challenge a "straw man".)
On the other hand, if you have reached this milestone, then your response will look more like, "Well, any time you increase X, you also end up increasing Y. That Y has the effect/implication of Z. With enough Z, the supposed benefits of X disappear. I advocate that X be moved to 3.6 because it's enough to help with Q, but not so much that it forces the effects of Z." (I emphasize again that this method does not assume a material, utilitarian, "tally up the costs" approach; all of those "effects" can include "violates moral code" type effects that don't directly correspond to a material cost.)
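To make that kind of answer concrete, here is a minimal sketch in Python of the tradeoff reasoning described above. The benefit and cost curves, and the coefficients in them, are invented purely for illustration; the structural point is that a model which reports net gains at low X must also report net losses at high X, which is exactly what licenses an answer like "move X to 3.6" instead of "more X, always".

    import math

    # Toy model (illustrative numbers, not real policy values):
    def benefit(x):
        return 10 * math.log(1 + x)   # concave: each extra unit of X helps less

    def cost(x):
        return 0.5 * x ** 2           # convex: the downside Y compounds as X grows

    def net(x):
        return benefit(x) - cost(x)

    # Scan X over a range. A model that passes the Scylla-Charybdis check
    # reports gains at low X *and* losses at high X, so it has a finite optimum.
    best_net, best_x = max((net(i / 10), i / 10) for i in range(0, 101))
    print(f"optimum X ~ {best_x}, net benefit ~ {best_net:.2f}")

    # Marginal check: a positive slope means "more X still helps here";
    # a negative slope means you have overshot into Charybdis.
    for x in (0.5, best_x, 8.0):
        slope = (net(x + 1e-6) - net(x - 1e-6)) / 2e-6
        print(f"X = {x}: marginal value {slope:+.2f}")

Run it and the marginal value flips sign past the optimum (around X = 2.7 with these made-up curves): the same model that says "X = 0.5 is too low" also says "X = 8 is too high". That is the Scylla-Charybdis check in miniature.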
I've been surprised to catch some very intelligent, respected speakers failing this test on their core arguments.
What about you? Do you make it a habit to identify the tradeoffs of the decisions you make? Do you recognize the costs and downsides of the policies you advocate? Do you have a mechanism for weighing the ups and downs?