(original post: Come On, Obviously The Purpose Of A System Is Not What It Does)
…
Thanks to everyone who commented on this controversial post.
Many people argued that the phrase had some valuable insight, but disagreed on what it was. The most popular meaning was something like “if a system consistently fails at its stated purpose, but people don’t change it, consider that the stated purpose is less important than some actual, hidden purpose, at which it is succeeding”.
I agree you should consider this, but I still object to the original phrase, for several reasons.
First, although I agree you should consider it, the original phrase too strongly asserts that this is the only possible explanation. If there are three possible perspectives:
Naive: A failing system is always pursuing its stated purpose as best it can. There could never possibly be any hidden motives going on.
Paranoid: A failing system is always caused by the people at the top deliberately wanting it to fail. Without these traitors, you could always accomplish everything perfectly with no tradeoffs.
Balanced: Systems can fail for many reasons. Sometimes it’s just a hard problem with tradeoffs. Sometimes it’s been perverted from its original goal by special interests. Sometimes it’s some third thing. Usually it’s a combination of all of these. You can’t know for sure until you look at it closely.
…then I think the people who use the phrase want to imagine that they’re pushing people from Naive to Balanced. But I think the last person to hold the Naive perspective died sometime in the 1980s, and in real life POSIWID is mostly used to push people from the Balanced to the Paranoid perspective without actually looking at the system involved or arguing the case.
And second, the explanation above is just the most popular of about a half-dozen different exegeses that commenters offered. So if you do want to communicate the thing I suggested above - which, reminder, is:
If a system consistently fails at its stated purpose, but people don’t change it, consider that the stated purpose is less important than some actual, hidden purpose, at which it is succeeding
…I think you should just say that, instead of the confusing version that half of your audience will misinterpret, and which incorrectly implies that this always happens.
When people insist on the confusing and inappropriately-strong version, I start to suspect that the confusingness is a feature, letting them smuggle in connotations that people would otherwise correctly challenge.
Hopefully this will become clearer as I answer your comments one by one, starting with:
Charles Lehman writes (X):
The actually useful insight from POSIWID is the negative corollary: there is "no point in claiming that the purpose of a system is to do what it constantly fails to do."
This was the original, less-catchy form, but it seems exactly the same to me, and equally wrong.
For example, Iran’s intelligence agency consistently fails to prevent Israel from infiltrating and attacking their nuclear program. But it’s very useful to claim that their purpose is to prevent this! If we both try to predict the behavior of Iranian intelligence, and I’m allowed to use the hypothesis “their purpose is preventing Israeli infiltration”, and you’re not allowed to use that hypothesis, I will consistently outpredict you. For example, I’ll be expecting them to interview security staff to see which ones are Israeli spies, try to intercept Israeli communications, and do other espionage activities, and you’ll still be stuck wondering whether they might take up gardening or ballet dancing.
The clear, natural language expression of this useful hypothesis is “the purpose of Iran’s intelligence forces is to prevent Israeli infiltration, but they usually fail”. This clear, natural language expression is great and tells you everything that you need to know. POSIWID (or its inverse) adds nothing except an attempt to ban this excellent and communicative expressive technology, in favor of some other vague meaning of “purpose” which can’t hang together and which nobody can really explain.
Ersatz writes:
I thought the meaning was more something like “the system took these side effects into account and still considered that what it was doing was net positive in expectation, so the side effects are as much part of the system's purpose as the ‘positive’ outcomes”.
This is just diluting the word “purpose” into incoherence.
I think it’s useful linguistic technology to be able to say “The New York bus system both transports people from place to place, and emits lots of carbon dioxide. Its purpose is the transportation, and the carbon dioxide is an unfortunate side effect”.
If the goal of POSIWID is to insist that no, the transportation and the CO2 emissions are equally purposeful, or that we’re not allowed to talk about that question, then it sabotages our ability to communicate clearly, for no apparent gain.
Consider the New York carbon dioxide system, a hypothetical government department dedicated to emitting as much carbon dioxide as possible (why? to own the libs, of course!). Sometimes people drive motorcycles along its vast network of CO2 pipes, allowing them to travel from place to place. Is this exactly the same as the New York bus system? No? Why not? A natural answer would be “Because the bus system is aimed at transportation, but emits some CO2 as a minor side effect, and the CO2 system is aimed at emission, but facilitates transport as a minor side effect.”
If your objection is going to be that, instead of considering purpose, you should restrict yourself to saying that one transports more people than it emits CO2, and the other emits more CO2 than it helps transport people, you’ve now legislated that we must list exactly how many people the New York bus system transports, and how much carbon dioxide it emits, and how many ants it crushes, etc, every time we talk about it. But most people who talk about the bus system don’t know these statistics, and don’t have to. It’s sufficient (and linguistically convenient) to say “Its purpose is transportation, not CO2 emission or ant-crushing.”
Andrew Pearson writes:
The steelman of the phrase, which might be what Stafford Beer had in mind when he coined the phrase, is that in large complex organisations it's very hard for individual workers to see a link between what they do and the stated purpose of the organisation - people just get on and do what they're told, and orient themselves more towards "doing what they're already doing, just more effectively" than towards "fulfilling the stated purpose of the organisation".
I don’t think this is a great steelman. True, the average employee’s actions don’t obviously connect to the organization’s stated purpose. But the average employee’s actions also don’t obviously connect to what the organization does.
If the stated purpose of the US military is to protect America, and what it does is bomb Middle Eastern weddings, both of those are equally remote from the day-to-day life of some low-level employee who just counts the number of screws in a warehouse.
Aashish Reddy writes:
I think POSIWID is best applied to bureaucracies or large structures where the reason bad outcomes occur is not because of difficult battles with reality (like government or hospitals or the Ukrainian military), but because of the way incentives are set up in the system.
If someone said, “the purpose of the Civil Service is to drive through new, innovative ways of delivering rapid change!”, that would clearly be absurd. That may be their goal, or how they see themselves; but the purpose of the system is not defined by either of those things. If it was, they wouldn’t incentivise caution and slowness. Whether that’s good or not, the purpose of the Civil Service is best approximated by what it does!
I want to pay more attention to the word “goal” in the second sentence of the second paragraph:
“[Driving innovative change] may be their goal, or how they see themselves.”
It sounds like Aashish thinks it’s useful to use the word “goal” to discuss what a system is trying to do, separately from what it does or doesn’t accomplish. I agree! I just think “purpose” is a synonym for “goal”.
If you use POSIWID, you have to posit some kind of weird new ontology where “purpose” means the opposite of “goal”. If you don’t use POSIWID, you can just keep the words “purpose” and “goal” having their regular everyday meaning, and describe this state of affairs with phrases like “The goal/purpose of the Civil Service is to deliver rapid change, but due to perverse incentives, its actual effect is to prevent change.”
Kay writes:
For example, the US policing and criminal justice system and prisons seem to continually over imprisons people in general, and especially people of colour. These systems seem not to be changed, while they ostensibly truly could be. So to me this seems that the system may be working "to purpose" for the actors who want it to work that way.
This is interesting because Kay is using a left-wing example: “The criminal justice system imprisons too many people of color, so its purpose is to oppress black people”.
But one of the tweets in my original post was close to its right-wing opposite: “The criminal justice system consistently lets criminals off with a slap on the wrist, so its purpose is to get people raped and murdered.”
As long as people are thinking in these terms, they’re going to be prey for whichever conspiracy theory best suits their pre-existing prejudices. I think both of these are worse than the Balanced View version:
Some people care a lot about keeping people safe from crime, but other people care a lot about the human rights of suspects and convicts. The incarceration rate is a balance between these two forces, with some lesser contribution from sinister forces like private prison owners who want to increase incarceration to line their pockets.
This has the advantage of being obviously true (there are pro-tough-on-crime activist groups and pro-soft-on-crime activist groups, they both do effective activism for their chosen cause, and the exact incarceration rate depends on which ones are in the ascendant and have the ear of politicians), and not being a conspiracy theory that forces you to believe that the government is a monolithic entity that really wants people to be raped and murdered.
NegatingSilence writes:
Come on, obviously "The Purpose of a System is What it Does" is meant to draw your attention to the incentives in cases where something different is happening than is supposed to be happening.
My government purposefully raised housing prices for the benefit of people who own assets. They talk about "trying" to make things more affordable, but somehow they only succeeded in raising the cost by 500% in nominal terms. What a curious result.
I don’t know what country this person is in. But in my country, housing prices are high because of a combination of all of the following:
1. Citizens want to preserve “neighborhood character”: they currently live in a low-density low-crime low-traffic pretty suburb, they want it to remain a low-density low-crime low-traffic pretty suburb, and they worry that building new homes threatens that status.
2. Environmentalists want to preserve natural environments, so they make it illegal to build houses on unoccupied land.
3. Leftists want to prevent gentrification, so they thwart any new housing that rich people might move into.
4. Economically illiterate people think that market-rate housing somehow makes all other housing less affordable, or have some sense that anything which is good for “greedy” developers must be bad for the average person, so they’re against market-rate housing.
5. Investors bid up the price of houses for complicated non-market reasons (e.g. Chinese people looking for assets to store wealth outside of China), then don’t rent them out.
6. Homeowners want to preserve or increase the value of their houses.
Of these, I think 6 is one of the less important ones - if this were the dominating factor, people would support upzoning, since it usually raises the value of properties in the upzone (if developers can build skyscrapers on your land, then your land value rises in proportion to the profitability of skyscrapers). But part of the problem is that people don’t support upzoning. So 6 can’t be the dominating factor.
Without POSIWID, people could think about all of these possibilities and come to their own conclusions. POSIWID tries to ban thinking about 1-5 by fiat, insisting that 6 is the only possible explanation and anyone considering the others is naive. I think this makes it a bad heuristic.
But there are two more concerning things about how Negating is using POSIWID.
First, he’s picking out one particularly salient thing the system does (raise house prices) and claiming that’s “the” purpose. He could equally well pick any of the other results - preserve neighborhood character, protect the environment, help Chinese people escape currency controls. Like I said in the original post, in practice POSIWID serves as justification for paranoia - whatever effect you like least, whatever possibility would be most sinister - that’s the one that the system is intentionally aiming for.
Second, he’s saying it’s the purpose of “the” system. Which system? I bet whatever government he’s talking about has some organization called the Affordable Housing Bureau, or whatever. And I bet that the Affordable Housing Bureau really does make housing slightly more affordable, relative to the counterfactual where it doesn’t exist. It’s just that lots of other government, market, and social forces conspire to make it much less affordable. If Negating were to claim “The purpose of the Affordable Housing Bureau is to make housing less affordable”, this would be false even if the overall picture (the government is deliberately raising real estate prices) were true.
Brad writes:
I have to toss in Pournelle's Iron Law. The purpose of a system - when it is first established - may be dramatically different from the purpose it assumes after a few years.
Consider: You establish a system to solve a problem. That could be homelessness, or asylum, or drug abuse, or any of a number of other things. This system employs people, who then have an automatic interest - not in solving the problem - but in prolonging it, even in making it worse. After all, without the problem, the organization would not need to exist.
And hwold writes:
I see it used as "if you have a complex system/bureaucracy to solve X, then the incentives inside it is for X to get worse, and incentives will not have 0 influence on outcomes" For example : https://x.com/Devon_Eriksen_/status/1906042672499864034
I think this sounds profound at first glance, and it’s probably true in some cases. But it’s not nearly true enough to be an Iron Law. Try to think about it in specific Near Mode cases:
If you eliminated police, would crime go down, because the police have an incentive to preserve crime?
If you eliminated the fire department, would fires go down, because the fire department has an incentive to preserve fire?
If you eliminated doctors, would cancer deaths go down, because doctors have an incentive to preserve cancer deaths?
If you eliminated the FDA, would dangerous drug side effects go down, because the FDA has an incentive to preserve dangerous drug side effects?
If you eliminated the Federal Reserve, would bank runs go down, because the Federal Reserve has an incentive to preserve bank runs?
Brad’s original comment mentions homelessness and drug abuse, but I know some drug abuse doctors, and they’re (mostly) good people who do their best in a tough situation. Drug abuse doesn’t continue because drug abuse doctors are secretly ensuring it continues to help their bottom line. Drug abuse continues because fentanyl is really, really addictive.
Even good conspiracy theories don’t work like this. Was there a conspiracy among pain pill manufacturers to addict people? Yeah, kinda, although I think the degree to which this caused the opioid crisis is pretty overblown. But the pain pill manufacturers weren’t a system dedicated to preventing addiction. They did their job (reduce pain) fine, then ran an unrelated evil conspiracy on the side!
Breb writes:
This way of thinking may result from taking a strategy for predicting the motives of individuals, and using it to predict the motives of organisations. "Cui bono?" works when you're considering a single action carried out by a single person at a single moment in time, but it doesn't really work when you're considering the behaviour of hundreds of people who are incentivised to somewhat-but-not-perfectly cooperate over a long period to somewhat-but-not-perfectly implement a goal that was established by someone who somewhat-but-not-perfectly understands that that goal is just an instrument to attain a larger, more complex goal set by somebody else.
I’m against this for individuals too!
There are a million self-help gurus who try to convince you that if you procrastinate - let’s say you always do term papers the night before and get terrible grades and it’s threatening your ability to complete college - then it must be because this secretly benefits you in some way. Maybe your overly-strict father wants you to complete college, and you’re deliberately trying to fail as a secret act of rebellion against him hidden even from yourself. Although something like this might sometimes be true, more often a clearer understanding of the circuitry involved (in this case, hyperbolic discounting) saves you from these labyrinths and lets you think about things straightforwardly again.
Tom J writes:
In the original Stafford Beer sense, the slogan POSIWID means that you can't tell from outside the system whether any given behaviour was *intended* or not. For the purposes of objective analysis, you have to treat your system as a black box that *does* whatever it's observed to do, as opposed to what people *claim* the point of the system is.
This may be true in cybernetics. Or it may be an interesting methodological commitment, in the same way that the behaviorists’ “assume there is no such thing as human interiority” was an interesting methodological commitment. But I don’t think it’s common or valuable in normal-life analysis of social systems.
When Biden bans NVIDIA from sending advanced chips to China, black box analysis would have to be ambivalent between explanations like:
1. Biden personally hates Jensen Huang and wants his company to suffer.
2. Biden thinks NVIDIA produces bad chips and wants to save China from buying inferior products.
3. Biden wants to incentivize China to manufacture their own chips.
4. Biden wants to slow down Chinese AI.
5. Biden wants to slow down Chinese cryptocurrency mining.
6. Biden is angry at China over something else (the Uighurs?) and wants to punish them.
7. Biden supports Xi’s campaign to prevent Chinese people from getting addicted to video games, and wants to keep video-game-enabling GPUs out of the country.
…and design experiments to distinguish between these, or wait for more chip sanctions to see how they pan out.
But in real life, we can be very sure some of these (like 2 and 7) weren’t intended, and others (like 4) were. Why? Some combination of trusting Biden’s stated goals, psychoanalyzing Biden’s plausible goals, checking who lobbied Biden to do this, and reading enough international relations journals to get a sense of what policymakers are thinking about.
I think it’s fine to do black box systems analysis, just like it’s fine to do behaviorism. But we should view these as methodological commitments for a specific group, rather than good strategies for normal people.
Jared Peterson (blog) writes:
This originally struck me as rather silly and as an obvious misinterpretation of an idea that has nothing to do with human intentions...then I read the comments and saw many people claiming exactly that!
Donella Meadows is an important figure in the field of Systems Thinking, and says by definition (whether human designed or not), systems have a purpose.
"A system’s function or purpose is not necessarily spoken, written, or expressed explicitly, except through the operation of the system. The best way to deduce the system’s purpose is to watch for a while to see how the system behaves. Purposes are deduced from behavior, not from rhetoric or stated goals”
One way to think about this is that Meadows would be OK talking about Moloch's purpose as something coherent. Is changing the climate the purpose of modern capitalism? In one sense, no. But simultaneously, it is perfectly coherent to talk about the system as having that exact purpose because the system seems to work towards that goal. Even if you push against the system, the system seems to adapt and continue with that goal anyways. There is something almost intelligent about systems where they seem to work towards goals that no one ever intended.
But the phrase isn't about human goals at all!
Oh! I agree this makes sense if you need to talk about the “purpose” of an un-designed system with no humans in it.
Moonshadow writes:
This sentiment is grasping towards the same sort of place as your "Meditations on Moloch" essay.
No-one involved in the system wants what the system actually ends up doing. But whatever their individual intents, /the system as a whole/, if allowed to grow naturally, inevitably ends up doing what Moloch wants.
Of course the purpose we intended for the system isn't really that, any more than Moloch really exists. But you can't begin the meta level fight - of designing the system's high level organisational structures and incentives to try to reduce this effect, instead of letting it emerge organically like it always does - unless you first admit the problem.
I agree this is a useful thing to talk about, I just don’t think “purpose” is the right word for it. I’m not even sure “system” is the right word for it.
A good example of Moloch would be two countries having a nuclear arms race. But how is this POSIWID? The purpose of the . . . system of two countries . . . is to . . . have a nuclear arms race? This is pretty different from how I usually hear it used.
Ajb writes:
POSIWID was not originally an antagonistic political snark. It's perfectly sensible to notice that a system may be fulfilling other purposes than it does officially, and this is not incompatible with it operating in good faith. You can think of it as a bit like Chesterton's fence:
* to reform a system you should understand what purposes it fulfills, not just what it is officially supposed to do
* These additional or alternative purposes may in fact be desirable ones that you should avoid breaking.
Cybernetics (where the phrase originated) drew a lot of inspiration from biology, and there obviously nothing has an 'official purpose' at all. But it nevertheless has organisation and is functional.
Rob writes:
The problem with quoting aphorisms like this is that it misses the context - specifically the context of a management consultant (viz. Stafford Beer) who spends his entire life being told about systems his clients have put in place, with some stated purpose in mind. Those systems do not achieve their stated purposes, but can be continually defended against change by re-stating the purpose - this shouldn't work, but in practice it often does, because most people aren't great at decoupling intent from outcome. "The purpose of a system is what it does" is a good rhetorical counter, because it acknowledges that, in practice, any continuation of a system with known outcomes is a tacit acceptance of those outcomes as the system's real purpose. You don't get to claim some other "real" purpose once you know what the outcomes are.
My interpretation has always been in the spirit of this tweet: https://x.com/primawesome/status/1178671690261286918?lang=en
> My neighbor told me coyotes keep eating his outdoor cats so I asked how many cats he has and he said he just goes to the shelter and gets a new cat afterwards so I said it sounds like he’s just feeding shelter cats to coyotes and then his daughter started crying.
I agree this makes more sense in the context of some supposed person claiming that “the system has good intentions” means they should never have to change the system. I don’t think I really see this failure mode.
I bet a lot of you are going to yell at me and say that, I don’t know, homelessness or something is like this. But defenders of the current homelessness system never say you can’t change it because it had good intentions when it started. I predict they would say that their own group is doing good work, and it’s everyone else who needs to change. Or that the current system works a little and just needs to be funded more. Or that the current system is better than nothing, and your proposed attempt to “change” it is secretly a plan to gut it and leave homeless people without help.
I definitely don’t think they’d say “Yes, your proposed change would improve the system, but you’re not allowed to make it because the people who designed the current system had good intentions”.
Leah Libresco Sargeant writes:
I think the Catholic principle of double effect is helpful here. This often comes up in the case of eg delivering a baby pre-viability because the mom has an infection that will progress to sepsis and death if she and the baby aren’t separated.
The three criteria are:
1. the nature of the act is itself good, or at least morally neutral;
2. the agent intends the good effect and does not intend the bad effect, either as a means to the good or as an end in itself;
3. the good effect outweighs the bad effect in circumstances sufficiently grave to justify causing the bad effect, and the agent exercises due diligence to minimize the harm.
And I think it’s the second that’s most relevant to POSIWID. If the system could switch to doing the good without the bad, would it happily make the switch?
For the cancer hospital: yes!
For NEPA: I think not.
This is an interesting test, thanks. My only concern is that “if the system could switch” is kind of meaningless. When we ask whether the purpose of a charity is to help the poor, or just to give high salaries to its CEO, the test urges us to ask “If it could switch to helping the poor just as much without paying its CEO anything, would it do that?” What if the answer is “the board, low-level staff, and donors would support this, but the CEO wouldn’t,” and the charity’s actions come from compromises negotiated among these groups? What is the purpose of the charity then?
TimG writes:
I've seen reports (don't know how true) that NGOs in San Fran get paid a lot of money to solve homelessness. But after billions spent, homelessness is worse.
I thought this saying was a kinda reference to that sort of thing: the NGOs are there to collect money by virtue of the fact that there are homeless. Which is not what they are purported to do.
My understanding of the situation is that there are many groups.
Some are traditional anti-homelessness groups that try to build homeless shelters or something.
Others are homeless-rights-advocacy groups that try to prevent the police from doing things which they think violate homeless people’s rights, like forcing them to go to shelters.
It’s true that these two purposes are at odds, and that this conflict prolongs homelessness in San Francisco.
But I think that thinking of this as a “system” whose “purpose” is to preserve homelessness (because systems actually act in ways contradictory to their goals) makes you less able to understand the dynamics, not more!
The build-shelter groups are mostly building shelters! The fight-against-shelters groups are mostly fighting against shelters! Both of them are doing what they claimed to do, and it’s all canceling out. The more you are tempted to think of [the set of both these groups] as a single “system” fulfilling a single “purpose”, the more confused you’re making yourself.
Brett writes:
I've always thought of the phrase as an argument against the "no true Scotsman" fallacy when it's used in an organisational setting. When there are significant failings of an organisation, the response (within the organisation) can sometimes be: "there are some bad apples working against the purpose of our system: our system is not supposed to do this and the failings are due to individuals and not the system itself". POSIWID then is applicable: you can't claim a system "isn't supposed to" do something, if it's repeatedly doing it on a large enough scale.
I don’t think this works.
Often failures are because of incompetent individuals. For example, one reason that UK intelligence agencies did such a bad job fighting Communism in the ‘40s and ‘50s was that lots of their staff, including some leaders, were Soviet spies. When those people were replaced, results improved! And there are plenty of stories of companies that turn around once a few bad executives get fired and replaced (eg Apple after Jobs came back).
So why would we want a phrase saying that the failure of systems is never because of incompetent individuals?
It makes the most sense if you don't take it as having anything to do with intentions.
The truth at which it gestures is "This system can be relied upon to consistently produce this outcome, just as if it were designed to do so."
The point is to suggest that the "unintended side effects" are a direct result of the "rules" of the system, intentionally so or not, and therefore you can't ignore them as one-off incidents, or hope a minor patch will fix it. The system needs to be abolished, or else given a complete overhaul.
Obviously the ambiguous phrasing also allows you to assign insanely hostile and nonsensical motives to the outgroup. I would like to think this was not the intention of the people who came up with the phrase, but whether it is or not, it can be relied upon to consistently produce that outcome.
I agree this is one of many possible meanings it could have, and there are much better ways to phrase it.
Joost de Wit writes:
I’d say the hospital is precisely designed to cure 66% of people because it operates within constraints (financial, #doctors, approved meds). A “system” designed to cure let’s say 99% of people would look wholly different.
I have occasionally been a low-level representative in hospital administration meetings. I’m trying to think of what suggestions I could give to “redesign” the hospital to cure 99% of people. “Hey, guys, have you considered having more money?” I guarantee the hospital has considered this. The reason they don’t have more money is that insurance companies won’t pay more for care and donors won’t donate more.
Maybe you could bring it up a level, to the US health care system as a whole? But insofar as anyone is in charge here (maybe the Secretary of Health and Human Services), I guarantee that person has also considered getting more money. The reason they don’t have more money is that Congress and the President set their budget and balance it off against their other priorities.
Maybe the system is America as a whole? In this case yeah, you could imagine an America redesigned completely around cancer care, where there are sky-high taxes and all the money goes to cancer hospitals, so much so that bridges collapse and the military can’t defend the country anymore because we’re spending all the money on hospitals. But what does it mean to have a “systems analysis” principle which is incapable of accurately analyzing any system smaller than the whole country?
Also, shouldn’t we expect a good theory to yield true predictions? My theory is that cancer hospitals want to cure as many patients as possible (given other constraints). If I recommended them a new policy that would increase their cure rate, they might worry about cost or hassle - but if it were low-cost and low-hassle, they’d eventually implement it. But if you recommended a new policy that brought them closer to 66% (“We’re on track to rise to 70% next year, but if we get Dr. Smith to relapse back into alcoholism, we can go back to 66%!”) they would call you insane and fire you immediately and definitely not agree.
Since “make cure rates as high as possible” accurately predicts the hospital’s behavior, but “keep cure rates at exactly 66%” doesn’t, why would you describe the second one as the “purpose”? What use is it to accuse them of having a “purpose” which they will never take any action to achieve?
But also, what even are we doing here? In real life, nobody says things like “the purpose of a cancer hospital is to keep cure rates at 66%”. Why are people defending this inane statement so hard? This reminds me of the old atheism-religion debates, where some atheist would bring up an awkwardly-phrased Bible statement, and the religious people would contort themselves to say that nooooooo, it’s totally true that the world was created in seven days, as long as you define day to mean “any time period of an indeterminate length”. But at least their motives make sense to me; lots of other things depend on whether Bible verses are true or false. POSIWID was only coined in 2001 - why should people contort themselves to defend this extremely poorly-phrased thing?
In this comment thread, people have claimed that the real meaning of POSIWID is:
Chesterton’s Fence
Moloch
Alienation of labor
Pournelle’s Iron Law of Bureaucracy
People follow incentive gradients
If nothing is changed, things will stay the same
If a system keeps going despite side effects, it’s okay with those side effects
If a system has side effects, those side effects are secretly the whole point
It’s about machines and was never intended to apply to social systems
These are pretty different things! So I continue to think that, if you like one of them, you should consider the possibility that this phrase isn’t a clear way to communicate the thing you like.