Why can’t we skip all these tedious intermediate steps and just succeed already?

Suppose you had a discussion like this:

A: Choristers are terrible! They keep singing things all the time, and it gets on my nerves!
B: Have you tried earplugs?
A: Of course! They’re worthless! Uncomfortable, constantly in need of replacement, hardly block the awful singing but do somehow always make me miss important phone calls…
B: Okay, not that then. Have you tried asking them to stop?
A: Only every day for the last eternity. Why won’t they stop? Argh.
B: Maybe try asking them to sing something different, that you’ll like?
A: I don’t want them to sing something else, I want them to stop.
B: Or maybe you could offer to help them find a more soundproof room somewhere?
A: Why should I help them?! They’re torturing me! Why can’t they just stop doing it?
B: Perhaps some kind of rotating schedule, so you can be elsewhere when they sing…
A: Argh! No! They need to just not do it!

We could definitely accuse B of being unsympathetic. But A is also being unsympathetic, and so are the choristers, and it doesn’t really matter anyway. The point is that B is trying to be pragmatic – find a workable solution that makes A less unhappy. But A doesn’t seem all that interested in the workarounds – their only plan is to hope for the simpler solution of everyone abiding by A’s own preferences.

Let’s briefly consider some real-world examples:

And so on through a hundred other tedious culture-wars-by-proxy, “why can’t people just diet and stick to it,” “why can’t people just have more feminist sexual preferences,” “why can’t people just get jobs,”… All different in their exact causes but all containing a trace of the same error. Now that everyone is at least a little bit angry and considering leaving a comment about how their pet issue is totally different (hey! Just like mine!), we can move on.

Hopefully the idea is now clear. Someone has some extremely precious value like pro-choice, free speech, having guns, etc. That value gets questioned by other people who have different values. The person wishes other people would stop doing that. The problem is that, no matter how important it is (to the requester) that the value be respected, it’s not enough to make people actually do it. And emphasising that importance by repeated injunction does nothing.

Which is to say, there’s a tendency to object to a proposed solution by saying “but the real problem is that people are causing a problem. People just need to stop doing that.” Essentially, asking for people to change in an unlikely way becomes a substitute for discussing the proposed solution at a deeper level – gaining an understanding of why it’s not satisfactory, which could then be used to refine the solution, and so on.

I’ve made this mistake over and over again, on issues from environmentalism to electoral reform to foreign policy. It’s ludicrously hard to debate ideas without ever asking for the impossible. It could be seen as a kind of fallacy of perfectionism, but I prefer to think of it as its own thing, a kind of cognitive failure mode based around the fear that one’s values won’t be respected and the tendency to stop looking for a solution once someone else can be blamed.

The objection is obvious: but isn’t asking for less “asking for other people to change in unlikely ways” asking for people to change in an unlikely way? Yes, it kind of is. Therefore, here are some proposed practical workarounds:

  • Express the sentiment as “just to check, we agree that it would be best if … ?” – The aim here is to placate the part of the mind that is worried that the other participants won’t respect your highly regarded value.
  • Emphasise not wanting to be dragged off-topic when mentioning that it would be nice if whatever optimal path could be taken instead of compromising. This seems prone to failure: no amount of “let’s not get off topic, but…” has ever prevented a discussion from going off-topic.
  • Resist the temptation to respond to “why can’t X just V?” with disagreement about whether it would be good if X just did V. It is sometimes possible to find a way to express the idea that the principle is sound but an unhelpful way of looking at the original question; but if not, you’re usually allowed to just drop the line of discussion.
  • Ignore the discussions themselves. Then, write a long meta-level rant on your blog about it. This solves the problem forever.

Utter Seriousness

Epistemic status: trying to talk about things that actively defy being talked about. Largely pointless. Occasionally descends into nonsensical prose for no reason.

1.
A basilisk is a fearsome serpent hatched from a toad’s egg, praise Kek, incubated by a cockerel. It possesses potent venom and, critically, the ability to kill those who look at it. The idea was brilliantly used in the famous short story BLIT for a deadly fractal image. A basilisk, then, refers to a particular type of antimeme: the kind that kills those who perceive it, thereby preventing its own perpetuation. There are others.

Post-Truth and Fake News have become the defining political issues on my mind lately, which is either pretty impressive given the circumstances or completely predictable given the zeitgeist. And indeed the world is noting the significance of the crumbling of the possibility of genuine discussion, as right and left retreat into worlds not merely of separate ideals but of separate facts. TUOC writes:

I bet there are a lot of people who read r/the_donald and have a vague impression that refugees committed six murders in Canada last night, a vague impression which will stack with other similarly unverified vague impressions and leave them convinced there’s an epidemic of refugee violence. I have no idea what to do about that, and it terrifies me.

As it turns out, there was a popular thread there about the true identity of the shooter. But note how none of the details are in the thread title – the memorable point will still be “uhh, terrorism’s sure rampant with all these refugees, isn’t it?” And also note this story in which the Orange Man himself joins in on the action. Now, it certainly seems like he was talking about some kind of event in Sweden on Friday 17th. But his fans quickly accepted the alternative interpretation he gave, that he was talking about a Fox News report about Sweden. And then proceeded to claim that it’s everyone else who’s just buying into a narrative. And kept the vague impression that there’s terror and crime in Sweden beyond all proportion to what was actually the case at the time of the statement (retrocausality being almost certainly impossible). Or consider this discussion which takes a look at exactly the same thing from the other political side.

James Hitchcock also weighs in:

A less-discussed innovation of modern politics is the collapse of earnestness in public discourse. Sarcastic and ironic modes of conversation have sprouted like fungi wherever political discussion occurs – in political speech, formal journalism, social media formats, and on online content aggregators such as Reddit and Tumblr. This mode of discourse provides lazy, comfortable white noise as a backdrop to political discussion, a rhetorical style that can be genuinely funny but that masks a lack of faith in one’s words. Moreover, it deprecates sincerity as a value worth striving for while engaging others.

Anderson and Horvath discuss one of the purveyors of antifactualism in depth here, saying:

In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them.

This widely-circulated post gives another good breakdown of the phenomenon, although I don’t know if it needs to be attributed to enemy action. This article discusses the notion under the name “the big joke.”

This is simply how modern ‘media’ works. People can’t maintain a cognitive network in which they keep track of what each source is saying, which people in their less immediate social-media circles can be expected to pursue true beliefs, which of the myriad links they need to follow to learn more, and when they can safely trust someone’s summary. So people end up with vague impressions, ghosts of maps.

2.
Yudkowsky wrote on thought-terminating clichés in straightforward terms. Alexander wrote about the “bingo card” as part of a larger-scale discussion. The former is the negative sense, a thing that stops your thinking when you encounter it; the latter is the positive sense, a thing to which other ideas are drawn and approximated. But in both paradigms, a mind adds on a structure that automatically resists attempts to modify that structure.

Consider, then, this comment suggesting that commentators who “will always wrap up their counterpoints in lengthy and citation-heavy word salads designed to give an impression of intellectual honesty” are malevolent, or this alt-right meme creating the impression that arguers who acknowledge complexities of positions are laughable. If you’re imagining a bingo card with squares like “But I Have Evidence” and “Is Polite and Acts Reasonably”, well. Bingo. With such a mentality becoming commonplace, discussion can become utterly impossible rather than merely “urgh talking to $OUTGROUP is impossible”-impossible.

But then consider, in juxtaposition, the notion of the thought-terminating cliché. What if you put up stop-signs around the act of thinking about things in the evidence-based, polite-and-reasonable fashion? What if noticing yourself taking any foreign idea seriously were cause to shut down inquiry, in the same way that noticing you’re questioning the sacred or the taboo is?

The idea of doublethink goes back at least as far as the 4th century BCE, when the tenets of Buddhism were first laid down. In typical meditative procedure, the practitioner attempts to dismiss their distracting thoughts as they form, eventually becoming proficient enough to be free from onerous mental diversion, which, it is held, is the root of all dukkha (like ‘suffering,’ but much less so). The goal is noble enough, and the technique actually quite useful, but it reveals an important secret of the human mind: it is possible, with training and practice, to go from avoiding pursuing thoughts to avoiding thinking them at all. This has some implications for the nature of the conscious mind which I feel have not been fully explored by the non-reductionist crowd, but that is a different discussion entirely.

(my apologies for brutally over-simplifying this practice. It is meant to be illustrative of an idea, not dismissive of a religion)

Of course, when people hear “doublethink” they don’t think of ancient religious practice, but rather the comparatively very recent 1984. Wikipedia quotes Orwell describing it as:

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

In Orwell’s fiction, when The Party demands doublethink, it is supposed to be demanding the impossible – an illustration of how the state is all too happy to make everyone a criminal and then selectively enforce the law against those it dislikes, as well as the particular anti-truth brand of impossible to which the Party adheres. However, the real doublethink is a simpler thing, something the brain is perfectly capable of doing – as has been known since antiquity. It is merely one more stone in a bridge to post-truth.

3.
Edit: This is by far the most contentious section, perhaps unsurprisingly. However, it’s also quite tangential – skipping ahead to the end is entirely reasonable. There’s also a rather more productive discussion in the comments!
So let’s talk about “postmodernism,” by which I mean “the thing referred to commonly as postmodernism” rather than postmodernism itself (for a good discussion of the distinction, see this thread. OP is a bit smug and wrong, but the overall discussion is good). But surely no one takes it truly seriously any more? Isn’t it just a funny game that humanities sophists use to amuse themselves? Didn’t Sokal prove that or something?

I used to joke about Virtue Mathematics, by analogy to and as a criticism of Virtue Ethics. “Mathematics is simple!” I would say. “Just stop dragging up all these crazy notions of ‘proof’ and ‘axioms’ and ‘formalism’ and simply accept the conjectures that the Clever Mathematician would accept! True understanding, the kind that actually matters in day-to-day life, has nothing to do with carefully-constructed theoretical quandaries, and mostly comes down to intuition, so obviously intuition is the true root of all mathematics!” This struck me as quite funny, though it’s more mockery than real criticism. But are there really people who take this attitude and who can be taken seriously?

Jordan Peterson talks about “Maps of Meaning.” David Chapman talks about “Meaningness.” I am almost convinced that they are talking about the same hard-to-grasp thing. But I am also almost convinced that the thing they’re talking about is simply their own confusion.

In Peterson’s case it’s hard to directly quote him, as he has a habit of wandering off on huge tangents that will provide context for important statements – talking about zero to talk about trading to talk about Monopoly to talk about Pareto distributions to talk about Communism to talk about the USSR to talk about growing up in the 80s, in order to give the life-story context to a discussion of… well, I’m not sure, he didn’t really specify. Nonetheless we will make an effort.

He will say things like “I realised that a belief system is a set of moral guidelines; guidelines of how you should behave and how you should perceive.” This seems like word salad on the face of it, but maybe we can drain off some of the overabundant dressing and fish some tasty radish or cucumber out of the mass of soggy lettuce and bewildered mushrooms.

Well, undeniably some belief systems include moral guidelines on how you should act. That much seems, well, trivial? That is, that can’t possibly be a realisation. No, the position being sought here is that all belief systems are, contain, or break down to rules about how you should perceive the world. The fallacy of grey leaps to mind. Even if this were true, it would not be even slightly useful for determining which among the many belief systems is the most appropriate to adopt, given the goals you wish to achieve. It flattens the distinctions between belief systems, claiming that since scientific belief systems also guide how you should perceive, they’re not any better than any old random belief system you found in your grandfather’s attic.

In fact, his whole style is described as “immunising [the audience] from a totalitarian mindset.” Sounds lovely? Think back to the cognitive lobster-pot of previous paragraphs, the bingo-card thought-replacement. What is a totalitarian mindset, according to Peterson? Well, one example would be supporting laws against hate speech, of any form. Now, we can disagree about where exactly is the best place to put the boundaries of free speech. That can be a productive discussion. But when one side is screaming that anything less than total adherence to their absolutist position makes you the same as Stalin, that discussion evaporates.

He will also say things like “[…] so when everyone believes this, it becomes true in a sense.” This is referring to things like contracts, where indeed the truth is (at least partly) determined by what people’s beliefs are. But in that case, he’s not really saying anything. Money only has value insofar as we agree it does? Well, yes. I thought this was supposed to be important new information?

One notes a similarity to Dennett’s notion of the “deepity” – a statement that can be read as either true, but trivial; or deep, but false. “Reality is nebulous” – true if we’re talking about lack of sharp category distinctions, but then hardly a great insight, nor one that requires you to go beyond rationality. Deep if used to mean “there is no universal lawfulness,” but then entirely false. If there is one habit of the metamodernist that gets to me, it is the insistence that rationality can’t explain everything, so it must be incomplete/wrong/broken.

Chapman writes:

The exaggerated claims of ideological rationality are obviously and undeniably false, and are predictably harmful—just as with all eternalism. Yet they are so attractive—to a certain sort of person—that they are also irresistible.

Really? Because I’ve never met such a person or seen him present any examples, and yet his general tone seems to indicate that he thinks the reader is such a person. Yes, calling your readers’ approaches to cognition obviously, undeniably wrong and predictably harmful sounds like a great way to get them on your side. A++ implementation of meta-systemic pseudo-reasoning. But regardless, the reason such claims are exaggerated, obviously false etc is that no one is making them.

Essentially the problem with the meta-rationality, post-truth, prefix-word memeplex is that it explicitly demands non-thinking. Thinking is part of the wrong system, the dreaded Eternalist Ideological Rationality. Scott Alexander has discussed this kind of trap twice to my knowledge, once in a review, once in fiction – both theologically rather than meta-epistemologically, but the mechanism of the trap is the same regardless of the substance of which the teeth are made. The variant here is that whenever you think about metarationality using regular rationality, you’re already wrong by virtue even of trying – the same as trying to repent for the sake of avoiding Hell. You’re expected to already be on the “right” level, in order to understand the thoughts that justify why it’s the right level. Hence “presuppositionalism.”

Chapman does us the favour of writing directly:

Meta-systematic cognition is reasoning about, and acting on, systems from outside them, without using a system to do so.

Once you accept that something can’t be attacked by reason, or meta-reason, or anything anywhere up the chain (systems), it becomes completely immune to all criticism forever. You might say that it’s still vulnerable to criticisms made in the right way – on the right non-systematizing level – but the fact is you will very conveniently never come across any criticisms on that level. You will, weirdly, only ever encounter people trying to critique from “within the system.” Poor dears! They don’t know how helplessly stuck they are, how deeply mired in the Ideology of Rationality!

This essay isn’t meant to persuade people to come down from the tower of counterthought, of course: they are beyond the power of articulate reason to reach. They have rejected the implications of incompleteness proofs, preferring the idea that they are somehow above the chain of total regression – the Abyss of accepting that not being an anti-inductionist is okay, really, that reasoning about your reasonability using your reason is the only option, and that that’s fine. Arguing with postmodernists is for giving yourself a headache, not for having fun or seeking truth. Likewise, the relation is mirrored: someone genuinely convinced of the merit of the object level (rather than merely operating there by default) will not be seduced by the appeal of meta-level 2deep4u-ing.

The emergence of post-rationality/post-truth/post-systemism/etc is the final triumph of what we might call Irony. The iron-clad position of ultimate immunity to everything, the ferrous dark tower, the pin against which the world must be turned aside, the point of nuclear stability from which no further action can be extracted. Not merely to unthink your thoughts, not merely to meet a stop-sign and turn back, but to unthink the thoughts about unthinking, and the thoughts about that, quine the whole thing and be done with discourse forever. Ironic detachment taken not merely to a new level but to a whole new realm of smug disengagement, an Alcubierre drive running on exotic logic, causally disconnected from the rest of reason and already accelerating away to some absurd infinity.

0.
This, then, is the antimemetic meme. Don’t take things too seriously. If someone tries to engage in a serious discussion, post a frog picture and move on. Don’t think too hard about it, don’t believe anything you read, don’t try to understand why other people disagree. They’re probably just signalling anyway. Definitely don’t do anything as uncool as caring. Why you mad though? Truth isn’t subjective, of course, we’re not peddling woo here, but don’t waste your time on a mere system. Your impression of reality is supposed to be a big blurry mass, isotropic and incoherent. And so on.

Douglas Adams wrote about a spaceship suffering a meteorite strike that conveniently knocked out the ship’s ability to detect that it had been hit by a meteorite. Thus the beauty of antimemetic warfare: the first and only thing that needs to be removed is the knowledge that you’re not fighting back. Make the thought of not fighting unthinkable, and everything else follows. Can one fight a war with no enemy? Under such circumstances, I don’t see why not. Sam Hughes wrote that “every antimemetics war is the First Antimemetics War” – that a capable response to true antimemetic forces, even those arising purely through natural means, requires respondents who are as good on their first day as they’ll ever be. For the weaker antimemes of the real world, we have perhaps a little leeway, a little ability to learn counter-techniques.

Thus my conclusion. If we cannot re-learn honesty, earnestness, dialogue on the direct object level, then we will lose a war we can’t see being fought to an enemy that doesn’t even exist. I say this with utter seriousness.

Culture War Glossary

Balance:
Describes a state of affairs in which you are winning. See also Fairness.

China:
Somewhere very far away.

Compromise:
An idiotic manoeuvre where you concede some of what the Ingroup wants, which is sacred and precious beyond measure, and grant some of what the Outgroup wants, which is twisted and vile beyond belief. It is unclear why anyone ever attempted to do this.

Culture:
The exact nature of Culture is unclear, but it is inferred from the statements of Culture Warriors to be an opaque, coloured, volatile, immiscible, flammable and strongly-odorous liquid with powerful psychoactive effects.

Culture War:
The current state of affairs regarding Culture. Believed to have been started in early 2002 by the Bush administration as part of a general policy of starting unwinnable abstract conflicts.

Culture Warrior:
An active participant in the Culture War.

Degeneracy:
People liking things that you don’t like.

Democracy:
A means of governance that functions well so long as it has Fairness and Balance but sometimes allows Degeneracy.

Echo Chamber:
Dwelling-place of the Outgroup and center of their crazed religion.

Elite:
An educated member of the Outgroup.

Expert:
An educated member of the Ingroup.

Fairness:
Describes a state of affairs in which your enemies are losing, and more importantly, suffering. See also Balance.

Immigration:
A powerful kind of magic with contradictory capabilities. A great cause of conflict in the Culture War.

Ingroup:
A diverse coalition of free-thinkers like you, doing their best to save the world from the Outgroup.

Islam:
Primary cause of conflating high odds of X given Y, with high odds of Y given X.

Liberalism:
A philosophy that espouses individual freedom; formerly quite popular in The West.

Magnanimous:
Describes a winner from the Ingroup.

Media:
Believed to be a mind-control device of some kind, controlled by the Outgroup.

Narrative:
A method of belief formation in which you start with what you want to conclude, and work backwards from there to fill in facts, statistics etc.

Neoliberalism:
Liberalism, but bad (e.g. when it’s being advocated for by the Outgroup).

Neutral:
Someone who says they’re from the Outgroup, but shares all the opinions of the Ingroup.

Nuclear War:
Definitely impossible according to all sides of the Culture War. No precautions are required to prevent this, because it can’t happen.

Outgroup:
A tribe of Them, dominated by groupthink, who hate everything good (such as the Ingroup) and are deliberately trying to destroy it.

Racism:
A special kind of evil of which only the Outgroup are capable.

Russia:
Harmless.
Mostly harmless.

Science:
One of The West‘s bad habits, which it is doing its best to break.

Smug:
Describes a winner from the Outgroup.

Truth:
See Narrative.

Virtue Signalling:
Someone from the Outgroup saying something nice. Obviously they can’t possibly have meant it, so it was clearly a ploy to try to seem good.

The West:
Countries associated with the Culture associated with white people.

Dear Dinosaur,

So, a year late to the controversy party, I read “If You Were A Dinosaur, My Love” on the recommendation of Eneasz Brodski. But does this sound familiar: a tragic story about loss is presented using masterful language and receives great critical acclaim, from which follows a backlash from those who don’t consider it part of the medium it was being acclaimed in?

I can’t remember how long ago I ‘played’ Dear Esther, but it was fairly soon after it was first released as a kinda clunky mod rather than its own ‘game.’ I liked it a lot, almost entirely because of its haunting visual beauty (which only gets better in the final release), its great choice of soundtrack and its delightful narration/writing. But note the absence of any actual video-game elements from it – apart from the random choice of narration fragments, you’d think it could be done just as well as a short animation.

In fact, Dear Esther only works when the player can treat it like a game even though it’s not. By expecting to be involved in the story in the way a game’s player is, you end up being exactly that. You have to believe that you are the story’s teller in the same way you can believe that you are Chell or DeWitt, and the absolutely minimal amount of control – just enough to walk around as you please – is necessary to achieve that.

Now compare If You Were…, which is not an SF/F story. But by believing it kinda-sorta-is, the reader can be persuaded to humour the narrator’s flights of fancy for just long enough for the author to throw out a BE SAD NOW, drop the mic and leave. As far as I know, though, If You Were… never sold itself as being SF/F; it just got a nomination for a Hugo from fans willing to push a boundary.

If You Were… is perhaps less genre-fiction than Dear Esther is a game. But the resemblance is nonetheless quite striking, especially when you take into account the reaction each received. It should come as no surprise that the “urgh who put this smug literary crap in my vidya” faction quickly allied with the “urgh who put this smug literary crap in my SF/F” faction.

Overall I liked Dear Esther a lot more. In particular, while both are well-written, Dear Esther impressed me a lot more with its focus on meter and pace. Also, it’s really pretty.

Donations

Consider the argument that runs as follows: “If we abolished taxation, private donation would cover the costs of the public goods that the government currently pays for. If it fails to do so, that just shows that people didn’t really care enough about the thing to give their money for it.” A recent near-example: Alex Zavoluk argues that the welfare state is possibly less effective than private charity.

Ignore for now the obvious coordination problem aspect. Couldn’t we simply make a symmetric argument?

“If we abolished the free market, private donation would incentivise creation of better goods and services. If people don’t freely give some of their state-issued money* to whichever* state-owned widget-maker they like best, that just shows that they don’t really care enough about having specifically that brand of widgets.”

Is there anything fundamental to break the symmetry here? Or should we admit that we should expect the public good to be as effectively provided for under total capitalism as high-quality production would be incentivised under communism? Which is to say, not very?

* – I am aware that “having money” and “having multiple competing providers of goods” are not considered typical parts of a socialist state. I don’t see the relevance, though – suppose that they were, then what?
