Why’reheading

Why’re we teaching teenagers about safe sexual practice, when we could just try to treat the underlying psychological issues that make them want to have sex?

Wait, no, that’s stupid [1]. Why’re we allowing people to buy make-up or cosmetic surgery, when we could just try to treat the underlying psychological issues that make people unhappy with their appearance?

Wait, that’s stupid [2]. Why’re we supporting people transitioning to the appropriate gender when we could just try to treat the underlying psychological issues that cause dysphoria?

Wait, that’s also stupid [3]. Why’re we using exercise regimes and gastric bands when the solution to obesity is to treat the underlying psychological issues that make people want to be thin?

Wait, still stupid [4]. Why’re we suggesting improvements to societal structure when clearly we could just treat the underlying psychological issues that make people in modern societies unhappy?

That’s probably stupid.

I’ve seen all five of those arguments made in varying contexts. The point of this selection was to maximise the number of people who have at least one issue on each side, so as to frame the discussion more neutrally. But if you guessed that the origin of the discussion was the highly contentious third, well, I wouldn’t call your guess wrong. Equally, the aim is to skirt around the highly abstract last one, which is by far the most important.

Seeing these five side-by-side makes it prudent to start by examining the most common argument that’s made in favour of each of them – a slippery slope argument of the form “but if we accept people getting their noses shrunk, how can we say it’s wrong for them to turn themselves into freakish monstrosities of chitin and tentacles that should be cleansed from the world with fire?” Well, we can’t, same as there’s nothing wrong with people removing their limbs if it’ll make them more comfortable with their body, or people trying to find fulfillment in life rather than altering themselves to find their current circumstances fulfilling. Sure, maybe they’d be better off following Buddha’s advice and trying to become perfectly phlegmatic about things, but that’s ultimately a demand for people to change in an unlikely way.

Now, perhaps with sufficient enlightenment or technological advancement, we could make it less unlikely. We could find drugs that treat gender dysphoria or autism or being unhappy about being overweight. That’d certainly be a start. Why not go further? We could have a pill to cure ennui, a surgical procedure to make Mondays seem less horrible, or a vaccine to prevent liking Nickelback! Wait, is this starting to sound familiar?

The well-read will recognize this as an argument fundamentally about wireheading. Well, perhaps it is not such a good mark of entry into the elite once it has a wiki page. The foresighted will note that I am also constructing a slippery slope argument in the opposite direction: if the answer to the question “how much wireheading do we support” is not “none,” or “only as much as people want to do to themselves,” then how much? If people want to be thin, rather than accept being fat and become happy with it, why should we tell them that correcting their psyche is the better option?

This isn’t exactly the same thing as wireheading on every level, of course. But it certainly seems like any argument that proves that it’s right to alter people’s minds in the listed ways ought to be strong enough to also prove that it’s more generally right to alter minds in any number of unlisted ways. And if we take it as given that we don’t want to just alter ourselves to be permanently blissed out by everything, it follows that any particular argument for why certain people should alter themselves to be happy with any particular thing is in need of justification beyond “it would make them happier.”

There are a handful of such justifications that get seen fairly often. Appeals to Nature or God are of little interest to me; if you wish to make such an appeal, I suppose that’s your prerogative. Sometimes people make appeals to unfair competition – “some people are more attractive, and given the demands of competition, this amounts to a kind of tax on unattractiveness, which would be better off removed.” While I kind of agree, pushing back against the existence of options to alter the environment in favour of altering the individual just seems like a poor way to resolve this. If there are any good ways, I don’t know of them, though.

To go back to an earlier point – in Buddhism, there is a story of a farmer with 83 problems [5]. The story goes that the farmer went to see Buddha, who was known to be wise beyond wisdom, to seek counsel on how to rein in his errant son. Buddha said he could not help with that. Well, said the farmer, maybe you could advise me on how to mend my leaky roof. Buddha said he could not help with that. Okay, said the farmer, maybe you could teach me how to mend shoes. Buddha said he could not help with that. And so on through the farmer’s entire list of problems. At the end of the list, the farmer scowled and said “is there anything you can do?” And Buddha said he could solve the farmer’s 84th problem: that the farmer wanted to not have problems.

This must have made Buddha feel extremely clever, except that I’m pretty sure he was above that. But the farmer’s son was still errant; his roof still leaked; his shoes still tattered. One might even argue – given that Buddha could not wirehead anyone, and could only suggest decades of meditation and self-doubt – that he gave the farmer the 84th problem of feeling like caring about all those other problems was his fault for not being phlegmatic enough. Now, maybe the farmer attained enlightenment and was happy – or whatever positive-affect adjective you use to describe enlightened beings, anyway – and if so, good, but did Buddha really do all that he should in this story? Was it okay for him to sit back, content with having provided only the option to remove the perception of there being any problems?

As with the list of examples above, there’s one simple answer: that if someone prefers not to alter their preferences, then we should not say that having provided the option of doing so fulfills all moral obligations to alleviate their suffering. There are lines to be drawn on how far it is acceptable to go in pursuit of such, but the line is not here unless we want to say that in every case listed so far and many others besides the correct approach is “just don’t care about it.”

Well, I hope I’ve explained a bit about why I think wireheading is the wrong sort of approach to the, hah, problem of having an imperfect world. If people want to alter themselves, sure, but the mere existence of that as an option would not be enough to dismiss unhappiness.


[1] – Because that sounds completely impossible.
[2] – Because self-esteem can only get you so far; attractiveness isn’t going to be purely socio-cultural.
[3] – Again, that also sounds pretty much impossible.
[4] – Again, social norms aren’t so loose that people can expect to do equally well by following or by defying them.

The important note here at the foot is this: these arguments are constructed without reference to what the patient wants, i.e. no “but they probably don’t want to be cured of that desire.” And if you’re asking some of the questions but in other cases using answers similar to the footnotes, note that these are pretty interchangeable. For instance, obesity kills as surely as cancer does – and so does gender dysphoria, if you accept that “just have them not commit suicide after we insist they live a life that will make them want to” is not a valid method of engineering solutions, in the same way that “adopt the NAP” is not a useful solution to propose for gun violence.

[5] – It’s not clear why that number in particular, but I did remember the number perfectly despite having heard the story only once. Maybe there’s something to it!


Utter Seriousness

Epistemic status: trying to talk about things that actively defy being talked about. Largely pointless. Occasionally descends into nonsensical prose for no reason.

1.
A basilisk is a fearsome serpent hatched from a toad’s egg, praise Kek, incubated by a cockerel. It possesses potent venom and, critically, the ability to kill those who look at it. The idea was brilliantly used in the famous short story BLIT for a deadly fractal image. A basilisk, then, refers to a particular type of antimeme: the kind that kills those who perceive it, thereby preventing its own perpetuation. There are others.

Post-Truth and Fake News have become the defining political issue on my mind lately, which is either pretty impressive given the circumstances or completely predictable given the zeitgeist. And indeed the world is noting the significance of the crumbling of the possibility of genuine discussion as right and left retreat into worlds not merely of separate ideals but separate facts. TUOC writes:

I bet there are a lot of people who read r/the_donald and have a vague impression that refugees committed six murders in Canada last night, a vague impression which will stack with other similarly unverified vague impressions and leave them convinced there’s an epidemic of refugee violence. I have no idea what to do about that, and it terrifies me.

As it turns out, there was a popular thread there about the true identity of the shooter. But note how none of the details are in the thread title – the memorable point will still be “uhh, terrorism’s sure rampant with all these refugees, isn’t it?” And also note this story in which the Orange Man himself joins in on the action. Now, it certainly seems like he was talking about some kind of event in Sweden on Friday 17th. But his fans quickly accepted the alternative interpretation he gave, that he was talking about a Fox News report about Sweden. And then proceeded to claim that it’s everyone else who’s just buying into a narrative. And kept the vague impression that there’s terror and crime in Sweden beyond all proportion to what was actually the case at the time of the statement (retrocausality being almost certainly impossible). Or consider this discussion which takes a look at exactly the same thing from the other political side.

James Hitchcock also weighs in:

A less-discussed innovation of modern politics is the collapse of earnestness in public discourse. Sarcastic and ironic modes of conversation have sprouted like fungi wherever political discussion occurs – in political speech, formal journalism, social media formats, and on online content aggregators such as Reddit and Tumblr. This mode of discourse provides lazy, comfortable white noise as a backdrop to political discussion, a rhetorical style that can be genuinely funny but that masks a lack of faith in one’s words. Moreover, it deprecates sincerity as a value worth striving for while engaging others.

Anderson and Horvath discuss one of the purveyors of antifactualism in depth here, saying:

In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them.

This widely-circulated post gives another good breakdown of the phenomenon, although I don’t know if it needs to be attributed to enemy action. This article discusses the notion under the name “the big joke.”

This is simply how modern ‘media’ works. People can’t maintain a cognitive network in which they keep track of what each source is saying, which people in their less immediate social-media circles can be expected to pursue true beliefs, which of the myriad links they need to follow to learn more and when they can safely trust someone’s summary. So people end up with vague impressions, ghosts of maps.

2.
Yudkowsky wrote on thought-terminating clichés in straightforward terms. Alexander wrote about the “bingo card” as part of a larger-scale discussion. The former is the negative-sense, “thing that you stop thinking when you encounter,” the latter the positive-sense “thing to which other ideas are drawn and approximated,” but in both paradigms, a mind adds on a structure that automatically resists attempts to modify that structure.

Consider, then, this comment suggesting that commentators who “will always wrap up their counterpoints in lengthy and citation-heavy word salads designed to give an impression of intellectual honesty” are malevolent, or this alt-right meme creating the impression that arguers who acknowledge complexities of positions are laughable. If you’re imagining a bingo card with squares like “But I Have Evidence” and “Is Polite and Acts Reasonably,” well. Bingo. With such a mentality becoming commonplace, discussion can become utterly impossible rather than merely “urgh talking to $OUTGROUP is impossible”-impossible.

But then consider in juxtaposition the notion of the thought-terminating cliché. What if you put up stop-signs around the action of thinking about things in the evidence-based, polite-and-reasonable fashion? What if noticing yourself taking any foreign idea seriously were cause to shut down inquiry along the lines of noticing that you’re questioning the sacred/taboo?

The idea of doublethink goes back at least as far as the 4th century BCE, when the tenets of Buddhism were first laid down. In typical meditative procedure, the practitioner attempts to dismiss their distracting thoughts as they form, eventually becoming proficient enough to be free from onerous mental diversion, which, it is held, is the root of all dukkha (like ‘suffering,’ but much less so). The goal is noble enough, and the technique actually quite useful, but it reveals an important secret of the human mind: it is possible, with training and practice, to go from avoiding pursuing thoughts, to avoiding thinking them at all. This has some implications for the nature of the conscious mind which I feel have not been fully explored by the non-reductionist crowd, but that is a different discussion entirely.

(My apologies for brutally over-simplifying this practice; it is meant to be illustrative of an idea, not dismissive of a religion.)

Of course, when people hear “doublethink” they don’t think of ancient religious practice, but rather the comparatively very recent 1984. Wikipedia quotes Orwell describing it as:

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

In Orwell’s fiction, when The Party demands doublethink, it is supposed to be demanding the impossible – an illustration of how the state is all too happy to make everyone a criminal and then selectively enforce the law against those it dislikes, as well as the particular anti-truth brand of impossible to which the Party adheres. However, the real doublethink is a simpler thing, something the brain is perfectly capable of doing – as has been known since antiquity. It is merely one more stone in a bridge to post-truth.

3.
Edit: This is by far the most contentious section, perhaps unsurprisingly. However, it’s also quite tangential – skipping ahead to the end is entirely reasonable. There’s also a rather more productive discussion in the comments!
So let’s talk about “postmodernism,” by which I mean “the thing referred to commonly as postmodernism” rather than postmodernism itself (for a good discussion of the distinction, see this thread. OP is a bit smug and wrong, but the overall discussion is good). But surely no one takes it truly seriously any more? Isn’t it just a funny game that humanities sophists use to amuse themselves? Didn’t Sokal prove that or something?

I used to joke about Virtue Mathematics, by analogy to and as a criticism of Virtue Ethics. “Mathematics is simple!” I would say. “Just stop dragging up all these crazy notions of ‘proof’ and ‘axioms’ and ‘formalism’ and simply accept the conjectures that the Clever Mathematician would accept! True understanding, the kind that actually matters in day-to-day life, has nothing to do with carefully-constructed theoretical quandaries, and mostly comes down to intuition, so obviously intuition is the true root of all mathematics!” This struck me as quite funny, though it’s more mockery than real criticism. But are there really people who take this attitude and who can be taken seriously?

Jordan Peterson talks about “Maps of Meaning.” David Chapman talks about “Meaningness.” I am almost convinced that they are talking about the same hard-to-grasp thing. But I am also almost convinced that the thing they’re talking about is simply their own confusion.

In Peterson’s case it’s hard to directly quote him, as he has a habit of wandering off on huge tangents that will provide context for important statements – talking about zero to talk about trading to talk about Monopoly to talk about Pareto distributions to talk about Communism to talk about the USSR to talk about growing up in the 80s, in order to give the life-story context to a discussion of… well, I’m not sure, he didn’t really specify. Nonetheless we will make an effort.

He will say things like “I realised that a belief system is a set of moral guidelines; guidelines of how you should behave and how you should perceive.” This seems like word salad on the face of it, but maybe we can drain off some of the overabundant dressing and fish some tasty radish or cucumber out of the mass of soggy lettuce and bewildered mushrooms.

Well, undeniably some belief systems include moral guidelines on how you should act. That much seems, well, trivial? That is, that can’t possibly be a realisation. No, the position being sought here is that all belief systems are, contain, or break down to rules about how you should perceive the world. The fallacy of grey leaps to mind. Even if this were true, it would not be even slightly useful for helping determine which among the many belief systems is the most appropriate to adopt in consideration of the goals you wish to achieve. It renders belief systems indistinguishable, claiming that since scientific belief systems also guide how you should perceive, they’re not any better than any old random belief system you found in your grandfather’s attic.

In fact, his whole style is described as “immunising [the audience] from a totalitarian mindset.” Sounds lovely? Think back to the cognitive lobster-pot of previous paragraphs, the bingo-card thought-replacement. What is a totalitarian mindset, according to Peterson? Well, one example would be supporting laws against hate speech, of any form. Now, we can disagree about where exactly is the best place to put the boundaries of free speech. That can be a productive discussion. But when one side is screaming that anything less than total adherence to their absolutist position makes you the same as Stalin, that discussion evaporates.

He will also say things like “[…] so when everyone believes this, it becomes true in a sense.” This is referring to things like contracts, where indeed the truth is (at least partly) determined by what people’s beliefs are. But in that case, he’s not really saying anything. Money only has value insofar as we agree it does? Well, yes. I thought this was supposed to be important new information?

One notes a similarity to Dennett’s notion of the “deepity” – a statement that can be read as either true, but trivial; or deep, but false. “Reality is nebulous” – true if we’re talking about lack of sharp category distinctions, but then hardly a great insight, nor one that requires you to go beyond rationality. Deep if used to mean “there is no universal lawfulness,” but then entirely false. If there is one habit of the metamodernist that gets to me, it is the insistence that rationality can’t explain everything, so it must be incomplete/wrong/broken.

Chapman writes:

The exaggerated claims of ideological rationality are obviously and undeniably false, and are predictably harmful—just as with all eternalism. Yet they are so attractive—to a certain sort of person—that they are also irresistible.

Really? Because I’ve never met such a person or seen him present any examples, and yet his general tone seems to indicate that he thinks the reader is such a person. Yes, calling your readers’ approaches to cognition obviously, undeniably wrong and predictably harmful sounds like a great way to get them on your side. A++ implementation of meta-systemic pseudo-reasoning. But regardless, the reason such claims are exaggerated, obviously false, etc., is that no one is making them.

Essentially the problem with the meta-rationality, post-truth, prefix-word memeplex is that it explicitly demands non-thinking. Thinking is part of the wrong system, the dreaded Eternalist Ideological Rationality. Scott Alexander has discussed this kind of trap twice to my knowledge, once in a review, once in fiction – both theologically rather than meta-epistemologically, but the mechanism of the trap is the same regardless of the substance of which the teeth are made. The variant here is that whenever you think about metarationality using regular rationality, you’re already wrong by virtue even of trying – the same as trying to repent for the sake of avoiding Hell. You’re expected to already be on the “right” level, in order to understand the thoughts that justify why it’s the right level. Hence “presuppositionalism.”

Chapman does us the favour of writing directly:

Meta-systematic cognition is reasoning about, and acting on, systems from outside them, without using a system to do so.

Once you accept that something can’t be attacked by reason, or meta-reason, or anything anywhere up the chain (systems), it becomes completely immune to all criticism forever. You might say that it’s still vulnerable to criticisms made in the right way – on the right non-systematizing level – but the fact is you will very conveniently never come across any criticisms on that level. You will, weirdly, only ever encounter people trying to critique from “within the system.” Poor dears! They don’t know how helplessly stuck they are, how deeply mired in the Ideology of Rationality!

This essay isn’t meant to persuade people to come down from the tower of counterthought, of course: they are beyond the power of articulate reason to reach. They have rejected the implications of incompleteness proofs, preferring the idea that they are somehow above the chain of total regression, the Abyss of accepting that not being an anti-inductionist is okay, really, reasoning about your reasonability using your reason is the only option and that’s fine. Arguing with postmodernists is for giving yourself a headache, not for having fun or seeking truth. Likewise, the relation is mirrored: someone genuinely convinced of the merit of the object level (rather than merely operating there by default) will not be seduced by the appeal of meta-level 2deep4u-ing.

The emergence of post-rationality/post-truth/post-systemism/etc is the final triumph of what we might call Irony. The iron-clad position of ultimate immunity to everything, the ferrous dark tower against which pin the world must be turned aside, the point of nuclear stability from which no further action can be extracted. Not merely to unthink your thoughts, not merely to meet a stop-sign and turn back, but to unthink the thoughts about unthinking, and the thoughts about that, quine the whole thing and be done with discourse forever. Ironic detachment beyond merely a new level, but taken to a whole new realm of smug disengagement, an Alcubierre drive running on exotic logic, causally disconnected from the rest of reason and already accelerating away to some absurd infinity.

0.
This, then, is the antimemetic meme. Don’t take things too seriously. If someone tries to engage in a serious discussion, post a frog picture and move on. Don’t think too hard about it, don’t believe anything you read, don’t try to understand why other people disagree. They’re probably just signalling anyway. Definitely don’t do anything as uncool as caring. Why you mad though? Truth isn’t subjective, of course, we’re not peddling woo here, but don’t waste your time on a mere system. Your impression of reality is supposed to be a big blurry mass, isotropic and incoherent. And so on.

Douglas Adams wrote about a spaceship suffering a meteorite strike that conveniently knocked out the ship’s ability to detect that it had been hit by a meteorite. Thus the beauty of antimemetic warfare: the first and only thing that needs to be removed is the knowledge that you’re not fighting. Make the thought of not fighting unthinkable, and everything else follows. Can one fight a war with no enemy? Under such circumstances, I don’t see why not. Sam Hughes wrote that “every antimemetics war is the First Antimemetics War.” That a capable response to true antimemetic forces – even those arising purely through natural means – must require respondents who are as good on their first day as they’ll ever be. For the weaker antimemes of the real world, we have perhaps a little leeway, a little ability to learn counter-techniques.

Thus my conclusion. If we cannot re-learn honesty, earnestness, dialogue on the direct object level, then we will lose a war we can’t see being fought to an enemy that doesn’t even exist. I say this with utter seriousness.


GGG Design Philosophy – Points and Counterpoints

It’s a fairly common complaint, amongst both hardcore and softcore players, that Path of Exile has no combat log that can be used to analyse the cause of a death after it happens. A point raised often in this thread is that “GGG doesn’t want to include a combat log, it would give players too much information.” The problems raised with players having too much information are:

  • The game would become too min-maxed, with players calculating how much survivability they need and getting exactly that much.
  • The presentation of the additional information might be overwhelming.
  • The game is supposed to be mysterious and difficult to understand.

I doubt the significance of these problems.

On the question of min-maxing, there are three realities to address. The first and most obvious is that most powerful hits from major bosses already have known approximate values. The idea that any information is actually being concealed here is laughable: the information is simply made less easily accessible. Second, the game simply doesn’t allow for such straightforward calculations. Between base damage variance, critical hits, overlapping AoEs, using defensive curses, being cursed, map modifiers, the various effects that can buff an enemy unexpectedly, and party-play buffs to enemies, as well as non-standard formulae for calculating the effect of armour and evasion, there are more than enough ways that players would be punished for trying to have only barely enough survivability. And third, the game is already a hugely complicated mess of mathematical min-maxing. That’s why people like it.
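The second point can be illustrated with a toy simulation (all numbers here are made up for illustration; they are not Path of Exile’s actual damage formulas or values). Even with only two sources of randomness – damage variance and critical strikes – a player who budgets life for “exactly the average hit” dies constantly, and even a generous buffer still leaves a tail of crit deaths:

```python
import random

def simulate_hit(base_damage, crit_chance=0.05, crit_multi=1.5, variance=0.2):
    """One hit with uniform damage variance and a chance to crit.
    Purely illustrative numbers, not actual game formulas."""
    dmg = base_damage * random.uniform(1 - variance, 1 + variance)
    if random.random() < crit_chance:
        dmg *= crit_multi
    return dmg

def death_rate(hp, base_damage, trials=100_000):
    """Fraction of single hits that would kill a character with `hp` life."""
    return sum(simulate_hit(base_damage) > hp for _ in range(trials)) / trials

random.seed(0)
# Life budgeted for exactly the average hit: roughly half of all hits kill.
print(death_rate(hp=1000, base_damage=1000))
# Even a 25% life buffer leaves a few percent of hits (the crits) lethal.
print(death_rate(hp=1000, base_damage=800))
```

And this sketch omits overlapping AoEs, curses, map modifiers, and the rest of the list above, each of which widens the distribution further.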

As to the question of presentation, I present that it is just that and no more. There’s little demand for ‘floating number’ style information – a combat log accessible after death would be quite sufficient. This might lead to a demand for always-accessible logs, which might in turn lead to third-party programs displaying numbers over the game (this sounds like it would be interacting with the game in memory, which I believe would violate the ToS and almost certainly be detectable). But even so, so what? If people want to debase the game, fine. They’re not gaining any unfair advantage by doing so (if anything, they’re disadvantaging themselves by obscuring the screen).
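To make the presentation point concrete, here is a sketch of how little a post-death log actually needs. Everything below is my own invention – the field names, the monster, and the numbers are hypothetical, not anything GGG has specified – but it shows that a death recap is just a handful of lines of plain text, not an overwhelming wall of floating numbers:

```python
from dataclasses import dataclass

@dataclass
class DeathLogEntry:
    """One line of a hypothetical post-death combat log.
    All fields are invented for illustration."""
    ms_before_death: int   # how long before death the hit landed
    source: str            # monster or effect that dealt the hit
    damage_type: str       # physical / fire / chaos / ...
    raw_damage: int        # damage before mitigation
    taken: int             # damage after armour, resists, etc.
    remaining_life: int    # life left after the hit

def format_recap(entries):
    """Render the final hits in the plain style a recap screen might use."""
    return "\n".join(
        f"[-{e.ms_before_death}ms] {e.source}: {e.taken} {e.damage_type} "
        f"({e.raw_damage} pre-mitigation), {e.remaining_life} life left"
        for e in entries
    )

recap = [
    DeathLogEntry(450, "Vaal Oversoul", "physical", 3200, 1900, 1400),
    DeathLogEntry(120, "Vaal Oversoul", "physical", 3400, 2050, 0),
]
print(format_recap(recap))
```

Two lines of text after the fact answers “why did I die?” without displaying anything at all during play.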

Lastly, there’s the philosophical point. We can break it into two parts:

  • Should a game have mysterious elements – information kept deliberately hidden from the player?
  • If so, is the manner in which a character dies an appropriate place for this?

On the former, I think a good case can be made either way. On the plus side, it can make a game more involving, letting players decide how deep down the rabbit hole they want to proceed. ARGs are the pinnacle of this: the entire game is the hiding of the information. Puzzle games somewhat less so, all the way down to straightforward board games like Chess or Go where the rules are laid out in their entirety from the very beginning. On the other hand, it is also fairly pointless. Consider the hiding of card rarities in early Magic history. This accomplished basically nothing except making the game harder to understand and creating booster packs where the rare card was a basic Mountain.

So maybe PoE should conceal certain information from the player. My contention is that this is not the right place to do it. Compare vendor recipes. These are pieces of information that are very well-suited to being concealed. A recipe, once discovered or learned, is permanently a part of the player’s ability to play the game. Not knowing a recipe may make a player less effective, but it does so “invisibly” – without the player being aware of what they don’t know until they learn it. And it’s predictable, both in the sense that a recipe can be shared easily across multiple players and in the sense that experiments can be done to discover hidden recipes. It’s practically the ideal example of a game developer hiding information from the player community in order to create fun experiences.

Now consider a character death in a hardcore league for comparison. Learning how much damage an attack deals is inexact and doesn’t permanently increase your skill – intuition won’t transfer well between characters with different survival strategies. For the same reason, the information transfers poorly. But most importantly, it’s really up-front and obvious when you don’t know why your character died. And it’s really frustrating. Learning that you could have been turning in your items differently for better rewards is a “kick yourself” moment. Learning that sometimes you’ll just die and the best way to avoid it is to just not do anything in the game because you can’t determine what is and isn’t dangerous might not cause immediate uninstallation, but it erodes a player’s desire to keep playing. After all, it’s not like there’s any practical difference between a game you play but don’t do anything in, and one you don’t play at all.

Overall, I don’t think there’s any good reason to not include combat logs on death in Path of Exile. The stated reasons either seem oblivious to the actual state of the game and the world – failing to account for the increased availability of information since the dawn of the ARPG genre – or philosophically tenuous, generically aimed at creating fun moments of discovery that simply don’t happen in the specific context in question.


Against PJWs

Fiat justitia, et pereat mundus
-Ferdinand I

Epistemic status: mostly just complaining about deontologists’ constant attempts to frame arguments on their own terms.
Also, a smart person once said never to use politics as an example of more general issues. This is excellent advice and you should not emulate my total disregard of it.

Let us consider Procedural Justice. It contrasts sharply with Social Justice, which concerns itself with creating a good society through consideration of people. Procedural Justice is concerned with creating a good society through consideration of rules.

The SJW, or “Social Justice Warrior,” has become a modern archetype. I am now convinced that, unlikable as they are, there is a brand of keyboard crusader I like equally little or potentially less: the Procedural Justice Warrior.

Consider the argument:

In a free market, all trade has to be voluntary, so you will never agree to a trade unless you believe it benefits you.
Further, you won’t make a trade unless you think it’s the best possible trade you can make. If you knew you could make a better one, you’d hold out for that. So trades in a free market are not only better than nothing in the opinion of the traders, they’re also the best possible transaction you could make at that time according to your judgment at the time.
Labor is no different from any other commercial transaction in this respect. You won’t agree to a job unless you believe it benefits you more than anything else you can do with your time, and your employer won’t hire you unless she believes it benefits her more than anything else she can do with her money. So a voluntarily agreed labor contract must benefit both parties in their opinion, and must be preferable at that moment over any other alternative.
Source

What, exactly, makes the society that results from such trades actually good? Well, it’s not that the people in it are happy, fulfilled, free to pursue their dreams or generally flourishing. One can imagine this being the outcome, certainly, but it’s equally trivial to imagine how anarcho-capitalist society gives rise to misery, malcontent, and oppression. After all, this has already happened, probably more than once. Even with a magical power preventing “use of force,” this would happen with probability near 1, unless we populate the society with robot angels. No, the reason this society is Perfect with a capital P is because it was arrived at by following the right procedure. It’s good by definition! Why should mere facts be allowed to interfere?

As Weltanschauung put it:

When it comes to social liberalism, libertarianism says “do not use the legal system to favour or disfavour any particular lifestyle”. Neoliberalism says “work to make sure society is approximately neutral between different lifestyle choices”. These are very very different! Libertarianism is, in theory, comfortable with cultural discrimination if done through “legitimate” means (i.e. respecting personal and property rights). Neoliberalism wants anti-discrimination law—whether regarding religion, race, gender, age, sexual preference—enforced on private businesses, charities and the government alike.

(emphasis mine). Libertarianism is in fact comfortable with any level of awfulness, provided it is done through “legitimate” means. It is an exact reversal of “the ends justify the means.” Instead, the means are supposed to justify the ends.

Let’s try a change of tack. What about:

Eurosceptics often claim that the EU is undemocratic. They argue that the EU’s decision-making procedures make it difficult for EU citizens to influence policy. Due to their complexity, these procedures also seem inaccessible to the ordinary voter. EU citizens do not feel that they have an effective way to change the course of EU politics and policy. Public disaffection has been expressed in the low turnouts at European elections, which reached an all-time low in 2014 with an EU average of just 42%
Source

(Or the inverse case, Trump being defended as being Democratic and therefore Right)

The idea of being democratic has been elevated above the idea of getting things right. The heuristic has become the whole and sum of the law.

The essence of procedural justice is the implicit belief that if you perform the right ritual, goodness will happen as an automatic result. The elegance and obvious-rightness of the simple rule or rules is simply too enchanting to resist.


Just as SJWs have a default response to arguments for conclusions they dislike – calling them racist, etc. – PJWs have one of their own. Think about the intention of calling someone a racist: the accused will usually hurry to disprove the accusation, noting that they have done un-racist-y things, and so on. This will not save them, but it concedes the critical point that whether or not they’re a racist is what matters. The PJW, on the other hand, challenges someone to find ‘where the badness comes from.’ Like finding a mistake in a mathematical proof: if one step in the procedure is flawed, all that stems from it is dead at the root and cannot hold. But the trick was always in the structure of the argument. By trying to meet the challenge, just as with the SJW, the arguer walks into the trap. They implicitly concede that the structure of a mathematical proof, where goodness flows from good axioms to good theorems, is the appropriate structure for determining what is good.


I am only mostly a fool: it is probable that you, the reader, are yourself inclined towards a Procedural Justice view. It will be very tempting to say that it’s just obviously true that if you start with good axioms and can’t find anywhere for badness to come into the situation, then the outcome, whatever it may be, is obviously the best. This is exactly the same feeling the SJW has – that it’s just so obvious that good is what happens automatically when you just get rid of all the Oppression.

I really don’t know how to communicate across the inferential gap, though. I can give analogies, knowing they’re flawed:

Suppose we identify that electrons, protons and neutrons are fermions. We say these particles are “fermionic.” Then we ask whether a helium atom is fermionic. Since it has 2 protons, 2 neutrons and 2 electrons, it must be six times as fermionic as any one of those particles. But that isn’t the case, because the property “fermionic” isn’t an abstract basic quantity, but rather a specific state of affairs that can be cancelled out. Likewise, just because any one voluntary trade of property makes both parties better off, it doesn’t follow that any possible arrangement of voluntary trades of property makes all involved parties well off.
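The physics half of the analogy can be made concrete with a toy sketch (the function name is mine, purely illustrative): the exchange statistics of a bound composite depend only on the parity of its fermion count, so helium-4, with six fermionic constituents, is a boson – not “six times as fermionic.”

```python
def composite_statistics(n_fermions: int) -> str:
    """Exchange statistics of a bound composite particle.

    Swapping two identical composites multiplies the joint wavefunction
    by (-1) ** n_fermions, so only the parity of the fermion count
    matters -- "fermionic" doesn't accumulate like a quantity.
    """
    return "fermion" if n_fermions % 2 else "boson"

# Helium-4: 2 protons + 2 neutrons + 2 electrons = 6 fermions.
assert composite_statistics(6) == "boson"
# A single constituent fermion, by contrast:
assert composite_statistics(1) == "fermion"
```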

But this is hopeless. It can’t overcome the intuition. The Chasm is deep, and full of terrors.


The important part is that I’ve found a way to feel superior to both.


I am not inclined to agree with Chapman’s conception of “metarationality.” It seems like the only reason for it is to attack a straw caricature of rationality while selling something that smells strongly of the old box-outside-the-box. But maybe he has a point. His straw-rationality seems to be strongly similar to the PJW archetype. His proposed solution doesn’t seem very, uh, concretely defined, but might be a step in the right direction – away from the idea that goodness comes from having the right system, and towards the idea that you must choose the right systems to produce goodness.


The thing to remember is that systems designed to produce good outcomes aren’t guaranteed to do so. Of course, this doesn’t mean we should throw the system away every time it gives a result we don’t like – sadly, there’s no procedure to decide when to do so. Sorry about that.


Disciplined Disagreement

Not so recently now, Jason Brennan wrote on a popular “gotcha” in internet debates, and while his take on it took a beating in the comments, it got me thinking.

Is there a justification for science needing philosophy beyond “but whenever you try to argue that it doesn’t, that is itself an act of philosophy”?

That particular gotcha seems reminiscent of proof by contradiction. But if we formalise it, it falls apart:
Assumption: Science doesn’t need philosophy
Implication 1: No scientist would ever engage in philosophical discourse
Premise 1: The arguer is a scientist
Premise 2: This argument itself is a part of philosophy
Conclusion 1: A scientist is doing philosophy whenever they engage in this argument
Conclusion 2: Since conclusion 1 contradicts Implication 1, the Assumption must be incorrect.
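The intended shape is modus tollens, which can be written out (here as a Lean sketch, with made-up proposition names) to make explicit that the whole argument is valid only if Implication 1 is handed to us as a premise:

```lean
-- `no_need`: "science doesn't need philosophy" (the Assumption)
-- `engages`: "a scientist engages in philosophical discourse"
-- The proof below is trivial modus tollens -- but only because `step`
-- (Implication 1) appears as a hypothesis rather than being derived.
example (no_need engages : Prop)
    (step : no_need → ¬ engages)  -- Implication 1, the unjustified link
    (observed : engages)          -- Conclusion 1: the arguer is doing philosophy
    : ¬ no_need :=
  fun h => step h observed
```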

Written out like this, it seems much more obvious that the ‘gotcha’ isn’t even an argument. It’s a signal for philosophers and philosophy-fans in internet comments to snicker and pat themselves on the back for being so much cleverer than those stupid self-contradicting scientismists. Which is why Brennan’s counter-gotcha is likewise a signal for scientists and science-fans on the internet to giggle and raise their glasses in celebration of how much more productive they are than those angel-counting philosophismists.

Or, to put it explicitly: Implication 1 doesn’t follow from the Assumption. Not even close. Premise 1 might not be true in any given use of the argument, since it’s mostly deployed against science-fans. Premise 2 is deeply dubious and seems to rely on mixing meta-level “discussion about philosophy” into “philosophy.” Yes, if we say “philosophy” is a field of study that includes all studies of itself, studies of studies of itself, and so on ad infinitum, that’s permissible, but that’s not what we were saying at the beginning.

Trying to prove that science doesn’t need philosophy by getting science to engage in a meta-discussion of philosophy is like trying to prove that you need to know group theory to solve a Rubik’s cube. Actually, it’s not even that good. It’s like arguing that you need to know Zermelo-Fraenkel Set Theory in order to write a diary, since every argument you make that that’s a stupid idea can be expressed using ZFC. Or that you need to carefully study human languages in order to do meaningful heart surgery, since any argument against the notion will be expressed using language.

I worry that the above is not an adequate expression of the counter-position. Let me elaborate.

When someone argues “scientists need to study philosophy,” they are referring to a particular body of thought and knowledge. The most common case is ethics, but there is also frequently a demand that scientists should (for reasons explored below) understand the epistemological framework that underlies the scientific method, the justifications of the idea of Knowledge that underlie that framework, and so on. But then when someone objects that actually that isn’t the case, their objection is not part of that same body of philosophy, but rather a broader “meta-philosophy” of ideas about philosophy itself.

How does one argue from “all objections to studying philosophy are meta-philosophical in nature” to “and therefore you should study philosophy,” without equivocating between the philosophy that was originally meant by the injunction to study philosophy, and the meta-philosophy that was introduced as part of the gotcha? Isn’t this a case of constructing a false reference class, “philosophy plus meta-philosophy (plus m2-philosophy etc)”? The grouping is not unnatural, but it would not be used in this discussion except to make people think they’ve already accepted that they are doing one thing when they’re actually doing the other. The original discussion, remember, was about ethics or the nature of truth or something. Proving that you can’t avoid arguments about what you can and can’t argue about doesn’t prove that you have to do any of those things. It’s a classic semantic bait-and-switch, and people need to stop using it.

To conclude this section: this gotcha stands only as long as it’s not thought about, at which point it collapses into a word-game and some shady inferences.

* – For example, you could argue that while scientists do engage in philosophical discourse, they should never need to engage in philosophical discourse about science (which this argument is). Really, though, trying to salvage a solid argument from this rhetorical trap is a mug’s game.


Let’s examine further the question of why it’s considered necessary for scientists to justify their basic epistemological framework. One answer springs immediately to mind. I do try to resist the urge to be cynical, and fair warning: this is very cynical. But nonetheless, isn’t it possible that the reason philosophismists love to demand that scientists provide a rigorous defense of their underpinning assumptions (which usually aren’t too far from naïve falsificationism) is that the philosophy-fan in question has a really smack-bang-whiz-pop takedown of the expected reply? Something that they’re just aching to unleash on someone who never read a bunch of impenetrable books about how truth is dead? This is definitely what the scientism followers are afraid of when they’re wary, or even outright dismissive, of engaging in such discussions. Eventually (actually, very quickly) you get tired of hearing that knowledge is merely opinion and therefore you can’t know nuffin’, and of cleverer arguments to the same effect.

In fact, I would go further than this and say that there is no currently-available reply to the scientist’s philosophical justifications that is in any way worth pursuing in depth. The whole line of inquiry just leads into either an endless rabbit-hole of one or another interminable semantic and/or axiomatic argument over which assumptions are more basic**, or the inquirer says “oh okay I guess you did your research and gave exactly the answer I knew you should give in your position as a scientist, never mind.” I’ve never actually seen the latter happen, but it could. Maybe.

But it would be better to assume that the desire to have science be “grounded in (good) philosophy” is an honest one. Thus, we come to the question of what it actually means to “ground” science in good philosophy. An excellent answer to this was given by Daniel Kokotajlo, also no longer recently:

I’m not trying to say we can’t do science until we do philosophy. What I’m saying is that if we don’t do philosophy, we’re opening ourselves up to the possibility of being permanently and dramatically mistaken in our science. For example: If we think that Laws of Nature are literally Laws set down by God, as the original people who came up with the concept seem to have thought, and we don’t ever do any philosophy to question that, then the existence of God will be an unexamined assumption of science, and worse yet, people will eventually get confused and think science has actually proven it. My worry is that something similar might happen with consciousness.

This is a very good point, but I think it relies on a rather philosophy-oriented view of history. Let’s take an example. It was previously held that the universe would stand eternal, on the grounds of what seemed like rock-solid philosophical consideration. However, advances in the science and engineering of steam power led to the notion of entropy, and the conclusion that everything is doomed to eventual dark and cold. This took apart the notion of an unlimited future. Were there philosophers arguing for a finite future extent of the world before this? Well, of course. Did some of them venture the notion of inescapably rising disorder as an a priori fact? Possibly, but not that I’ve found mention of. And this is an easy case, since you can derive the notion of entropy from any number of thought experiments once you know to think that way. But critically, even if it was voiced as an idea, it did not become widely known and lead scientists to wonder. The process seems to have been mostly the exact opposite of that, with feedback from the more abstract levels to the more concrete only occurring after reality called attention to the area and practical investigation had been done.

Even if we take the most realistic view, of side-by-side evolution – we should recall, after all, that there was no sharp distinction between ‘scientist’ and ‘philosopher’ at the time – it still seems unrealistic to say that philosophy was what took apart the eternal worldview leaving those same natural philosophers free to theorise about entropy, thereby enabling them to, in their engineering-time, design better steam engines.

Consider also the concern that people will think science has “settled” philosophy in the wrong direction. This, too, has already happened, with many thinkers of the 20th century deciding that quantum mechanics had disproved realism***. And yet we now have more interpretations of quantum mechanics than ever, including multiple realist interpretations such as Everett’s many-worlds and de Broglie/Bohm’s pilot wave. The important point here is that science didn’t become stuck after “proving” a wrong philosophical position.

Granted, it would have been better if Bohr hadn’t pushed his anti-realist interpretation so hard, and the whole philosophical question of what quantum mechanics means had just been left fallow until a broad range of ideas had been formulated. But Popper’s case wasn’t “let’s wait and see what else people think of,” but much more along the lines of “down with the anti-realist tribe!” So philosophy wasn’t exactly helping things. Also of note, Bohr was more of a philosopher than Everett or de Broglie (though a bit less than Bohm).

And consider the opposite case: people thinking science hasn’t settled philosophy when it has. The nature of time is the big one here. General Relativity might not prove Eternalism true, but it certainly takes a heavy toll on all non-B-theories of time. Continuing to hold to such theories seems at least as bad as the mistake that Bohr made. Especially when the people who do so don’t seem to have actually studied relativity and gained the deep understanding of it that makes it clear just how much damage the notion of a universal-now has sustained. It may also be of interest to consider William Lane Craig and his total failure to grasp the core concepts of transfinite cardinality, decades after the foundations were laid down by Cantor. But this would carry us off into mathematics, which is too useful a bridge between disciplines to trample over in this essay.

The important thing to take home is that an ongoing philosophical debate on an issue isn’t sufficient to prevent scientists from thinking a result proves their own preferred philosophy true; conversely, scientific investigation does eventually overcome philosophical misguidedness.

** – I believe this is called “doing philosophy.”

*** – The latter of these two links is a surprisingly readable account; consider it generally recommended.


If all this has given you the impression that I am a Harris-following antiphilosophe about to propose we bulldoze the humanities department to make space for a statue of Newton standing on Descartes’ neck, then allow me to reassure you: that would be entirely too much of a good thing.

Alright, maybe that’s not reassuring.

We’re left with a few more arguments for adding philosophy to science courses. Scientists do seem to have a habit of making philosophical proclamations and getting much more respect and attention from the public than actual philosophers, which must be quite upsetting. This is bad, because…? Well, it gives the public a mistaken impression of the field. But you can imagine what would happen if we poured public scrutiny onto professional philosophy: the whole field would be seen as a bunch of overpaid bearded fellows generating an endless supply of pure hot air. Politics without the fun easy-access tribal bickering.

I’m not quite willing to endorse the further statement that this is bad because the scientists who do this are never right about anything to do with philosophy. Their contention that the existence of angels can be dismissed without first studying the supposed physiology of six-limbed endoskeletes is maybe not sufficiently elaborately phrased and carefully constructed, but fundamentally comes from the right impulse. No one can afford to make a perfect study of every field they decide is nonsense, and it’s better to be too aggressive in cutting out the useless than too merciful.

And of course, there’s the argument that studying multiple disciplines makes you a well-rounded person. Sadly, this argument is pure wishful thinking. Not to mention, it’s not really clear what “well-rounded” means. We’d all like to be wide-ranging masters of many skills, warrior-poets and New Age Renaissance People. But we can’t, and pretending that learning to regurgitate some pre-thought Deep Wisdom about Hegel is the same thing is just sad.

Or, to make a more forthright claim, people become “well-rounded” by pursuing the full breadth of their own interests. You can’t teach being an interesting person, and you certainly can’t do it by the methods employed to teach philosophy or other humanities.

Which leaves an argument about ethics. I want to say, “well, tell me about all these evil scientists who destroyed the world because they didn’t learn that Human Life Is Valuable in a classroom at some point. Oh, they’re all fictional? And the stories they take part in were pretty much universally written by humanities-fans? How convenient.” But of course, if science actually had destroyed the world, we wouldn’t know about it. And the power of science grows over time: we might not be able to generalise from a history of not-destroying-the-world if the capability only recently became available. Genetic engineering is the standout example here.

However, I think there’s an important point in the above. The conception of scientists as being inherently ethically untrustworthy is from popular perception, not any actual lack of ethics. For instance, it’s not uncommon to hear that science was willing to run the Trinity test without knowing if it would ignite self-sustaining nitrogen fusion in the atmosphere and end the world right there. Of course, they knew that couldn’t happen. The same story may be repeated nearly word-for-word with the Large Hadron Collider. The fact that the popular perception lambastes scientists for taking the risk rather than boggles at the sheer level of caution required to carefully consider such far-out possibilities in the first place says a lot.

Now, I’m confident that most people reading this don’t need to be told that it was well-known that neither of these experiments would destroy the world. But is it possible that the perceptual effect, the thinking that scientists must have been very foolish or very unethical to try, still lingers even after dismissing the original urban legend? I can’t think of any good reason to suppose that scientists are insufficiently ethical, and would be quite interested to hear one.

There are several object-level reasons to insist that scientists learn some philosophy. But to suppose that certain professions engender a lack of basic human ethics that must be corrected by education, or a lack of basic human interests that must be corrected by education, or a lack of basic human respect for others’ expertise that must be corrected by education: these are claims in desperate want of evidence.


I will use the conclusion to this piece to say that it is good for a scientist to also study philosophy. It’s good for anyone to study anything, in fact. The interested-amateur – the science-fan on the internet – is in a particularly good position to gain something, since they’re not exactly busy publishing papers about neuroscience or quantum mechanics.

And I don’t think the fields should be entirely separate and never talk except through a hole in the bedsheets. There is something quite sad about the loss of the old “Natural Philosophy” discipline. I wanted to avoid making this about who more commonly tramples whose territory, because I think that concedes too much to the idea that either discipline has any territory that is in some way sacrosanct.

The only conclusion here that I’m confident of is that people should stop using the gotcha at the top of this post. It’s trivially incorrect as well as deeply flawed, and inculcates a really poor attitude that reduces a difficult and interesting question to a moment of circlejerkery. Do not do this.


I’ll go back to Python and extended Tolkien metaphors soon, I promise.


Substance Dualism

I’m no philosopher, but…

A.
So substance dualism is the view that, in addition to the material world governed by physics, there exists an immaterial world which presides over Agency (that is, free will) and Subjective Experience (muh qualia). Unlike Epiphenomenal Dualism, the more substantial brother asserts that the immaterial world really exists and has real effects on the material, and vice-versa. However, it operates under different laws which are not reductionist in nature, allowing Agents to be basic building-blocks of reality. This “explains” free will and qualia, by adding a bunch of undetectable ghosts that… have free will and qualia as basic properties.

Oh, and: the Immaterial world is completely undetectable except by its actions on human brains (none of which have ever been observed, but admittedly non-invasive scanning has poor resolution and brains in particle accelerators tend to stop making choices and having experiences fairly rapidly).

Now, the human spirit may be indivisible and immaterial, but the brain is not. And we know by now that the human brain is where actions in the material world come from: people’s mouths wouldn’t talk about their inner lives without muscles pulling around, which wouldn’t happen without nerve signals, which depend on other signals back into the dark recesses of the head. For substance dualism to be true, there must be some point, somewhere along the chain, where physics as we understand it from the physics lab is suspended and the immaterial world pushes some physical matter one way or another according to the will of that matter’s associated Agent.

This seems fine, provided there is some as-yet unknown mechanism by which this takes place. It is possible for the dualist to sidestep the charge that specific kinds of brain damage cause specific apparent changes to the Agent by supposing that the Agent has a message-sender that is exactly as complicated as the gaps in our understanding of the brain. That is to say, if we could selectively damage single cells and observe changes to the patient, then the Agent would need a message-sender as complicated as a cellular-level model of the brain, so that when that cell is damaged, the message can be corrupted in a corresponding way. For now we’re limited to observing only very crude structural changes in the brain, so the dualist need only imagine that the Agent has a large handful of pieces of message-sending apparatus*.

But overall, this seems fine.

Except, there’s something a bit wrong here.

B.
The human brain is fantastically complex [citation needed]. It could yet prove to be the case that its processes are beyond our comprehension entirely, though I doubt that. In any case, it is without doubt a marvelous piece of gear.

But. It’s also a wet, squishy, warm lump of hydrogen, oxygen, carbon, and various other light elements. To a physicist, the difference between a brain and the same brain after random permutation via kitchen blender is not particularly huge. For this to be detecting new physics would be – it would be something beyond unprecedented. New physics a few centuries ago meant electricity, sure, and it wasn’t inconceivable that there were similar advances waiting to vindicate the notion of the brain as Immaterial-Uplink-device. But new physics today? After all those centuries of new physics moving outside of the chemical and biological domain?

Look. If you want to find new physics, you either need to do something very cold, very hot, or very regularly arranged in an unusual way (e.g. quantum computers, Higgs bosons, Majorana fermions). The notion that all human brains, regardless of the large structural differences between them, are being affected by forces that no other particle in any situation ever experiences, is not something you can seriously believe in the face of this vast gap. So you have a feeling of internal experience that you think can’t possibly be explained without a stuff-of-internal-experience in a non-physical realm. How would you quantify that feeling of inexplicability? A thousand to one? A million? The chances that your brain is doing something no physics experiment ever has are smaller than that, by quite a lot.

C.
And even if we could find this new physics, the Immaterial is still made entirely of convenium. Isn’t it just – just silly to suppose that there are a bunch of Agents hanging out in paradox-space, who happen to vary among themselves in exactly the same ways that human brains vary among themselves? If you could look at the diverse variety of human behaviours, and the diverse variety of human brains, you could see that the two match up – that the way brains vary predicts exactly the way people vary. Isn’t it a bit much to suppose that on top of always and forever*** fitting exactly into the gaps in our knowledge of physics, the Immaterial also just so happens to always exactly produce the same predictions of the distribution of minds as its absence?

Why do we come with a laundry-list of cognitive biases? Because we’re brains built to approximate Agents, not Agents. Why do we have such a limited range of possible intelligence? Same reason. Why are willpower and sense perception, which one would think pretty integral to the free choice and qualia questions, so often damaged or broken by differences in the brain? Same reason. Why does human variability look so exactly like the result of genetic variability and experiential variability? Same, simple, singular reason.

If you’re not a process implemented by a brain, what in the heavens are you? Make specific predictions. Tell me how to set up a combined Stern-Gerlach apparatus/MRI scanner or whatever so as to see when the soul pushes part of the brain around. There must be something you can do other than fiddling definitions around to make bad ideas sound cleverer.


 

* – Actually, we should expect to have brains that are the least complicated structures capable of picking up signals from the Immaterial, if we’re willing to accept things like the theory of evolution**. Which would seem to imply that the signals are, at a minimum, as complicated as the entire brain – if they were simpler, we’d have simpler brains in turn (since the brain has no function except picking up the signal).

** – It’s possible for the history of evolution to be explained by, e.g., intermittent divine intervention, but it’s generally held that mutation/natural selection (or, as I have taken to calling it, mutation plus natural selection) is the best we’ve got.

*** – Maybe if physicists tomorrow said “OH, right, duh, obviously [X]. Well that solves everything, turned out there weren’t any souls in the Final Draft” then some substance dualists would change their minds. But merely watching the limit be approached – seeing how each door to extra physics affecting the brain has closed, with Primitive Type Agent Objects firmly on the outside – has not had and will never have any effect. The notion that mere evidence can overcome an intuition is too strange.


Unthought Experiments

I am really not a fan of thought experiments of the form “this thing is never the right thing to do. In fact, it is super harmful. Whenever tried, it has caused tons of harm. For this reason, everyone has strong intuitions about how we shouldn’t do it. Also, we have strong societal taboos against doing it. Those taboos and those intuitions are entirely justified by how harmful it is. Now, imagine that it is the right thing to do, would you do it?”

I read this and was surprised by it, although it seems on further thought to be completely true. While the obvious thought is “but it’s just a thought experiment!” a careful analysis reveals that the experiment itself is quite useless: if something is the right thing to do, you do it. If something is the right thing to do despite violating a lot of obvious intuitions – well, still don’t actually do it in real life, because probably you’re overestimating how much of a special case it is – but in thought experiments, hypothesising that Action is the right thing to do removes all usefulness from the result. Of course you do it.

So what is the actual point of getting people to say “yes, if drowning kittens were right, I would do it” or the like? The most obvious explanation is that it weakens their position tremendously, because who would support someone who’d drown kittens just because it was the right thing to do? I mean, yeah, it’d be the right thing to do, but you’re still one of those kitten-drowners.

So I fully agree with disregarding certain thought experiments: those that, when addressed, tell you nothing new, while being used to maneuver you into indefensible positions.

And it is always possible to be surprised by the truth.
