Utter Seriousness

Epistemic status: trying to talk about things that actively defy being talked about. Largely pointless. Occasionally descends into nonsensical prose for no reason.

1.
A basilisk is a fearsome serpent hatched from a toad's egg, praise Kek, incubated by a cockerel. It possesses potent venom and, critically, the ability to kill those who look at it. The idea was used brilliantly in David Langford's famous short story BLIT, in the form of a deadly fractal image. A basilisk, then, is a particular type of antimeme: the kind that kills those who perceive it, thereby preventing its own perpetuation. There are others.

Post-Truth and Fake News have become the defining political issue on my mind lately, which is either pretty impressive given the circumstances or completely predictable given the zeitgeist. And indeed the world is noting how the possibility of genuine discussion crumbles as right and left retreat into worlds not merely of separate ideals but of separate facts. TUOC writes:

I bet there are a lot of people who read r/the_donald and have a vague impression that refugees committed six murders in Canada last night, a vague impression which will stack with other similarly unverified vague impressions and leave them convinced there’s an epidemic of refugee violence. I have no idea what to do about that, and it terrifies me.

As it turns out, there was a popular thread there about the true identity of the shooter. But note how none of the details are in the thread title – the memorable point will still be “uhh, terrorism’s sure rampant with all these refugees, isn’t it?” And also note this story in which the Orange Man himself joins in on the action. Now, it certainly seems like he was talking about some kind of event in Sweden on Friday 17th. But his fans quickly accepted the alternative interpretation he gave, that he was talking about a Fox News report about Sweden. And then proceeded to claim that it’s everyone else who’s just buying into a narrative. And kept the vague impression that there’s terror and crime in Sweden beyond all proportion to what was actually the case at the time of the statement (retrocausality being almost certainly impossible). Or consider this discussion which takes a look at exactly the same thing from the other political side.

James Hitchcock also weighs in:

A less-discussed innovation of modern politics is the collapse of earnestness in public discourse. Sarcastic and ironic modes of conversation have sprouted like fungi wherever political discussion occurs – in political speech, formal journalism, social media formats, and on online content aggregators such as Reddit and Tumblr. This mode of discourse provides lazy, comfortable white noise as a backdrop to political discussion, a rhetorical style that can be genuinely funny but that masks a lack of faith in one's words. Moreover, it deprecates sincerity as a value worth striving for while engaging others.

Anderson and Horvath discuss one of the purveyors of antifactualism in depth here, saying:

In the past, political messaging and propaganda battles were arms races to weaponize narrative through new mediums — waged in print, on the radio, and on TV. This new wave has brought the world something exponentially more insidious — personalized, adaptive, and ultimately addictive propaganda. Silicon Valley spent the last ten years building platforms whose natural end state is digital addiction. In 2016, Trump and his allies hijacked them.

This widely-circulated post gives another good breakdown of the phenomenon, although I don’t know if it needs to be attributed to enemy action. This article discusses the notion under the name “the big joke.”

This is simply how modern ‘media’ works. People can’t maintain a cognitive network in which they keep track of what each source is saying, which people in their less immediate social-media circles can be expected to pursue true beliefs, which of the myriad links they need to follow to learn more and when they can safely trust someone’s summary. So people end up with vague impressions, ghosts of maps.

2.
Yudkowsky wrote on thought-terminating clichés in straightforward terms. Alexander wrote about the “bingo card” as part of a larger-scale discussion. The former is the negative-sense, “thing that you stop thinking when you encounter,” the latter the positive-sense “thing to which other ideas are drawn and approximated,” but in both paradigms, a mind adds on a structure that automatically resists attempts to modify that structure.

Consider, then, this comment suggesting that commentators who "will always wrap up their counterpoints in lengthy and citation-heavy word salads designed to give an impression of intellectual honesty" are malevolent, or this alt-right meme creating the impression that arguers who acknowledge the complexities of positions are laughable. If you're imagining a bingo card with squares like "But I Have Evidence" and "Is Polite and Acts Reasonably," well. Bingo. With such a mentality becoming commonplace, discussion can become utterly impossible rather than merely "urgh, talking to $OUTGROUP is impossible"-impossible.

But then consider in juxtaposition the notion of the thought-terminating cliché. What if you put up stop-signs around the action of thinking about things in the evidence-based, polite-and-reasonable fashion? What if noticing yourself taking any foreign idea seriously were cause to shut down inquiry along the lines of noticing that you’re questioning the sacred/taboo?

The idea of doublethink goes back at least as far as the 4th century BCE, when the tenets of Buddhism were first laid down. In typical meditative procedure, the practitioner attempts to dismiss their distracting thoughts as they form, eventually becoming proficient enough to be free from onerous mental diversion, which, it is held, is the root of all dukkha (like 'suffering,' but much less so). The goal is noble enough, and the technique actually quite useful, but it reveals an important secret of the human mind: it is possible, with training and practice, to go from avoiding pursuing thoughts, to avoiding thinking them at all. This has some implications for the nature of the conscious mind which I feel have not been fully explored by the non-reductionist crowd, but that is a different discussion entirely.

(My apologies for brutally over-simplifying this practice. It is meant to be illustrative of an idea, not dismissive of a religion.)

Of course, when people hear “doublethink” they don’t think of ancient religious practice, but rather the comparatively very recent 1984. Wikipedia quotes Orwell describing it as:

To know and not to know, to be conscious of complete truthfulness while telling carefully constructed lies, to hold simultaneously two opinions which cancelled out, knowing them to be contradictory and believing in both of them, to use logic against logic, to repudiate morality while laying claim to it, to believe that democracy was impossible and that the Party was the guardian of democracy, to forget whatever it was necessary to forget, then to draw it back into memory again at the moment when it was needed, and then promptly to forget it again, and above all, to apply the same process to the process itself – that was the ultimate subtlety: consciously to induce unconsciousness, and then, once again, to become unconscious of the act of hypnosis you had just performed. Even to understand the word ‘doublethink’ involved the use of doublethink.

In Orwell’s fiction, when The Party demands doublethink, it is supposed to be demanding the impossible – an illustration of how the state is all too happy to make everyone a criminal and then selectively enforce the law against those it dislikes, as well as the particular anti-truth brand of impossible to which the Party adheres. However, the real doublethink is a simpler thing, something the brain is perfectly capable of doing – as has been known since antiquity. It is merely one more stone in a bridge to post-truth.

3.
Edit: This is by far the most contentious section, perhaps unsurprisingly. However, it’s also quite tangential – skipping ahead to the end is entirely reasonable. There’s also a rather more productive discussion in the comments!
So let’s talk about “postmodernism,” by which I mean “the thing referred to commonly as postmodernism” rather than postmodernism itself (for a good discussion of the distinction, see this thread. OP is a bit smug and wrong, but the overall discussion is good). But surely no one takes it truly seriously any more? Isn’t it just a funny game that humanities sophists use to amuse themselves? Didn’t Sokal prove that or something?

I used to joke about Virtue Mathematics, by analogy to and as a criticism of Virtue Ethics. "Mathematics is simple!" I would say. "Just stop dragging up all these crazy notions of 'proof' and 'axioms' and 'formalism' and simply accept the conjectures that the Clever Mathematician would accept! True understanding, the kind that actually matters in day-to-day life, has nothing to do with carefully-constructed theoretical quandaries, and mostly comes down to intuition, so obviously intuition is the true root of all mathematics!" This struck me as quite funny, though it's more mockery than real criticism. But are there really people who take this attitude and who can be taken seriously?

Jordan Peterson talks about “Maps of Meaning.” David Chapman talks about “Meaningness.” I am almost convinced that they are talking about the same hard-to-grasp thing. But I am also almost convinced that the thing they’re talking about is simply their own confusion.

In Peterson’s case it’s hard to directly quote him, as he has a habit of wandering off on huge tangents that will provide context for important statements – talking about zero to talk about trading to talk about Monopoly to talk about Pareto distributions to talk about Communism to talk about the USSR to talk about growing up in the 80s, in order to give the life-story context to a discussion of… well, I’m not sure, he didn’t really specify. Nonetheless we will make an effort.

He will say things like “I realised that a belief system is a set of moral guidelines; guidelines of how you should behave and how you should perceive.” This seems like word salad on the face of it, but maybe we can drain off some of the overabundant dressing and fish some tasty radish or cucumber out of the mass of soggy lettuce and bewildered mushrooms.

Well, undeniably some belief systems include moral guidelines on how you should act. That much seems, well, trivial? That is, it can't possibly be a realisation. No, the position being sought here is that all belief systems are, contain, or break down to rules about how you should perceive the world. The fallacy of grey leaps to mind. Even if this were true, it would not be even slightly useful for determining which among the many belief systems is the most appropriate to adopt in light of the goals you wish to achieve. It renders belief systems indistinguishable, claiming that since scientific belief systems also guide how you should perceive, they're not any better than any old random belief system you found in your grandfather's attic.

In fact, his whole style is described as “immunising [the audience] from a totalitarian mindset.” Sounds lovely? Think back to the cognitive lobster-pot of previous paragraphs, the bingo-card thought-replacement. What is a totalitarian mindset, according to Peterson? Well, one example would be supporting laws against hate speech, of any form. Now, we can disagree about where exactly is the best place to put the boundaries of free speech. That can be a productive discussion. But when one side is screaming that anything less than total adherence to their absolutist position makes you the same as Stalin, that discussion evaporates.

He will also say things like "[…] so when everyone believes this, it becomes true in a sense." This is referring to things like contracts, where indeed the truth is (at least partly) determined by what people's beliefs are. But in that case, he's not really saying anything. Money only has value insofar as we agree it does? Well, yes. I thought this was supposed to be important new information?

One notes a similarity to Dennett’s notion of the “deepity” – a statement that can be read as either true, but trivial; or deep, but false. “Reality is nebulous” – true if we’re talking about lack of sharp category distinctions, but then hardly a great insight, nor one that requires you to go beyond rationality. Deep if used to mean “there is no universal lawfulness,” but then entirely false. If there is one habit of the metamodernist that gets to me, it is the insistence that rationality can’t explain everything, so it must be incomplete/wrong/broken.

Chapman writes:

The exaggerated claims of ideological rationality are obviously and undeniably false, and are predictably harmful—just as with all eternalism. Yet they are so attractive—to a certain sort of person—that they are also irresistible.

Really? Because I’ve never met such a person or seen him present any examples, and yet his general tone seems to indicate that he thinks the reader is such a person. Yes, calling your readers’ approaches to cognition obviously, undeniably wrong and predictably harmful sounds like a great way to get them on your side. A++ implementation of meta-systemic pseudo-reasoning. But regardless, the reason such claims are exaggerated, obviously false etc is that no one is making them.

Essentially the problem with the meta-rationality, post-truth, prefix-word memeplex is that it explicitly demands non-thinking. Thinking is part of the wrong system, the dreaded Eternalist Ideological Rationality. Scott Alexander has discussed this kind of trap twice to my knowledge, once in a review, once in fiction – both theologically rather than meta-epistemologically, but the mechanism of the trap is the same regardless of the substance of which the teeth are made. The variant here is that whenever you think about metarationality using regular rationality, you’re already wrong by virtue even of trying – the same as trying to repent for the sake of avoiding Hell. You’re expected to already be on the “right” level, in order to understand the thoughts that justify why it’s the right level. Hence “presuppositionalism.”

Chapman does us the favour of writing directly:

Meta-systematic cognition is reasoning about, and acting on, systems from outside them, without using a system to do so.

Once you accept that something can’t be attacked by reason, or meta-reason, or anything anywhere up the chain (systems), it becomes completely immune to all criticism forever. You might say that it’s still vulnerable to criticisms made in the right way – on the right non-systematizing level – but the fact is you will very conveniently never come across any criticisms on that level. You will, weirdly, only ever encounter people trying to critique from “within the system.” Poor dears! They don’t know how helplessly stuck they are, how deeply mired in the Ideology of Rationality!

This essay isn’t meant to persuade people to come down from the tower of counterthought, of course: they are beyond the power of articulate reason to reach. They have rejected the implications of incompleteness proofs, preferring the idea that they are somehow above the chain of total regression, the Abyss of accepting that not being an anti-inductionist is okay, really, reasoning about your reasonability using your reason is the only option and that’s fine. Arguing with postmodernists is for giving yourself a headache, not for having fun or seeking truth. Likewise, the relation is mirrored: someone genuinely convinced of the merit of the object level (rather than merely operating there by default) will not be seduced by the appeal of meta-level 2deep4u-ing.

The emergence of post-rationality/post-truth/post-systemism/etc is the final triumph of what we might call Irony. The iron-clad position of ultimate immunity to everything, the ferrous dark tower that stands as the pin against which the world must be turned aside, the point of nuclear stability from which no further action can be extracted. Not merely to unthink your thoughts, not merely to meet a stop-sign and turn back, but to unthink the thoughts about unthinking, and the thoughts about that, quine the whole thing and be done with discourse forever. Ironic detachment not merely taken to a new level, but taken to a whole new realm of smug disengagement, an Alcubierre drive running on exotic logic, causally disconnected from the rest of reason and already accelerating away to some absurd infinity.

0.
This, then, is the antimemetic meme. Don’t take things too seriously. If someone tries to engage in a serious discussion, post a frog picture and move on. Don’t think too hard about it, don’t believe anything you read, don’t try to understand why other people disagree. They’re probably just signalling anyway. Definitely don’t do anything as uncool as caring. Why you mad though? Truth isn’t subjective, of course, we’re not peddling woo here, but don’t waste your time on a mere system. Your impression of reality is supposed to be a big blurry mass, isotropic and incoherent. And so on.

Douglas Adams wrote about a spaceship suffering a meteorite strike that conveniently knocked out the ship's ability to detect that it had been hit by a meteorite. Therein lies the beauty of antimemetic warfare: the first and only thing that needs to be removed is the knowledge that you're fighting. Make the thought that you might be fighting unthinkable, and everything else follows. Can one fight a war with no enemy? Under such circumstances, I don't see why not. Sam Hughes wrote that "every antimemetics war is the First Antimemetics War" – that a capable response to true antimemetic forces, even those arising purely through natural means, requires respondents who are as good on their first day as they'll ever be. For the weaker antimemes of the real world, we have perhaps a little leeway, a little ability to learn counter-techniques.

Thus my conclusion. If we cannot re-learn honesty, earnestness, dialogue on the direct object level, then we will lose a war we can’t see being fought to an enemy that doesn’t even exist. I say this with utter seriousness.


44 thoughts on "Utter Seriousness"

    1. Hi, and thanks for commenting! In similar fashion to my reply to John Nerst, I will say only that I do think quite highly of Chapman’s work at times – at other times than these, that is.

  1. Just came here from SSC. This was a fun stream-of-consciousness rant, albeit surreal – as if I've looked into a parallel universe. Do you really find much in common between alt-right-4chan-edginess anti-intellectual culture and people like Chapman (not familiar with Peterson)? The linkage seems extremely far-fetched to me.

    As far as I've understood Chapman, the thing he calls rationalist eternalism and criticizes is something far more circumscribed and limited than what most rationalist types would mean by rationality. Like a complete formal system for finding knowledge, or one completely correct map of the world that's much smaller than the territory itself (something like complete platonic realism about categories). "Reality is nebulous" might be trivial but it surely is commonly forgotten – see any debate about definitions.

    I thought his description of rationality was strawmannish when I read it but your description of him also feels strawmanned to me, so you talking about his view of rationality becomes a strawman of a strawman. Just thought that was funny…

    Also, either you are or I am confused about what metamodernism is: the things you criticize seem more indicative of vulgar postmodernism, precisely the kind of thing metamodernism wants to get away from. Chapman looks much more like a sophisticated postmodernist than a metamodernist to me.

    Anyway, lots of thoughts, better stop here. Enjoyed the essay.

    1. Thank-you for your comment! Yes, I think I’ve given a rather unfair portrait of not just Meaningness itself (and Peterson’s work too, but I think Chapman is the stronger of the two – less politicized and much better about getting to his actual points), but also my own thoughts on it. Which isn’t to say I’d disagree with myself at any point, either, just that… Well, Chapman is trying to write about something very confusing, and I don’t think his explicit rejection of postmodernism is enough to prevent people stumbling into a very slippery trap. Because once someone has accepted certain presuppositions, they all but disappear out of the “mental universe” that I’m a part of, effectively. Essentially the postrationality movement comes across as being very very concerned that the castle-in-the-sky of epistemology has no foundations, but missing that it’s not actually falling. Their concern isn’t fake, or nonsensical, or even useless, it’s just, well, not a problem I care about that much myself, but constantly feel like I’m being asked to defend myself over. Yeah, biased perspective, skewed sense of priorities, entirely my own problem. And, like, how would I ever be able to tell the difference between a metamodernism that’s genuine and a metamodernism that’s vulgar postmodernism with the paintwork re-done? They look the same from the outside; but they also look like inescapable word-game traps from the outside, so I’m sure as sugar not climbing in to try to find out.

      As to the first question, whether I really think there’s enough similarity between the alt-right and the alt-rational to justify this alt-ranting – yes, definitely. Peterson in particular is adored on e.g. /lit/; /pol/ loves their memetic leftist-bashing which immunizes them against actual political arguments they disagree with (in another universe, this post is just links to Fashy Memes, the 8th Meditation on Superweapons, and screaming). People underestimate 4chan in my experience – they’re edgelords and trolls, yes, but not just that. As you say, they have an anti-intellectual culture – there’s a whole style of debate that originated or at least found fertile soil there, in classic memes like “u mad tho” and more recent ones like “virtue signalling” or Kek (well, okay, I don’t quite understand the Kek thing. I’m getting too old for this). A debate style which goes beyond merely disregarding or rejecting actual discussion of the point into a whole bizarre realm of negation – talking about the point is stupid, and talking about why we’re not talking about the point is stupid, and so on – and the rise of highly-propagandized media appears symptomatic of a massive increase in the popularity of that debate style. And it’s that process of going beyond merely rejecting the object-level discussion that I see reflected in post-rationality to some extent, and which worries me the most.

      1. Anon. says:

        I’ll try to steelman the 4chan approach: I think the 4chan style of discourse is not the source of “rejecting the object-level discussion”. It’s the result of the realization that there generally is no serious discussion at all. Even when people appear to be talking earnestly, honestly, etc. it’s basically just a thin veneer of deception to hide the irrational memetics beneath. Systems are imposed on ideologies post-hoc in an attempt to rationalize the unrationalizable.

        Take e.g. the left defending the scientific consensus on creationism or global warming. They’ll tell you how great science is, how reality has a liberal bias, etc. Then you bring up human evolution or GMOs and realize that they never actually liked science at all, it was just another cudgel with which to beat their opponents.

        So what do you do? Any hopes of widespread “honesty” fundamentally misunderstand how humans work. On an individual level of course you can have a serious discussion with certain people. But what about the public at large? Well, now that you see how everything is bullshit signaling, nobody cares about reality but only about beating the outgroup, all popular ideologies are more-or-less random assemblages of memes rather than coherent systems…why would you sustain this ridiculous situation? Show your superiority to it by refusing to take it seriously. Strip away the facade of soberness and just throw naked, idiotic memes all over the place.

          1. I think that what you stated might be particularly true of 4chan, because it is so open. I think that smaller communities, where people are likely closer to your ingroup, can have meaningful honest discussion. The problems arise when people on the far left try to argue with the far right and all personal belief systems get absorbed into the memeplex of your respective group. Let me put it this way: the online communities like /r/thedonald or /r/enoughtrumpspam or /r/kotakuinaction or … disgust me on a deep level. The perpetual cycle of chanting about nothing while throwing insults at an enemy that may not even exist is the dead end of all communication. However, that has been far from my experience everywhere on the internet. I think this cycle happens when two memeplexes start warring with each other online, and values are traded off to gain an edge until nothing is left but hatred and intolerance. In any memeplex where there isn't this constant battle, these issues of a lack of honesty seem much less pronounced.

        2. Seconding the anon above.

          “this, then, is the antimemetic meme. Don’t take things too seriously. If someone tries to engage in a serious discussion, post a frog picture and move on.”

          Rule #2: “Never go outside the expertise of your people.” It results in confusion, fear and retreat. Feeling secure adds to the backbone of anyone.

          Rule #3: “Whenever possible, go outside the expertise of the enemy.” Look for ways to increase insecurity, anxiety and uncertainty.

          Rule #5: “Ridicule is man’s most potent weapon.” There is no defense. It’s irrational. It’s infuriating. It also works as a key pressure point to force the enemy into concessions.

          Rule #6: “A good tactic is one your people enjoy.” They’ll keep doing it without urging and come back to do more. They’re doing their thing, and will even suggest better ones.

          These tactics are being adopted not simply because they work, but because they are pretty much all that has existed at the mass-communication level for all of living memory. There was a time when subtlety of execution and careful technique disguised this fact to some degree, but even that time is probably more than a decade past, and technology has rendered both unsustainable.

          We are all radicals now.

        3. Alright, so this has been the hardest comment for me to answer. Ultimately, it is a very good illustration of the thought processes that might be behind abandoning serious discourse. You have my appreciation for it, to be sure!

          That said, I will continue to believe in the sincerity of people’s stated opinions. That may prove to be irrational, but there’s something just too hopeless about trying to answer a lack of earnestness by becoming maximally ironic.

        4. @Dirdle – “That said, I will continue to believe in the sincerity of people’s stated opinions. That may prove to be irrational, but there’s something just too hopeless about trying to answer a lack of earnestness by becoming maximally ironic.”

          Note that the above is specifically in reference to mass communication.

          It is not possible for me to have a meaningful conversation with CNN or FOX. It is probably not even possible for me to have a meaningful conversation with Rick Wilson; the asymmetries are too numerous and too vast. Wilson is the single-atom tip of a colossal ideological spear; his views are bolstered by multiple institutions and endowments and think-tanks, all with deep object-level interests in supporting their cause. When I note that this vast intellectual and financial engine nevertheless seems to be stuck doing donuts on the lawn, I have to then figure out a way to build an opposition movement from scratch, with no influence, no funding, no academic or media support, while his establishment uses its ample supply of all of these to smother us in the cradle.

          The old saw is “never pick a fight with someone who buys ink by the barrel”. Equally, never pick a fight with someone who decides which studies to publish, which facts get presented, which orgs get funded. But what if you have to pick those fights? How do you balance out those massive asymmetries? The answer is Alinsky and his apostle Pepe. You demonstrate to your opponents that they don’t hold all the cards, that you have ways of making them bleed, and consequently that they have an incentive to treat your concerns with seriousness and respect. We don’t have multimedia empires or big-time celebrities like Colbert and Stewart to provide sophisticated, high-performance ridicule for us, so we’re forced to roll our own. Fortunately, ridicule doesn’t depend on production value for its effectiveness.

          You are correct to point out that all this is profoundly toxic, but it is adaptation to a fundamentally toxic environment, not an aberration or defection. It is also, ideally, tactical, not ideological. I'm sure there are many Kekites who really have embraced peak nihilism, for whom the tribal chanting really is all there is. Likewise, there are many conservatives, progressives, liberals, socialists, etc. for whom the sum of their personal ideology is nothing more than "us vs everyone else". On the other hand, it isn't the Kekites who argue for Hate Speech Laws, brand other people "Deplorables", or come up with terms like "sea-lioning". A great many of us are entirely happy to engage in earnest, productive conversation, if that's what's on offer. Alternatively, if others are going to call us names, we'll call them names right back, and more colorful names to boot.

          Without mutual respect, debate is impossible.

      2. Rogelio Dalton says:

        “Essentially the postrationality movement comes across as being very very concerned that the castle-in-the-sky of epistemology has no foundations, but missing that it’s not actually falling.”

        With respect to Chapman specifically, I think this is wrong. The whole point of several of his articles is that categories are nebulous AND structured. The fact that structure exists in categories accounts for why epistemology has no foundations.

        “Essentially the problem with the meta-rationality, post-truth, prefix-word memeplex is that it explicitly demands non-thinking.”

        Again, with respect to Chapman, I don't think he holds this view. As John Nerst said, when Chapman refers to rationality he is referring to a specific system or mode of thought. His theory is that since no single system can be applied BY HUMAN MINDS to solve every problem (some systems are more apt for a given problem than others), there must be a higher-level meta-system that selects which system to use. That higher-level meta-system IS THINKING, but it's thinking in a human-cognition-complete way. You can't describe the system that selects systems because you don't have enough brainpower. It takes all the mental faculties to make that selection, including many that are inaccessible to self-reflective consciousness. The best description of this line of thought is in this article: https://meaningness.com/metablog/bongard-meta-rationality

        He says: “Rational thought is not a special type of mental activity. It is a diverse collection of patterns for using your ordinary pre-rational thinking processes to accomplish particular kinds of tasks, by conforming to systems. Likewise, meta-rationality is a diverse collection of patterns of ordinary thinking that accomplish particular kinds of tasks, by manipulating systems from the outside.”

        In that post he also discusses human-complete problems, but in a way that is difficult to excerpt. I think there are many valid critiques to Chapman’s philosophy, but this is not among them. I also don’t think that puts his idea or any particular idea above critique.

        To steelman your argument about post-fact you could imagine a Chapmanite saying “You are using the wrong system to analyze this situation, you should be using X system.” Then the two people can argue about which system is more appropriate, but by the nature of systems it’s unlikely that either will be provably or disprovably appropriate. In reality individuals will select the system of thought to use according to heuristics and goals, essentially intuition.

One could write an entire post analyzing what a Chapmanian disagreement would look like, but I don’t have that much time so I will provide a scenario and sketch. Imagine that two people are arguing about the death of Michael Brown in Ferguson. One person is using a social-justice-oriented system to analyze the situation, and one is using an individual-responsibility system. Postmodernism would say that these are just two equally valid interpretations by which to view the situation, since all truth is relative. The Chapmanian view is that these are two systems, and that one can be more valid than the other, but that proving which is more valid is potentially impossible (possibly except by reference to a goal, but that may be me adding to Chapman’s explicit writings). The normal person would not even realize that there are two systems at work here and would argue about how evil one or the other is. In real reality, there is no little tag in the universe that says whether one system is correct or not; all of the facts surrounding it come down to molecules changing configurations in various ways. But because categories have structure, we can decide on a useful system, and because some systems can be more applicable than others, we can influence the systems other people use through argumentation and by changing their intuition.

        I am not familiar with Jordan Peterson, and I’ve never even heard of several of the other groups you describe, so can’t say whether they are the same. I should also point out that post-rationality or meta-rationality has almost nothing to do with postmodernism, except that these authors like to write articles lambasting it as well.


        1. Rogelio Dalton says:

          I should have taken a moment to actually review this comment, but in my first paragraph it should say “accounts for why the castle is not falling.” And I didn’t end up steel-manning your argument but just went off on a tangent. Among other confusions.


      3. > I don’t think his explicit rejection of postmodernism is enough to prevent people stumbling into a very slippery trap

        Yes, there absolutely is a danger here. It’s that, until you understand the distinction, meta-rationality may look like anti-rationality; and that can be a step backward. (From Kegan’s 4 to 3, rather than step forward from 4 to 5.)

        > concerned that the castle-in-the-sky of epistemology has no foundations, but missing that it’s not actually falling

        Understanding *both* parts of this is critical! The first half is the 4->4.5 transition, which can land you in nihilism. (That’s what happened to the best of pomo, mostly, unfortunately.) The second half begins the 4.5->5 transition, in which you recover an ungrounded and relativized rationality.

        > Chapman is trying to write about something *very* confusing

        Alas, it’s quite a large topic, and I haven’t managed to explain most of it, and probably my explanatory skills are less than ideal. So, yeah, I’m afraid it is very confusing at this point. I hope to make it less so, gradually, by explaining more of the story.

        The Bongard post specifically explains what I mean by “metarationality” in terms that ought to make sense to you. It should stand alone, as understandable without having read anything else I’ve written.


      4. I can’t say more than a few words or I’ll go on for too long – but this is wrong:

> they have an anti-intellectual culture

        Doing all your political debates on /pol/ would be crazy – and of course, individual boards on each instance of futaba go through cycles of flourishing and decay – but if you think anonymous short-format relay-style debate is anti-intellectual, try to have a good think about what might be valuable about it.

        There are various limits on any debate format. Lack of persisting identities for the parties to the debate causes the debate to fall short of the rational ideal of a Socratic symposium, of course. But the presence of persistent identities causes serious problems as well….

> there’s a whole style of debate that originated or at least found fertile soil there, in classic memes like “u mad tho” and more recent ones like “virtue signalling” or Kek…

A separate issue is whether your aversion to the “style” of debate which you associate with certain meme-phrases has something to do with the way the argument is structured, or instead only with your (un)familiarity with the lingo, which makes it seem irritating and ugly. The latter is a very, very old phenomenon, but not intellectually interesting (w.r.t. differences of style between different intellectual subcultures, I mean; the functions of shibboleths are very interesting in their own right).


  2. > If we cannot re-learn honesty, earnestness, dialogue on the direct object level, then we will lose a war we can’t see being fought to an enemy that doesn’t even exist. I say this with utter seriousness.

    We’re in violent agreement about this!

    I am deeply concerned about the same issues you are. I wrote several blog posts about that last year. I won’t link them because the spam filter might not like it. “Tribal, systematic, and fluid political understanding” might be the most directly relevant; “The Court of Values and the Bureau of Boringness” is a light-hearted proposal for how to deal with it (which might be more serious than it pretends).

    The meta-rational view *requires* and *respects* rationality. It’s about what to do when the specific rational system you are using isn’t working. Insisting that one such system provides the Ultimate Truth means persisting in applying a tool after its repeated failures are obvious.

    We need that for just the reason you point out—that in the “post-truth era,” our systems for debunking malicious nonsense aren’t working as well as they used to. Saying “this is untrue” and “that argument makes no sense” has lost a lot of its power. (Due in part to pomo bs, as you point out.)

    (I haven’t read Jordan Peterson. From brief skims, I think our views are fairly different, but I don’t know enough to be sure.)


Wow, thank you for coming here to comment! Yes, it was kind of inevitable that we’d agree on the conclusion – if nothing else because I retreated hard from any really decisive position, but also because you’re ultimately a reasonable human being =)

      I think my problem here is that no one (no one of my acquaintance, at least) is claiming that rationality is the one system to reach the Ultimate Truth, but rather the more circumspect claim that it’s the system they’re going to carry on using regardless of the apparent incompletenesses. The castle-in-the-sky analogy remains my favorite: of course there are problems you can’t solve by building more towers or sprucing up the crenellations, like why the heck is the castle not falling down, but the only use I ever see questions about the foundations put to, is discouraging tower-building. Yes, we’re aware that the towers won’t ever reach the ground. And then people say, well, if you don’t join us in digging, you’ll never reach the ground, and without that, what’s the point? Do you not care about the ground? But there is another saying: “if you’re gonna dig, dig to the heavens” – the ground is nice, but we’re going elsewhere.

      Essentially – to draw this back to the main point – it’s very hard to express statements of positive caring in the face of certain epistemologies or debate styles. And one of those styles is the “but first, meta-levels” style. It’s really difficult to say “I care about getting the right answers for this problem in science or other rationalist fields” when it’s trivially met with “that’s why you need to shut up and listen to the philosophers talk about the basis of knowledge,” which is then mercilessly followed with an explanation of how there is no answer. This feels very self-defeating, but it’s hard to explain why exactly it’s wrong. Granted, this comes from “vulgar postmodernists” much more than yourself. In fact, my understanding is that choosing a system which gets the answer regardless of whether using said system is “allowed” under some set of rules, is pretty close to what you’re trying to explain?

      The best explanation for why I’d say that and still disagree would probably be the anti-inductive paradigm. You could hop into an anti-inductive system, and then… well, then anti-induction would seem perfectly justified! After all, it’s never worked before! But from outside that system – that is, from the inductive system that comprises all realistic human cognition – that would be wrong. Just because the set of paradigms is symmetric, doesn’t mean I have to give equal consideration to both sides. While it is indeed impossible to prove which one is the right one, it’s easy to choose. And for all but the most contentious cases, it’s still pretty easy.

      Speaking purely as a third party, I suspect Peterson’s disagreement with you may be only surface-deep – where you say stances, he says narratives, and the pedagogy changes accordingly, but the usage is the same: stances/narratives are something “above” truth-seeking systems, which guides which systems you should use. A deeper disagreement might be that Peterson seems to frame the question as a search for the right narrative, whereas I’ve generally read your position as being more about building an effective library of stances, but functionally they’d work very similarly – like the difference between having a good package manager, and having a system with the apps you want already installed. I don’t think my understanding of either construct is good enough to be sure, though.


      1. > no one is claiming that rationality is the one system to reach the Ultimate Truth

        I don’t think this is a strawman. I think it’s common in rationalist circles.

        I’m puzzled about why we disagree about this empirical claim. Let me point to two species of this-system-explains-everything rationalists, as examples, and maybe that will help.

        First, there are people who claim that probability theory (or decision theory, its obvious extension) is a complete theory of rationality (or even a complete theory of epistemology). I pointed out that this is *mathematically* false in “Probability Theory Does Not Extend Logic.” Yet, as recently as two days ago, I read in a rationalist forum someone saying “My opinion is that Chapman is wrong, and probability theory does extend logic.”

        This is like saying “My opinion is that Chapman is wrong, and the square root of two is rational. His claim that it is an ‘irrational number’ is nonsense. These ‘irrational numbers’ make no sense to me, and I don’t see why we need them. After all, between any two rational numbers, no matter how close they are, there’s an *infinite* number of other rational numbers! There’s no place the supposed ‘irrational numbers’ could fit! This is bullshit. Also, I’ve been doing arithmetic just fine all my life, and I’ve never seen any need for ‘irrational numbers’ at all. You can add, subtract, multiply, and divide, and do you ever get one of these things? NO! they’re spooks. Chapman’s whole ‘irrationality’ nonsense is a bunch of mystical mumbo jumbo, and nobody should listen to him.”

        Dude… you don’t get to “have an opinion” about irrational numbers. THIS. IS. MATHEMATICS!

        You also don’t get to “have an opinion” about nested logical quantifiers (which are what probability theory can’t do). This isn’t something where there’s any ambiguity or room for argument. It’s totally standard undergraduate-level math.

        I explained it in great patient detail, and even gave specific, concrete examples of rational reasoning that you can’t do with probability theory. No one has given any argument whatsoever that this is wrong, but they “have opinions.”

        So, this is a case that is entirely cut-and-dried. You don’t need to go beyond systematic rationality to dismiss this particular claim that a particular rational system is complete.

        Let’s look at a case where you have to go meta-systematic. Some rationalists are committed to anarcho-capitalism, and achieve “epistemic closure” within it. It’s an internally consistent, complete system, which allows them to answer all objections, evidence, and alternatives with formally elegant and rational arguments. On the other hand, it’s obviously nonsense. (I hope you agree—I’ve no idea what your own political ideas are.) So you can’t argue with it at the systematic level. Instead, you have to exercise “meta-systematic judgement” and say “no, this is just silly; the world doesn’t work that way, at all.”

        This relates to:

        > Once you accept that something can’t be attacked by reason, or meta-reason, or anything anywhere up the chain (systems), it becomes completely immune to all criticism forever. You might say that it’s still vulnerable to criticisms made in the right way – on the right non-systematizing level – but the fact is you will very conveniently never come across any criticisms on that level.

        I have a post on “meta-systematic judgement” that gives many specific examples of criticizing claims at the meta-systematic level. It looks at “George Washington was the first US President.” That’s obviously true. However, I construct 10 arguments against it, each of which mis-applies some rational system *that is perfectly correct in its own terms*, but that is the wrong system to apply in this case. None of the objections can be defeated with any sort of knock-down rationalist argument. They aren’t illogical, and no available evidence rules them out. They’re just, sort of, silly. Obviously wrong. You can say, in each case, roughly why they are wrong, but if someone wanted to keep arguing for them, all you could do is say “no, that’s stupid” and walk away.

        > my understanding is that choosing a system which gets the answer regardless of whether using said system is “allowed” under some set of rules, is pretty close to what you’re trying to explain?

        Yeah. So, I’m not doing the thing you apparently originally took me as doing. At all. 🙂

        > You could hop into an anti-inductive system, and then… well, then anti-induction would seem perfectly justified!

        That’s just, sort of, silly. Obviously wrong. If someone wanted to keep arguing for that, all you could do is say “no, that’s stupid” and walk away. 🙂


> I pointed out that this is *mathematically* false in “Probability Theory Does Not Extend Logic.” Yet, as recently as two days ago, I read in a rationalist forum someone saying “My opinion is that Chapman is wrong, and probability theory does extend logic.”

          Was that someone me? Because I certainly didn’t refer to this as an “opinion”!

          But yes, I’ve looked at your argument, and it doesn’t make sense to me, and I think it’s mistaken. I’ll repeat here the same comment I’ve made in the past:

          So I looked at Chapman’s “Probability theory does not extend logic” and some things aren’t making sense. He claims that probability theory does extend propositional logic, but not predicate logic.

          But if we assume a countable universe, probability will work just as well with universals and existentials as it will with conjunctions and disjunctions. Even without that assumption, well, a universal is essentially an infinite conjunction, and an existential statement is essentially an infinite disjunction. It would be strange that this case should fail.

          His more specific example is: Say, for some x, we gain evidence for “There exist distinct y and y’ with R(x,y)”, and update its probability accordingly; how should we update our probability for “For all x, there exists a unique y with R(x,y)”? Probability theory doesn’t say, he says. But OK — let’s take this to a finite universe with known elements. Now all those universals and existentials can be rewritten as finite conjunctions and disjunctions. And probability theory does handle this case?

I mean… I don’t think it does. If you have events A and B and you learn C, well, you update P(A) to P(A|C), and you update P(A∩B) to P(A∩B|C)… but the magnitude of the first update doesn’t determine the magnitude of the second. Why should it when the conjunction becomes infinite? I think that Chapman’s claim about a way in which probability theory does not extend predicate logic is equally a claim about a way in which it does not extend propositional logic. As best I can tell, it extends both equally well.
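The point about conjunction updates can be made concrete with a small sketch. The two distributions below are my own toy construction (not taken from either party’s posts): they agree on P(A), P(A∩B), and on how P(A) updates under evidence C, yet disagree on how P(A∩B) updates.

```python
from fractions import Fraction as F

# Worlds are triples (a, b, c) of truth values; a distribution maps world -> prob.
# Both distributions agree on P(A), P(A and B), and P(A|C), yet disagree on
# P(A and B | C): knowing how P(A) moved under evidence C does not tell you
# how P(A and B) moves.
d1 = {(1, 1, 1): F(2, 10), (1, 0, 1): F(2, 10), (0, 1, 1): F(1, 10),
      (0, 0, 1): F(1, 10), (1, 1, 0): F(1, 10), (1, 0, 0): F(1, 10),
      (0, 0, 0): F(2, 10)}
d2 = {(1, 1, 1): F(3, 10), (1, 0, 1): F(1, 10), (0, 1, 1): F(1, 10),
      (0, 0, 1): F(1, 10), (1, 0, 0): F(2, 10), (0, 0, 0): F(2, 10)}

def p(dist, event):
    # total probability of the worlds where the event holds
    return sum(pr for w, pr in dist.items() if event(w))

def cond(dist, event, given):
    # conditional probability P(event | given)
    return p(dist, lambda w: event(w) and given(w)) / p(dist, given)

A  = lambda w: w[0] == 1            # "A is true"
AB = lambda w: w[0] == 1 and w[1] == 1   # "A and B are true"
C  = lambda w: w[2] == 1            # the evidence

for d in (d1, d2):
    print(p(d, A), p(d, AB), cond(d, A, C), cond(d, AB, C))
```

Both distributions print the same first three values; only the last, P(A∩B|C), differs.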

          …OK, I’m guessing you weren’t referring to me, since I gave an actual argument, and you say this person didn’t, just stated their opinion. (Which I will agree is dumb.)

          (I also think what you’re arguing against is a strawman; to me it looks like you’re making this claim based on conflating two different meanings of “system”. But that’s a different argument…)


        2. Ugh, I tried to reply here and it seems WordPress spam-filtered it; would you mind fishing it out of spam? Thank you!

          (WordPress has been spam-filtering a ton of my comments across the internet lately, I don’t know why…)


> Dude… you don’t get to “have an opinion” about irrational numbers. THIS. IS. MATHEMATICS!

          Once more, we strongly agree.

          (Aside: I’ve been finding myself drifting closer to Ultrafinitism over time. What would you say, I have to wonder, if I said I do think irrationals are a bunch of spooks? Well, you could use the number 16 instead. That’s very probably a meaningful number. So no need to drop the case.)

          However, I don’t think the problem there is someone too stuck in a dogmatic system! After all, the problem is that they’re not accepting the verdict of laws of logic.

> Some rationalists are committed to anarcho-capitalism, and achieve “epistemic closure” within it. It’s an internally consistent, complete system, which allows them to answer all objections, evidence, and alternatives with formally elegant and rational arguments. On the other hand, it’s obviously nonsense. (I hope you agree—I’ve no idea what your own political ideas are.)

          Ha ha! Yes, we agree again. This time I even had the wit to say so half a year ago, so as not to be accused of having a position that evaporates when you try to disagree with it. That said, I will concede the empirical point. There are in fact some people who take having a set of rules far too far, make it the only important factor, and that’s a bad idea.

          Is that the same as saying rationality is the one system for reaching Ultimate Truth? We might be meaning very different things by that. At the very least, no one seems to talk about finding Capital Letters Cosmic Truths So Blindingly True That Even Non-Agentic Objects Would Believe Them, that kind of Ultimate Truther. When they say “rationality is the one system for reaching ultimate truth,” I find they mean “ultimate” as “that to which no successor exists,” not “that to which no successor can exist.”

          To consider an example, the person claiming that probability theory can extend predicate logic might mean something like, “well, I’m a creature of physics myself, I can’t – I mean actually can’t – form belief patterns that are truly outside of mathematics, rather than merely corresponding to words like ‘outside of mathematics.’ And using probabilities to describe beliefs in some way must be right – it’s the most useful thing ever. So whatever the best decision theory is, it really seems like it should look kind of like an extension of predicate logic using probability theory.” okay, that’s not very rigorous, and obviously asymmetric charitability. Hmm.


        4. Hi, Sniffoy,

          It wasn’t you I was referring to. I haven’t seen the argument you present before.

          So, what propositional logic especially doesn’t let you do is reason with generalizations about more than one object. Coulomb’s Law, for instance: given any two distinct objects, the electrostatic force between them is the product of their charges divided by the square of the distance between them. It’s just a fact that this can’t be stated in propositional logic. It’s a matter of definition. There isn’t any room for argument about it.
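For reference, the generalization in question needs quantified variables ranging over pairs of objects; in predicate logic it looks something like (writing F for the force, q for charge, d for distance):

```latex
\forall x \, \forall y \; \left( x \neq y \;\rightarrow\; F(x,y) = k \, \frac{q(x)\,q(y)}{d(x,y)^2} \right)
```

Propositional logic has no variables or quantifiers, so each pair of objects would need its own separate atomic proposition.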

Now, there are two relevant follow-on questions: do you *need* to be able to reason with generalizations, or can you get the equivalent effect without doing so? And, does *rationality require* generalizations, even if you could do without them?

          So, let’s say you are God, and therefore omniscient. You *just know* the electrostatic force between every pair of objects (ranging from quarks to teapots to galaxies). In that case, you don’t need Coulomb’s law. Indeed, you simply know the answer to every question, and you never need to do any reasoning at all. Generalization is useless for you.

          So, if you are God, you don’t need predicate calculus. (Actually, if you are God, you don’t need propositional calculus either. If it happens that p and q are both true, you know that, and separately you also just know that p-and-q is true, and you don’t have to do propositional logic to get the truth value of p-and-q from p and q.)

          Suppose you are not God. You may still be able to assign a probability to every possible fact about some domain. Then you can compute the probability of conjunctions—including ones that cover every object in the domain. For example, suppose that for each individual man and woman on earth, you have the probability that the man is the son of that woman. Then you can compute the probability of a giant conjunction of disjunctions that necessarily has the same probability as “every man is the son of some woman.”
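The conjunction-of-disjunctions computation described here can be sketched in a few lines — with the simplifying assumption (mine, not the comment’s) that the individual mother/son facts are independent; in general you would need the full joint distribution.

```python
# Toy domain: 3 men, 2 women; p[m][w] = P(man m is the son of woman w).
# Under an independence assumption, the probability of the giant
# conjunction-of-disjunctions "every man is the son of some woman" is
#   prod over men of (1 - prod over women of (1 - p[m][w]))
p = [[0.9, 0.05],
     [0.1, 0.85],
     [0.5, 0.5]]

def p_all_men_have_mother(p):
    prob = 1.0
    for row in p:                 # conjunction over men
        p_no_mother = 1.0
        for q in row:             # disjunction over women
            p_no_mother *= (1.0 - q)   # P(no woman is his mother)
        prob *= (1.0 - p_no_mother)
    return prob

print(p_all_men_have_mother(p))
```

This computes the probability of the generalization from the individual facts; it is still only a number attached to a giant formula, not something you can *reason with* in Chapman’s sense.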

          However, you still cannot *reason with* the generalization; that is a different operation. If you come across one man you missed, you are out of luck. This happens.

Also, you may have high confidence in “every man is the son of some woman” without having a list of all of them.

          Also, it’s a lot more efficient to reason with one generalization than with the roughly 10^19 individual facts about human mother/son relationships, and/or the size-10^19 formula that includes all of them.

          It is rational to summarize these as Coulomb’s Law, and as “every man is the son of some woman.” So, rationality requires the ability to generalize about multiple objects, which probability theory can’t do—and predicate calculus can.


        5. OK, finally getting back to this…

          So, hm, you seem to be looking at logic pretty differently from me. I want to break this down a bit.

But, before anything else — do you agree that your original claim makes no sense? That what you pointed out as a failure of probability theory to extend predicate logic is equally a failure of probability theory to extend propositional logic?

          Anyway, first part of this, let me just lay this out as I see it. So, yes, probability theory is probability theory. It works with sets (of possible worlds), not statements. In this you can say that it assumes logical omniscience — or more than that, mathematical omniscience even about things that can’t be proven! It deals with how to update based on new information, in the information theory sense, not new insight, not better use of your existing information. In situations where information rather than insight is the limiting factor, it’s pretty useful.

          Basically, your “if you are God” condition is underspecified. Of course to apply probability theory as it is one has to have godlike powers. But godlike powers are not a uniform thing. If you are logically omniscient, and can store an infinite amount of information, are not affected by the outside universe except via a designated input channel, but don’t know what is going to come in via that channel — well, then, Solomonoff induction is for you. If you have a different set of godlike powers, things will be different!

          This is part of the whole LW project, really — starting with probability theory as an idealization for agents with impossible powers, and then removing the impossibilities one by one, in the hopes that this will eventually result in something that can be used by an actually buildable AI. That sort of thing has moved off of LW and into MIRI these days, of course.

So — does probability theory extend propositional logic, or predicate logic without nested quantifiers, or predicate logic? Well — when you just phrase the question that way, without the particular argument you made earlier… I guess actually I basically agree that probability theory only extends predicate logic without nested quantifiers. Because probability theory deals with sets, which correspond essentially to predicates of one variable (that variable being, which world are we in). So I agree that the claim that “probability theory extends logic” is indeed misleading.

But when you ignore slogans like that and get into questions of how it works — well, this is where it looks like you’re strawmanning people, because limitations like “probability assumes you are logically omniscient” and “probability assumes you have infinite resources to assign probabilities to different possibilities about the entire history of the universe” aren’t little-known, they’re mentioned all the time. They’re why MIRI went on to come up with logical induction and is trying to do more in that vein. I just don’t think people are making the sort of mistake you’re attributing to them.

          Now, second part of this — time for some more point-by-point replies.

> It’s just a fact that this can’t be stated in propositional logic. It’s a matter of definition. There isn’t any room for argument about it.

          (This one is a nitpick rather than a serious point, you may want to skip it.) I can find room for argument about it! 😛 Not as in I’m claiming “yes you can” (obviously not), but rather as in I’m wondering, does that question even make sense?

          As in: I’m used to only ever seeing “propositional logic” with propositional variables in, not actual sentences, so I guess referring to the system of deduction, not to the set of things you can state. But I guess you could call a sentence “stateable in propositional logic” if it has no quantifiers? It’d have to be in a signature that has some constant symbols, then, or you won’t be able to make any sentences at all!

> So, let’s say you are God, and therefore omniscient. You *just know* the electrostatic force between every pair of objects (ranging from quarks to teapots to galaxies). In that case, you don’t need Coulomb’s law. Indeed, you simply know the answer to every question, and you never need to do any reasoning at all. Generalization is useless for you.
>
> So, if you are God, you don’t need predicate calculus. (Actually, if you are God, you don’t need propositional calculus either. If it happens that p and q are both true, you know that, and separately you also just know that p-and-q is true, and you don’t have to do propositional logic to get the truth value of p-and-q from p and q.)

          OK so this is where the “we are coming at this from pretty different approaches” first appears.

I would say, that in the standard mathematical point of view as I understand it, that giant conjunction is what Coulomb’s law is (well, in spirit). That’s what a universal is, an infinite conjunction. I mean, OK, not formally, you can’t actually reason with infinite conjunctions; but in spirit that’s what it is. (Of course, it’s important that it’s the electrostatic force between every pair of objects, across the entire history of the universe, not just at that instant. Or rather, this will be important later.) Rather than saying that “You don’t need Coulomb’s law”, I’d say, “You know Coulomb’s law, if and only if Coulomb’s law holds”.

          That’s assuming a God’s-eye view, of course, but we do that all the time in math. And let’s note what sort of God it is — one that doesn’t need to do reasoning. But probability theory isn’t really about doing reasoning but about incorporating totally unknown information. (More agreement, I guess, that it doesn’t really extend logic.)

> Suppose you are not God. You may still be able to assign a probability to every possible fact about some domain. Then you can compute the probability of conjunctions—including ones that cover every object in the domain. For example, suppose that for each individual man and woman on earth, you have the probability that the man is the son of that woman. Then you can compute the probability of a giant conjunction of disjunctions that necessarily has the same probability as “every man is the son of some woman.”
>
> However, you still cannot *reason with* the generalization; that is a different operation. If you come across one man you missed, you are out of luck. This happens.

Well, if you’re the sort of God you seem to be talking about, you don’t need to reason with it, as you’ve pointed out!

But OK, let’s suppose you’re a somewhat less powerful god. You’re omniscient about the present, but you can’t see the future. Then it seems to me that there’s no problem here? Like, you don’t need to do any thinking regarding “every (present) man is the son of some woman”, but you still need to assign some probability to “every (ever-existing) man is the son of some woman”. And you have some probability distribution over all possible future histories of the world (because you’re also not omniscient about the laws of physics), and in some it is satisfied and in some it isn’t, and at each time step you rule out the ones where it isn’t true in the present and renormalize probabilities, which affects the probability that it will always be true… like, yup, assuming such omniscience, probability theory seems to work fine here. I’m not seeing the problem.
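The rule-out-and-renormalize process described here can be sketched over a toy four-step timeline (everything below — the uniform prior, the history length, the always-true observations — is an illustrative assumption of mine):

```python
from itertools import product

# A "history" is a tuple of booleans: whether the generalization holds
# at each of T time steps. Start with a prior over all histories; at each
# step observe the present, discard inconsistent histories, renormalize,
# and track P(the generalization holds at every step).
T = 4
prior = {h: 1.0 / 2**T for h in product([True, False], repeat=T)}  # uniform prior

def p_always(dist):
    # probability mass on histories where the generalization always holds
    return sum(pr for h, pr in dist.items() if all(h))

dist = dict(prior)
print(round(p_always(dist), 4))       # prior probability it always holds
for t in range(T):
    observed = True                   # suppose it holds at each observed step
    dist = {h: pr for h, pr in dist.items() if h[t] == observed}
    z = sum(dist.values())
    dist = {h: pr / z for h, pr in dist.items()}
    print(round(p_always(dist), 4))   # climbs toward 1 as evidence accumulates
```

With a uniform prior the probability of “always true” starts at 1/16 and doubles with each confirming observation — ordinary conditioning, exactly as the comment says.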

> Also, it’s a lot more efficient to reason with one generalization than with the roughly 10^19 individual facts about human mother/son relationships, and/or the size-10^19 formula that includes all of them.

          Right, once computational limitations are introduced, this sort of God’s-eye probability theory no longer suffices by itself. But it’s a good starting point.

          Third part: Eh I think I’ll skip the third part for now, I’m not so convinced it’s relevant. Hoping this has clarified things somewhat?


        6. Hi Sniffoy,

          Thanks, I think I now understand how we are talking past each other. (“Double crux”?)

> you seem to be looking at logic pretty differently from me

          Also probability theory. I am looking at both as precisely-defined mathematical systems, period. If I understand you correctly, it seems that you are looking at them as philosophical world views. And these world views have only metaphorical relationships with these mathematical systems.

          Considered as mathematics, much of what you say in your most recent comment is unambiguously false, or not-even-false. You seem to acknowledge this in statements such as “I mean, OK, not formally, you can’t actually reason with infinite conjunctions; but in spirit that’s what it is.”

          So my page “Probability theory does not extend logic” explains these two systems as mathematics. In particular, it explains how Cox’s Theorem has been misunderstood.

          If I understand correctly, you are taking “probability theory” to mean something like “a world view in which all knowledge is considered uncertain, to varying degrees.” I agree strongly that it is important that all knowledge must be considered uncertain, to varying degrees. (There’s a page titled “The promise of certainty” in the Meaningness book, which explains why.) So I share at least that much of your world view.

          Unfortunately, “probability theory,” in standard usage, means something different. Namely, probability theory is a particular system of mathematics. That branch of mathematics does assign degrees of uncertainty to propositions. It is not, however, a philosophical world view. It is also not, by itself, an adequate account of rationality. That is because—among other things—it does not have as much power as predicate calculus, which any account of rationality needs to include.

          As mathematics, logic and probability theory don’t operate “in spirit.” They are precisely defined, so they do exactly what they do, and no more. As philosophy, one might make metaphorical extensions, or speak about what they do in spirit. (I think it would be best to be explicit about this when you do it.)

          Does it seem like we understand each other now?


        7. Sorry, yes, I should have been more careful with my language — I was using “probability theory” not to mean probability theory in general, but rather the God’s-eye-Bayesian view described above. To wit: You have a set (let’s call it X) of possible worlds (a “world” consists of the entire history of the universe); you have a probability measure on this set (and, lurking in the background, a sigma-algebra of which sets of possible worlds are measurable, I guess); “statements” correspond to subsets of X; your observations limit you to some subset E of X; you look at P(A|E) for various A.

          Because this is basically the sort of probability theory that LW discusses. Again, this is not a worldview for actual direct human use; it’s one for imaginary mathematically omniscient agents. For ordinary humans, it remains quite informative, though.

          If we want to talk purely about the mathematical systems, then yes, logic and probability have little to do with one another. The same is true philosophically; logic deals with “insight”, probability with “information”, assuming that the problem of insight has already been solved (a, uh, strong assumption). One could say that probability “extends one-variable predicate logic” in that rather than merely saying “this statement is true for all/some/no objects” (i.e., this set is all of/some of/none of X, as probability theory directly deals in sets, not statements), it allows us to assign a probability to the set. But as you say it doesn’t really extend predicate logic more generally. (I continue to claim, however, that your original argument for this makes no sense, and could equally well be taken to be a way in which it does not extend propositional logic.)

          (I, of course, would say that probability should be founded in Savage’s theorem, not Cox’s theorem, but… 🙂 )

          When I say a universal is an infinite conjunction “in spirit”… it really does act exactly like an infinite conjunction in a number of contexts. Actually doing reasoning, where your axioms don’t completely describe the system you care about (because, say, that might be impossible), is indeed not one of those contexts. The analogue of De Morgan’s laws holds, etc. Worth noting — “first-order logic” really has two meanings. Sometimes it’s used just to refer to the language of what can be expressed, without the question of what proves what. I believe that is the technical meaning of the term, with the question of how-do-you-prove-things falling under various deductive systems, which all turn out to be equivalent to one another, so we normally just call it all “first-order logic” without worrying about the distinction.

          But note that a lot of logic things can be discussed without thinking about what proves what at all. Like, say, in the language of graphs, is such-and-such a set of graphs axiomatizable? The completeness theorem says that this is equivalent to a what-proves-what question, of course, but it can be stated without it. And in this point of view, where you ignore such things, a universal really is just like an infinite conjunction! But obviously that is not every context. Like if your theory is Peano arithmetic, but what you really care about is the “actual” natural numbers, then I guess there’s quite a difference between the two…
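The De Morgan analogue mentioned above can be written out explicitly (this is just standard first-order notation, nothing specific to this thread):

```latex
% Negation turns a universal (an "infinite conjunction" over the
% domain D) into an existential (an "infinite disjunction"),
% mirroring De Morgan's laws:
\neg \forall x\, P(x) \iff \exists x\, \neg P(x),
\qquad\text{informally}\quad
\neg\Big(\bigwedge_{a \in D} P(a)\Big) \iff \bigvee_{a \in D} \neg P(a).
```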

          So yes — different things when actually doing reasoning. Not really different in e.g. the “Coulomb’s law” example described above, where we’re not doing any reasoning because we’re mathematically omniscient (and, doing probability theory, are once again really working with sets rather than statements).

          Anyway, I dunno. Not sure how relevant most of this is. At the moment I can’t even find where Eliezer made the statement that started all this.

          But my overall reaction to “Probability theory does not allow you to better do what logic does” is basically: yes, of course, they’re for different things, and nobody was seriously claiming otherwise, even if Eliezer or someone may have made a statement that sounded like that at some point.


        8. @Sniffnoy: I can’t seem to figure out why your comments keep needing to be approved – it’s supposed to be approve-once-per-person, and that’s definitely working for e.g. David. It doesn’t seem to be the IP address or email changing, and you’re not exceeding the links limit. Apologies, it’s probably quite annoying for you.


        9. It’s not just you — WordPress in general seems to be convinced lately that I’m a spammer. I’ve had this problem all over. Thanks for fishing out my comments though!


      2. Alex says:

        “I think my problem here is that no one (no one of my acquaintance, at least) is claiming that rationality is the one system to reach the Ultimate Truth, but rather the more circumspect claim that it’s the system they’re going to carry on using regardless of the apparent incompletenesses.”

        How do rationalists communicate with non-rationalists? The two major options seem to be: (at least briefly) stop using rationality as a system, so as to understand the non-rationalists’ statements within the system in which they were uttered; or carry on using rationality to judge non-rationalist statements within the system of rationality.

        The former seems to acknowledge the need for a meta-systems perspective, the latter seems indistinguishable from treating rationality as a road to Ultimate Truth. Am I missing a third option?


  3. “Once you accept that something can’t be attacked by reason, or meta-reason, or anything anywhere up the chain (systems), it becomes completely immune to all criticism forever. You might say that it’s still vulnerable to criticisms made in the right way – on the right non-systematizing level – but the fact is you will very conveniently never come across any criticisms on that level.”

    In this case, you want to criticize something — we’ll call it “S” — and you perceive that mean old Chapman and the post rationalists aren’t letting you, because you’re not criticizing it at “the right level”. Let’s call your criticism S*.

    S* is no doubt some kind of propositional argument. Let’s say that I decide to defend S by being skeptical about S*. I demand you justify S*.

    The Münchhausen trilemma suggests that you have only two ways to proceed: you can declare S* to be axiomatically true, or you can justify S* on the basis of other premises. In the latter case, I’m bound to challenge the premises on which S* depends, and you are again faced with two choices: either continue to construct new justifications in an infinite regress, or eventually bottom out in a circular justification for some premise of S* (or revert back to axiomatic reasoning). So, in summary, three choices: circular justifications, infinite regresses, or axioms.

    Pretty much everyone who isn’t W.V.O. Quine has picked axioms. The thing with axioms is that you either agree with them or you don’t, and if you don’t, then they have no power to convince you of anything.

    So if Alice is trying to convince Bob of S* and Bob is skeptical and demands a justification, and Alice’s justification bottoms out in axioms, then we’re stuck arguing about whether Alice’s axioms (A) or Bob’s axioms (B) are better. How do you even do this? You have to reverse the process that Alice just went through: you have to talk about the conclusions implied by the systems A and B and ask which set of conclusions is more consistent with observed reality.

    But what if Alice and Bob disagree about observed reality in such a way that they disagree about whether A or B is a better description of reality? OK, maybe Alice and Bob can work through their disagreement by agreeing to take some new measurements and agreeing that whichever system is indicated by those measurements is the official winner.

    Two important points here:
    1. Suppose Bob’s theories come out ahead by this test. There’s nothing stopping Alice from saying “Wait, I think the thermometer is miscalibrated”, which prevents the resolution of their epistemological dispute. And it seems like Alice is just being difficult at this point, but there’s always the possibility that she’s actually right! Maybe the thermometer really is miscalibrated!
    2. Instead of the resolution of the empirical debate depending on some timeless system of scientific logic, or on some scientific authority beyond the ken of either Alice or Bob, the resolution depends on an ad hoc scientific “system” created specifically for this purpose by Alice and Bob. That system doesn’t justify any proposition other than the one Alice and Bob agreed it would justify, and even then it justifies it only for those two people.

    This is all just to say that Chapman is obviously correct. There is no bedrock. There is no Court of Rationality to which you can appeal your epistemological disputes. There is no single methodology or system that will always give the right answer; even if there was, there is no moral way to force people to accept either the answer or the methodology.

    Essentially the problem with the meta-rationality, post-truth, prefix-word memeplex is that it explicitly demands non-thinking. Thinking is part of the wrong system, the dreaded Eternalist Ideological Rationality.

    First of all, post rationality doesn’t demand non-thinking. It absolutely allows for thinking. (Hence “post”, not “anti”.) Second of all, it seems this way to you because you seem to have some sense that there is only one correct way of thinking, and that any other form of thinking is really “not thinking”. Post rationality encourages many different kinds of thinking, and because you only recognize one of those as thinking, and because it’s explicitly de-emphasized in post-rationalism relative to all the other kinds of thinking that you don’t consider to be thinking, it seems to be non-thinking. But that’s just because you’re the drunk groping around under the streetlight while post-rationalists are using infrared, millimeter-wave radar, sonar, and a sense of smell.

    This essay isn’t meant to persuade people to come down from the tower of counterthought, of course: they are beyond the power of articulate reason to reach.

    AND THEY SHALL BE BARRED FROM THE TEMPLE! Seriously, this is exactly what the fellow I’ve never heard of was talking about with respect to psychological totalitarianism. “They think differently from me and they reach different conclusions by doing so, and so therefore they are definitely wrong and not only that — they are beyond the power of [sacred] reason to reach!” This is exactly the kind of rationalist eternalism Chapman is criticizing.

    Once you accept that something can’t be attacked by reason, or meta-reason, or anything anywhere up the chain (systems), it becomes completely immune to all criticism forever. You might say that it’s still vulnerable to criticisms made in the right way – on the right non-systematizing level – but the fact is you will very conveniently never come across any criticisms on that level. You will, weirdly, only ever encounter people trying to critique from “within the system.” Poor dears! They don’t know how helplessly stuck they are, how deeply mired in the Ideology of Rationality!

    In fact, you’re drawing the complete opposite of the correct conclusion here.

    Above, I described how Alice and Bob were at complete loggerheads, unable to resolve an epistemological debate, because they clung strongly to their pre-existing axiomatic systems A and B. Both Alice and Bob are strict rationalists, but they’re unable to resolve their dispute because Alice insists that all attacks on S* have to be derived from A while Bob insists that all attacks on S have to be derived from B.

    The problem is not that they’re post rationalists — it’s that they’re strict rationalists with a subtly different idea of how to decide what’s true! The strictness that you advocate in the OP is exactly what prevents Alice and Bob from resolving their dispute. They resolve their dispute by temporarily constructing an entirely new system that has no validity beyond their agreement — entirely in keeping with the post-rational approach.

    You’re looking for the truth, but there is no such thing. There’s an unending hallway of doors. Each door is a tentative, flimsy gesture at truth, and it leads to another unending hallway of doors. There is no one “correct” door. There is no one “correct” hallway.

    But in this metaphor, there are other people walking the same hallways as you, and sometimes, if you stop and listen, they can tell you about the interesting things they saw and the people they met through other doors and hallways. You can follow in their footsteps and see for yourself, or you can use their preferred doors to find hallways you otherwise never would have seen. The rationalist has a very clear idea of which door they “should” go through next, and once they’ve passed through, they rarely return. A post rationalist is open to trying a few different ones to see where they lead.

    Sorry for the super long comment. I’ll probably have more to say when I reread your last section. I think the question of irony and how seriously to take things is sometimes important, and in this case I would like to take it seriously.


    1. This is indeed a long and insightful comment, thank you!

      You’re looking for the truth, but there is no such thing. There’s an unending hallway of doors. Each door is a tentative, flimsy gesture at truth, and it leads to another unending hallway of doors. There is no one “correct” door. There is no one “correct” hallway.

      But this… sounds like we might be talking completely past each other here, so I’ll be brief.

      You can of course prove that formal logic is incomplete. The problem is that, as far as I can tell, formal logic IS sufficient to describe the operation of a human brain. You can no more step fully outside it than you can step outside yourself – well, perhaps you can, actually. I cannot: a pity, I suppose. The best I can do is imagine what it would be like if I did – analogous to a pseudo-RNG perhaps? It can’t ever be truly random, because it’s just silicon following laws, but it can look like the same thing to real observers.

      And yes, I’ll admit I do take a dogmatic/poetic/religious approach at some points. In fact, I think this conversation is hopeless – it’s fun for now, but ultimately I can’t explain why you should come down in terms you’d accept, and you can’t explain why I should come up in terms I’d accept. There’s a scene in the Narnia books with some dwarfs, which makes a similar point, but with a Christian flavor. So why shouldn’t I be dogmatic? It’s not like I lose anything: I’m going to be called a fanatic anyway, and I am in fact a big fan.


      1. 1Z says:

        “The problem is that, as far as I can tell, formal logic IS sufficient to describe the operation of a human brain.”

        Although it is not what neurologists use.

        And why would it work? Logic is about propositions, which are at a much higher level of granularity than neurons and synapses.

        And which formal logic? Probability theory is bound to contain elements predicate logic doesn’t, and vice versa.


        1. Physics is lawful everywhere and always, and its laws are written in mathematics. Obviously one cannot construct a model of the whole human brain out of only the most reduced elements, but that’s not really relevant. Anything the human brain can do, physics can do – indeed, is doing – it’s only the large disparity in scale that hides this at all. And anything physics does, mathematics describes.

          Our ideas about logic are represented using structures bigger than neurons, which are bigger than molecules, but unless we want to say that physics can’t be described mathematically or that mathematics does not follow formal logics, the arrangements of fields that comprise the molecules that comprise the neurons that support the mental states that correspond to ideas about logic, are ultimately subject to logical laws.


        2. 1Z says:

          If it were common knowledge that physics, maths and logic were the same, it would be reasonable to express the idea that the brain runs according to physics using the phraseology “formal logic IS sufficient to describe the operation of a human brain”. But it isn’t common knowledge, because it isn’t known to be true — there are many wrinkles and unresolved issues. For instance, quantum mechanics, if it is a form of logic, is not identical to any pre-existing “classical” logic — it has different rules. https://en.wikipedia.org/wiki/Is_Logic_Empirical%3F


  4. Brian Slesinsky says:

    This reminds me a bit of Alexander’s “epistemic learned helplessness” and memes like “wow, randoms in my feed”.

    Perhaps we can simplify? There are legitimate reasons to refuse conversation with strangers. Some are shallow reasons (not the right time or place) and others are deeper.

    We could have a “marketing” discussion about the various forms of defensiveness and how to get around them. But I think it’s also important to remember why such memetic defenses are necessary. It’s a noisy world filled with shiny and possibly dangerous distractions. We all need ways of disengaging from scammers and time-wasters. The more raw input we get from online media, the more selective our filters have to be.

    Some tactics are over-defensive and make it seriously difficult to connect. But they’re efficient. That’s where you get bingo cards. Think of it as an insufficiently sophisticated spam filter.

    (There are also people who go out in the world to try to promote their point of view. That doesn’t mean they’re less defensive though; if anything you need more defenses to deal with all the rejection.)

    I’m not sure what to do about that other than to remember that behind the defensiveness, there are still many people who want to connect.


  5. ryanwc4 says:

    I once won an argument about what equipment a local government agency should buy by posting factual information anonymously on a blog titled Deadly Earnest. In this case, I’m not sure I convinced anyone so much as scared them: if they chose the non-optimal equipment because of relationships, a lot of people might come to understand their mistake.


  6. jz says:

    good post. i’ll go to the mat with you though on absolute free speech. i will happily defend ‘i’m going to kill you’ or whatever else you might object to.


    1. To be clear, I don’t necessarily support restrictions on free speech (but removing all the currently-existing restrictions in law, such as copyright law, harassment and threats, or the more overt and extreme cases of hate speech, strikes me as incautious). But I worry that Peterson, and especially his fandom, take an overzealous view.

      Perhaps the most important piece of background on the subject would be Yudkowsky’s How to Convince Me That 2 + 2 = 3. Once I read that and realised that yes, actually, that would convince me in the counterfactual where it happens, which I am presently convinced would never happen, it became pretty obvious that there’s a big difference between explaining why you think you need some particular freedom of speech and simply saying that it’s a horrible injustice for the state to suppress your speech and basically those damn SJWs are going to be putting people in gulags any day now. Essentially, challenges to free speech should be treated as genuine concern that the right to speak is being abused to perpetrate needless harm, and argued properly on that basis. “But free speech” isn’t an argument for free(r) speech in the same way that “but some speech causes definite harm for little benefit” is an argument against.

      It’s very hard to convince people of that, though, since a large part of an anti-serious mindset is the assumption that everyone else takes as little harm from mere words as they do. And then, as originally described, recursive mental mechanisms kick in to protect that idea from scrutiny – like the idea that people who say they have different internal experiences are just virtue signalling – and so forth.

      It’s too easy to get lost in the intent of things, rather than looking soberly at the outcome (or expected outcome). I think originally there was going to be some more discussion of Buddhism in this post based around that idea. Just as a sufficiently enlightened person might be able to cut your throat with a loving intent and so not be wrong, a sufficiently ironic person can reduce people to a nervous wreck without ever having any evil in their heart.

      I don’t have a good answer on where the boundaries of free speech should be drawn, but I’m willing to listen to people explaining their ideas. But they have to have explanations.


      1. jz says:

        my explanation for absolute free speech (literally) is that sound waves cannot do physical harm unless at high volume. written words cannot do harm. ‘speech’ encompasses more than i’ve given it credit for (and i’ll defend that too), but for now i’m just defending language. sticks and stones.


        1. Brian Slesinsky says:

          If you drain all the meaning out of words (“sound waves”) then there’s no such thing as a lie. Secrets can never be betrayed. There’s no such thing as malicious gossip. False accusations? Mere bits. Privacy? What’s that? The power of the media? Nonexistent.

          To be sure, there is something to this notion. You sit in front of your computer, I sit in front of mine. Physical violence is ruled out. There is a possibility of a safe space for discussion, where controversial topics can be discussed in an non-threatening way. That’s important. We need more of that.

          But we shouldn’t forget that words and ideas can cause enormous change, in the right circumstances. Spreading information can get people killed, wars started, and rulers toppled, because people act on what they believe. Often harmless, but the exceptions are important.

