Meta-rationality: An introduction | Meaningness
Some people are able to cut through ill-defined situations that seem to baffle skilled, smart people. They do this by using what Chapman calls “meta-rationality”: they zoom out from formal, well-defined systems to incorporate contingent properties of the situation at hand.
It’s a rare skill. One reason is that these kinds of insights require both expertise in the relevant formal systems and these “meta-rational” methods—but the latter can’t be taught formally. Because meta-rationality is so situational, it’s often ineffable. It’s obvious when someone’s using induction in a proof; it’s often not obvious when people are using meta-rational techniques to navigate a nebulous situation.
Q. What’s a (new) example of rationalism derailing a workplace situation?
A. (e.g. a decision-making process leads inexorably to decisions which everyone agrees seem wrong)
Rationality and refrigerators | Meaningness
It seems reasonable to try to only believe true statements. But often it’s not actually possible: in most cases, truth-values depend on {“who’s asking and why.”}
That doesn’t make answers matter any less: there often really is a “right” truth in a given situation, and getting it right may be incredibly important. Meta-rationality approaches this fuzziness not in a post-modern sense (“truth is socially constructed!”) but by building on rational systems to produce more meaning in context.
Q. What are two interpretations of “is there water in the fridge?”
A. 1. No, there’s no drinkable water in the fridge; 2. There’s water in the cells of the eggplant.
Q. What’s the characteristically meta-rational rejoinder to “Is X true?”
A. “In what sense?”
Clouds and eggplants | Meaningness
Nebulosity is the force which necessitates meta-rationality: rational systems can’t give eternally-true answers in human situations because those situations are inescapably fuzzy when closely examined.
The problem’s not that we can’t determine the factual answer to precise questions: we have the formal tools for that. The problem is that human-scale questions usually aren’t (and often can’t be) phrased precisely enough for a formal system to apply consistently.
Clouds are a great example of essential nebulosity.
Q. Why is a cloud’s boundary nebulous?
A. Clouds simply become less dense at their edges; there’s no definite point at which one starts or stops. Answers about cloud boundaries depend on why you’re asking.
Q. Why is a cloud’s identity nebulous?
A. Is that two clouds or two parts of one cloud? Hard to say; depends on why you’re asking.
Q. Why is a cloud’s category nebulous?
A. There are categories of clouds like cirrus and altocumulus, but they shade into each other continuously.
Q. Why is a cloud’s shape nebulous?
A. They’re complex and highly detailed, but general phrases like “sheets” and “filaments” are still meaningful in many situations.
A credibility revolution in the post-truth era | Meaningness
One explanation for Scientific progress appears to be slowing down may be that institutions’ emphasis on systematic rationality makes creativity rare: incentives tend toward mechanical crank-turning, with little reflection on the meaning of what’s being produced, how it should be produced, what problems should be solved, etc.
Creativity flows from wonder, curiosity, play, and enjoyment. These feature prominently in biographies of great scientists and inventors. Current institutional arrangements discourage them, in favor of constant competitive pressure for mindless rote productivity.
The present day’s culture (“postmodernity”) has recognized that universal rational systems (“modernity”) aren’t tractable, but this has led to the disastrous total abandonment of rationality and the belief that there is no way to evaluate “truth” at all, even in a contingent, situational sense. Those systems are still useful—they just aren’t absolute.
Part One: Taking rationalism seriously | Meaningness
Rationality, rationalism, and alternatives | Meaningness
Rationality emphasizes “formal, systematic, explicit, technical, abstract, atypical, non-obvious ways of thinking and acting.” Chapman also notes that one way to think about a formal system is as a procedure which could be printed in a book and followed (like an Executable strategy). Rationality aspires to be universal.
Chapman coins {“rationalism”} to describe a belief in {the universal efficacy of rationality}.
Such beliefs usually involve explaining why rationality works using a rational system, for instance by defining a decision function for true beliefs. Rationalism is normative: people holding these beliefs think that others should hold them too, and they’ll promote them accordingly. It would be nice in some ways if some brand of rationalism were true, but none is: the world is too nebulous.
By contrast, Chapman defines {“reasonableness”} as {the everyday meaning of “rational”: sensible action that’s likely to work in the moment}.
Rationalism views reasonableness as a poor attempt at rationality, but that’s mistaken: it gives good answers in some situations where rationality does not.
“Meta-rationality” is the practice of {negotiating} the appropriate use of rationality and reasonableness for a given context. This is distinct from anti-rationality, which is the belief that {systematic rationality doesn’t work, even when appropriately applied}. It’s also distinct from irrationality, which is {failure to act effectively, neither reasonable nor rational}.
“Meta-rationalism” is a replacement for rationalism: “how and when and why reasonableness, rationality, and meta-rationality work.”
Q. In what sense is rationalism normative?
A. Rationalists think that people ought to conform to their definition of rationality whenever possible.
Q. What interpersonal factor distinguishes Chapman’s definition of a “rationalist” from a person who thinks that rationality is useful?
A. Rationalists actively promote rationalism; their beliefs are normative, not just descriptive.
Q. Why isn’t meta-rationality an “alternative to” rationality?
A. Meta-rationality is about choosing appropriate rational methods in a given situation, so it’s not useful without rationality.
Q. Distinguish anti-rationality and irrationality.
A. Anti-rationality is an explicit denial of the value of systematic rationality, whereas irrationality is the failure to act effectively in a given situation.
Q. Why is it contradictory to imagine that meta-rationality means the application of rationality to itself?
A. One of meta-rationality’s core claims is that you can’t use systematic rationality to figure out how to apply reasonableness and rationality in a given situation.
Most people don’t notice that systematic rationality isn’t really how they solve most problems in their lives (once they leave school).
Rationalism’s responses to trouble | Meaningness
The vagaries which confront formal systems come in several flavors that look similar but ultimately stem from very different root situations.
Q. Contrast ontology and epistemology.
A. Ontology is about what there is, while epistemology is about what we know.
Q. What’s an example of an ontological question?
A. (e.g. what categories of things are there? what are the properties of this category? what are the relationships between these things?)
Q. Why does rationality deal poorly with questions of ontology?
A. Ontology is intrinsically nebulous: boundaries of human-scale categories are fuzzy and contextual.
Positive and logical | Meaningness
Logical positivism was an attempt to combine deductive reasoning and intuition (which Chapman calls “rationality”) with sensory experience (empiricism). That is, the positivists wanted to provide a logical basis for {the scientific method: making general claims based on experimental data}.
Its initial approach followed a path which Chapman calls “logicism”: essentially, mathematical predicate logic applied to broader questions of epistemology. Ideally, it would begin by using logic to prove that logic works as we expect. This didn’t work (see Gödel).
Later, it moved towards an approach which Chapman calls “probabilism,” which is like probability theory as applied to epistemology. If you see the sun rise in the East a thousand times, you can’t conclude that it always rises in the East, but you can be progressively more sure that it will. Logical positivists tried to unify this kind of probabilistic reasoning with predicate logic, but (according to Chapman) failed. Classic Less Wrong posts seem to suggest this type of epistemology.
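Chapman doesn’t give a formula, but the classic probabilist rendering of the sunrise example is Laplace’s rule of succession: starting from a uniform prior over the chance of sunrise, after observing $n$ sunrises in the East and no exceptions, the posterior probability of the next one is

$$P(\text{sunrise}_{n+1} \mid n \text{ prior sunrises}) = \frac{n+1}{n+2}$$

After a thousand observations that’s $1001/1002 \approx 0.999$: ever more confident, never certain.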
Q. Where do true statements come from in what Chapman calls “logicism”?
A. Other true statements, through mathematical derivations.
The world is everything that is the case | Meaningness
One way to caricature logicism is to imagine that you have a list of sentences in your head, each of which you’ve marked as “true” or “false.” This is ridiculous, because lots of things can’t be marked in this way. Chapman’s nice example: “So are Hannah and Martin having an affair, or what?” “Sort of… They haven’t actually done it, but they spent hours kissing on a park bench last night.”
If you press people on this point, no one would really claim to believe that they think this way, but lots of theories of epistemology—particularly those of logicism—seem to rely on this fundamental model.
Q. What did Wittgenstein mean by “The world is everything that is the case?”
A. He’s suggesting that the world is defined by the list of all true statements (i.e. logicism).
See also this nice footnote:
“The world is everything that is the case” is the first sentence, and the central thesis, of Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, one of the central texts of logical positivism. “Positivism” is sometimes defined as the claim that the world is nothing more than the list of all true statements. And, indeed, Wittgenstein’s second sentence was “The world is the totality of facts, not of things.”
Depends upon what the meaning of the word “is” is | Meaningness
One obvious challenge for logicism is that ordinary language is often ambiguous. “The dog is a Samoyed.” Which dog? Any dog? In 1879, Gottlob Frege developed a logical syntax which removed this kind of ambiguity by adding universal and existential quantifiers. So, for instance, you could say “there is a dog that’s a Samoyed,” or “this uniquely-identifiable dog-symbol is a Samoyed.”
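In modern notation (Frege’s own two-dimensional Begriffsschrift looked quite different), those two readings come out as:

$$\exists x\,(\mathrm{Dog}(x) \land \mathrm{Samoyed}(x)) \qquad \text{“some dog is a Samoyed”}$$

$$\mathrm{Samoyed}(d) \qquad \text{“the particular dog } d \text{ is a Samoyed”}$$

The ambiguous English article “the” disappears: every term is either explicitly quantified or a unique constant.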
The challenge here is that we can and do appropriately interpret sentences like “The dog is a Samoyed.” It’s just that their interpretation requires context, and these types of formal logic systems can’t incorporate that kind of contingency.
Q. What’s the rationalist’s solution to ambiguous language?
A. Invent and communicate in a language which is precise and specific.
Q. Why does the claim “the dog is a Samoyed” require meta-rationality?
A. It’s contextual: what dog? Depends on who’s asking, and why.
The value of meaninglessness | Meaningness
Hegel’s “idealism” advocated a kind of solipsism. As summarized by Bertrand Russell:
Time and space are unreal, matter is an illusion, and the world consists of nothing but mind.
The logical positivists called that statement neither true nor false, but meaningless, introducing a new value into logic. They also introduced “unknown.” This step was called “multi-valued logic,” and it solved some technical problems, but it was eventually subsumed by probability theory, which can make more fine-grained statements about truthiness.
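Chapman doesn’t spell out the truth tables, but here’s a minimal sketch of conjunction in such a logic. The names and conventions are illustrative, not Chapman’s: “meaningless” is infectious, as in Bochvar’s logic, while “unknown” follows Kleene’s strong tables.

```python
from enum import Enum

class TV(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"          # epistemic gap: could be either
    MEANINGLESS = "meaningless"  # the positivists' new value

def tv_and(a: TV, b: TV) -> TV:
    """Conjunction: meaningless contaminates everything it touches;
    a definite FALSE settles the matter even if the other side is unknown."""
    if TV.MEANINGLESS in (a, b):
        return TV.MEANINGLESS
    if TV.FALSE in (a, b):
        return TV.FALSE
    if TV.UNKNOWN in (a, b):
        return TV.UNKNOWN
    return TV.TRUE
```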
Q. How did logical positivists respond to claims like “The world consists of nothing but mind?”
A. They declared these statements ‘meaningless,’ introducing a new truth value.
https://meaningness.com/eggplant/sort-of-truth
Propositions at human scale are rarely either true or false. They’re often “generally” true or “true in principle” or “pretty much” true. The problem here is that the arithmetic of both logicism and probabilism depends on absolute truth-values. It’s not clear what inferences you can make based on a variable which is “true in principle.”
Rationalism often misinterprets this problem as uncertainty or a lack of precision. Sometimes that’s the case! When it is, you can define more precise syntax or treat things probabilistically. But often the truth-value is ontologically nebulous: if “Alain is bald” is “pretty much true,” the issue is not uncertainty (he’s bald with p=0.8) or even imprecision (he’s 80% bald). It’s more that the degree of his baldness depends on why you’re asking.
We want to know things about cottage cheese and dance moves and puppy training—but nothing is absolutely true about them. Obviously, all sorts of things are true about them, in a common sense way. But we can’t even say definitely whether or not something is cottage cheese. There are always marginal cases, like cottage cheese that has been in the refrigerator too long and is gradually turning into something else. Nor is it absolutely true that cottage cheese is white. That is only “more-or-less true”; examined closely, it’s slightly yellowish.
Q. Why is it a problem that the arithmetic of logicism and probabilism depends on absolute truth values?
A. In practice, human-scale propositions are often “sort-of” true, and it’s not clear how the formal systems operate on those values.
Reductio ad reductionem | Meaningness
A beautiful fantasy: each topic should be expressible in terms of some “lower-level” topic, biology on top of chemistry on top of physics. Such reductions are often quite useful—see e.g. layers of abstraction in computer science and the ideal gas laws. But complete scientific reductions are quite rare. In practice, it’s usually the case that either high-level terms can’t be fully defined in terms of the next level, or that lower-level abstractions “leak through” to higher-level ones.
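The ideal gas case is worth spelling out, because it shows what a (nearly) complete reduction looks like. Kinetic theory derives the macro-level law from micro-level collisions, under idealizing assumptions (point particles, elastic collisions, no interactions):

$$P = \frac{N m \langle v^2 \rangle}{3V}, \qquad \tfrac{1}{2} m \langle v^2 \rangle = \tfrac{3}{2} k_B T \;\Longrightarrow\; PV = N k_B T = nRT$$

Even here the reduction succeeds only because the domain was idealized first: real gases obey the law only approximately, in a bounded regime.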
Biologists are able to coherently explain plenty of phenomena despite their inability to reduce cells to chemistry, so we must find some other way to understand this type of rationality.
Are eggplants fruits? | Meaningness
Rationalism responds to ontological nebulosity by trying to define terms more precisely. That is, it tries to reduce the constituents to “ontologically definite” terms—terms which can be either absolutely true or false about the object (e.g. 3 is prime). But this usually isn’t possible for human-scale objects, whose properties’ interpretations typically depend on context.
The example Chapman uses is: “is an eggplant a fruit?” Botanically, yes; culinarily, no. We can be more specific to avoid this ambiguity. But then we have to ask: what makes it a fruit, botanically speaking? Well, it’s seed-bearing, and it’s part of an angiosperm… but what does “part of an angiosperm” mean once it’s been severed from the plant? There are lots of exceptions and corner cases, each of which seems tractable in the moment, but they don’t end; you can’t produce a totalizing definition.
This isn’t a problem with our use of language: “The difficulty is not that we can’t get a statement to refer to the right category. It is that there is no sharp dividing line in the world that reliably does the work we want.… The problem is in the territory, not in the map.”
And yet, in everyday life, we can ask and answer questions like this without apparent trouble.
Q. Why is it hard to answer “are eggplants fruits?”
A. e.g. They’re fruits botanically but not culinarily.
Q. What sort of objects and properties is rationalism true of?
A. Ontologically definite objects and properties—absolutely delineated, present or absent in a category, with no nebulosity.
Q. Why can’t one transform “are eggplants fruits?” into an absolutely-precise question?
A. There’s unavoidable semantic ambiguity: the category we’re trying to access isn’t cleanly delineated in physical reality, so we can’t make a word that points to the delineation.
When will you go bald? | Meaningness
One kind of nebulosity is quantitative variation—that is, a property which varies along a continuum rather than occupying just a few discrete values. Rationalism can sometimes handle this well (the literature calls this “vagueness”). If the variable really is continuous, rationalism can model that continuum formally (shades of gray as luminance values between 0 and 1). If the quantitative variation is in how confident we are—an epistemic problem—then probability can handle the situation.
But that doesn’t handle all such situations. Consider the claim “Alain is bald.” Is that true? He does still have some hairs, but they’re fine and mostly near his ears. Counting the hairs wouldn’t help you answer the question: the problem’s not that you don’t have a precise quantitative measurement. Phrasing the proposition probabilistically wouldn’t help (Alain is bald with p=0.73): the problem’s not that you’re not sure if Alain is bald. And it’s also not the case that he’s ontologically “0.9 bald”: the problem doesn’t go away if you attach numbers to the truth-value. Nevertheless, you can take a look at him, and it’s obvious that he’s bald.
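The three moves can be made concrete as toy functions. Everything here (names, thresholds, numbers) is invented for illustration; the point is that each computes something well-defined, and none captures who’s-asking-and-why.

```python
def measured_bald(hair_count: int) -> bool:
    # Move 1: formal measurement plus a sharp cutoff. The computation is
    # precise, but the cutoff (why 10,000 and not 10,001?) imposes a
    # sharpness the world doesn't have.
    return hair_count < 10_000

def probably_bald() -> float:
    # Move 2: probability. p = 0.73 expresses uncertainty about a fact
    # presumed to be definitely true or false. But we aren't uncertain:
    # we can see Alain perfectly well.
    return 0.73

def fuzzily_bald() -> float:
    # Move 3: attach a number to the truth-value itself ("0.9 bald").
    # Well-defined as a number; unclear what inferences may consume it.
    return 0.9
```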
Q. What 3 approaches does rationalism use to resolve vagueness that comes from quantitative variation?
A. Attaching probabilities; methods of formal measurement and logic; attaching a number to the ontological truth-value
Q. Why doesn’t probability theory help when “Alain is bald” is “pretty much” true?
A. The problem isn’t about uncertainty; his baldness depends on why you’re asking.
Q. Why don’t methods of formal measurement and logic help when “Alain is bald” is “pretty much” true?
A. Counting how many hairs he has doesn’t help: his baldness depends on why you’re asking.
Q. Why doesn’t it help to attach a number to the truth-value of “Alain is bald”?
A. It’s not at all clear how you would use that number. The problem is that the statement’s truth value is contextual.
Overdriving approximation | Meaningness
An engineer’s approach to the problem of nebulosity might be to say: all that matters is getting a model which produces good-enough approximations. The challenge is that a good approximation model requires a clearly-understood domain of applicability (within which it works well) and bounded error (describing how badly it might fail). In physics and electrical engineering, we can have such models. But in everyday life, we usually don’t know all the conditions of applicability; deciding how, when, and why to apply a given approximation model is itself a meta-rational challenge.
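For contrast, here’s what a clearly-understood domain plus bounded error looks like when you do have one: the small-angle approximation from physics, whose Taylor remainder gives a hard error bound:

$$\sin\theta \approx \theta, \qquad |\sin\theta - \theta| \le \frac{|\theta|^3}{6}$$

For $|\theta| < 0.1$ radians the error is below $1.7 \times 10^{-4}$; the domain and the failure mode are both explicit. Everyday approximations (“the meeting takes about an hour”) come with neither.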
A related interesting challenge is that for non-numerical phenomena, approximation is often an inappropriate way to think about a claim:
Some particular bit of DNA is not approximately a gene. It may definitely be one, or not, and it may be “sort of” a gene, but never a gene “to within an error bound.” … Ways of reasoning that work for numerically approximate truth do not work for usually-adequate truth, so approximation is not an adequate general model of model adequacy.
Q. Why isn’t it enough that rational models are “approximately true”?
A. That only really holds for physics-like models. For human-scale questions, their domains of applicability are ill-defined, and their error bounds are contextual.
https://meaningness.com/eggplant/rational-reference
One surprising and subtle way in which rationalism breaks down is that reference itself is nebulous. “Samantha the Samoyed is white” describes a relationship between a property (being white) and a reference to a dog (Samantha the Samoyed), but that relationship is very difficult to ground absolutely in physical reality. In practice, this kind of reference is negotiated based on context—for instance, maybe I’ve found a white dog with a collar that says “Samantha,” which might make this statement seem more true.
The National Omelet Registry | Meaningness
Many everyday statements are nebulous because they use indexicals to refer to objects: “My omelette is bigger.” This statement’s truth value depends on who’s speaking. To make such statements ontologically definite, you’d need to replace any indexicals with absolute identifiers: “Omelette 92349817 is bigger than omelette 8951875.” But, alas, “there is no national omelette registry.” And there can’t be one, even in principle, because objects’ boundaries are not ontologically definite.
Again, nevertheless, this rarely creates problems in practice because mere reasonableness can deal with it.
Q. Why can’t the statement “My omelette is bigger” be made ontologically definite?
A. You’d need to replace “my omelette” with an absolute reference to an object, which is impossible.
Objects, objectively | Meaningness
Nebulous statements can’t be made ontologically definite by reducing everything to statements in quantum physics, because the boundaries of human-scale objects can’t be defined with ontological definiteness.
Feynman puts it well:
What is an object? Philosophers are always saying, “Well, just take a chair for example.” The moment they say that, you know that they do not know what they are talking about any more. The atoms are evaporating from it from time to time—not many atoms, but a few—dirt falls on it and gets dissolved in the paint; so to define a chair precisely, to say exactly which atoms are chair, and which atoms are air, or which atoms are dirt, or which atoms are paint that belongs to the chair is impossible. So the mass of a chair can be defined only approximately.
There are not any single, left-alone objects in the world. If we are not too precise we may idealize the chair as a definite thing. One may prefer a mathematical definition; but mathematical definitions can never work in the real world.
In practice, we’re able to work with indefinite objects, but always in a contextual fashion. Often the limitations of definitions don’t matter, but “The meta-rational questions are: does it matter, for a particular purpose? If so, how and why? What does this imply about how we should deploy rationality?”
Q. Refute the notion that objects are subjective or created socially.
A. e.g. Pluto existed long before it was discovered in the twentieth century.
Q. Refute: “Objects are nebulous, but they have a definite core; they’re just fuzzy around the edges.”
A. Two clouds are connected by a wispy tendril. Is it one cloud or two?
Q. Refute: “Objects are nebulous because of physical nondeterminism.”
A. A cloud’s boundary would still be nebulous even if you could freeze all its matter.
Is this an eggplant which I see before me? | Meaningness
If we’re going to make claims about objects in the world, we need to adjudicate the respective roles of perception and reason. Perhaps rationality can consume true statements and produce new ones, but where are the original true statements coming from? Ultimately there must be some axiomatic source of truth. This chapter discusses the difficulty of negotiating that interface by attacking four common approaches.
Maybe perception’s output is a set of statements about what you can see. But what syntax can these statements use? Can one perceive an eggplant? If you say no, then you leave rationality to confidently classify “eggplant” from a set of descriptors (which we’ve already noted is impossible). If you say yes, then how can you ever learn new labels, if such processes begin with definitions expressed at the rational level?
Another approach might be for perception to output raw, neutral observations. But now we get into reductionist issues of leaky abstractions: just because you know that something is purple, oval, firm, and bitter, you can’t know that it’s an eggplant.
Yet another approach is to eliminate the separation and to make rationality do everything, consuming raw sensory stimuli. This is effectively the model used by modern “deep learning” systems, which consume raw pixel data. But it’s not clear how this could be possible, given the enormous bandwidth involved.
One final approach is to trust only objective scientific instruments to form one’s beliefs. But those instruments are also fallible and theory-laden.
What can you believe? | Meaningness
One fairly abstract problem with rationality is that it concerns itself with propositions and beliefs, but it’s not clear what those things actually are. Presumably they’re non-physical, but they’re still meaningfully connected to reality, since some propositions make predictions that reliably correlate with measurement and others don’t. They’re mind-independent, since e.g. the mass of a proton was the mass of a proton before we knew what it was. But what role do they play in reasoning, and how? Chapman claims that none of the standard explanations hold up (but isn’t very specific about how/why).
Where did you get that idea in the first place? | Meaningness
If rationality is about deciding between many alternatives, where do those alternatives come from? How are new ideas generated? One explanation is that invention is inherently non-rational, but clearly there’s something coherent going on. Chapman foreshadows that meta-rationality will better explain how new alternatives arise.
The Spanish Inquisition | Meaningness
Rationality can account for all the known knowns and known unknowns, but it can’t make reliable inferences about reality because there are unenumerable unknown unknowns.
In practice, we do make working inferences regularly, but that depends on meta-rational moves, like appealing to idealized scenarios, making the world less nebulous, or using “reality checks” on the outputs of rational inference. But deciding when and how to apply these strategies requires meta-rational skill.
Probabilism is often proposed as an improvement over logicism for the foundations of rationality. It’s characterized by the use of probability theory and decision theory. Probabilistic rationality is very useful, but only in some situations. For instance, it doesn’t help you deal with representationally vague situations (“that omelette is mine”), nor ontologically vague situations (“Alain is bald”).
Leaving the casino | Meaningness
Probabilistic rationality applies best to situations in which you reliably know {the actions which can be taken, the outcomes which can occur, and the payoffs attributable to each action/outcome}. This is rarely true in reality.
What probability can’t do | Meaningness
This chapter examines what it would mean if probabilism were true. If probabilism worked, it would give us a procedure we could perform to determine how strongly we should believe a proposition, given a set of perceptions and prior knowledge. This makes sense in the context of fair betting games but in very few others.
The probability of green cheese | Meaningness
In everyday situations, you can’t enumerate the unknowns, and you can’t enumerate the full list of possible outcomes. Reasoning about anything in everyday life using probability theory requires the “small-world idealization”: assuming it’s reasonable to lump together “all the situations you haven’t accounted for” and give that grouping some weight. This isn’t always reasonable in practice.
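As a sketch (the function and names are mine, not Chapman’s), the small-world idealization amounts to forcing a distribution to sum to 1 by inventing a catch-all outcome:

```python
def small_world(named_outcomes: dict[str, float]) -> dict[str, float]:
    """Force a probability distribution over the outcomes we thought of,
    by lumping everything we didn't think of into one catch-all bucket.
    The weight of that bucket is exactly the number the formalism
    cannot supply from within."""
    accounted_for = sum(named_outcomes.values())
    assert 0.0 <= accounted_for <= 1.0
    return {**named_outcomes, "everything_else": 1.0 - accounted_for}
```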
Chapman gives a great example: you point a spectrometer at an asteroid, and it tells you that with very high probability, it observes green cheese. You still probably wouldn’t believe it, because maybe someone got a fleck of cheese on the sensor while cleaning, or maybe someone’s playing a joke, or whatever. What are the probabilities of this? You can’t know, and the spectrometer software certainly isn’t accounting for that. You have to assess reasonableness yourself, using meta-rationality.
Q. Why is it a problem for probabilism that you can’t enumerate the set of possible outcomes?
A. This means you can’t make the probabilities of all outcomes add up to 1.0, which is an assumption of the math.
Q. What’s the “small-world idealization” in the context of probabilism?
A. Lump together all the outcomes you haven’t thought of and estimate the probability of that—now the probabilities sum to 1.0!
Statistics and the replication crisis | Meaningness
People think the science replication crisis is about people making errors in their statistical calculations. But that’s a shallow view. A deeper problem is that even if the calculations are correct, people draw the wrong conclusions from those analyses—for instance, wrongly assuming some kind of totemic truthiness in p-values. Yet even avoiding those inference errors is not enough because the analyses themselves have many built-in assumptions which may or may not hold, and which no closed-form algorithm can evaluate.
There’s no substitute for obstinate curiosity, for actually figuring out what is going on; and no fixed method for that. Science can’t be reduced to any fixed method, nor evaluated by any fixed criterion. It uses methods and criteria; it is not defined or limited by them.
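To make the “correct calculation, invalid inference” failure mode concrete, here’s a toy simulation (not from the book): run a standard t-test on pure noise many times. Every computation below is correct, yet treating each p < 0.05 as a discovery manufactures findings out of nothing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
for _ in range(100):
    # Two samples from the same distribution: the null hypothesis
    # is true by construction, so any "effect" is spurious.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1
print(false_positives)  # roughly 5 "significant" results, by design of the test
```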
Q. In what sense is the scientific replication crisis about more than making errors in calculations within a formal system?
A. It’s also about misunderstanding what inferences are valid.
Q. In what sense is the scientific replication crisis about more than just drawing invalid conclusions from correctly-executed formal analyses?
A. Those formal analyses make small-world idealizations which are not always valid.
Q. What does Chapman suggest should replace p values as the heuristic for scientific legitimacy?
A. Trick question: no closed-form algorithm can be used. No substitute for actually thinking.
Acting on the truth | Meaningness
So far, we’ve mostly talked about rationalism’s attempts to identify the truth. But the implication is that once you have the truth, you can choose the best action to perform.
One problem with that is that figuring out which action to perform often involves an infeasible computation due to the exponential number of combinations of possible actions and outcomes. We simplify the computation using heuristics, but choosing the correct heuristic requires meta-rational reasoning. In fact, in practice, we’re often able to choose the correct actions without being able to predict or understand them at all.
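The arithmetic behind “infeasible computation” takes one line: with $b$ candidate actions at each of $d$ decision points, exhaustive evaluation must consider

$$b^d \text{ action sequences, e.g. } 10^{10} \text{ (ten billion) for just ten choices among ten options.}$$

(The numbers are illustrative; real situations don’t even come with well-defined $b$ and $d$.)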
A deeper problem is that “taking an action” isn’t in reality a discrete operation. It’s continuously responsive to the specific situation you’re in—resistant to the kinds of generalizations rationalism wants to make.
Part Two: Taking reasonableness seriously | Meaningness
The first part of Eggplant is all about how rationality breaks down—why rationalism doesn’t work as a complete epistemology. But systematic rationality does often work, and we are able to make decisions in everyday situations with reasonable reliability. This depends on a notion which Chapman calls reasonableness.
The ethnomethodological flip | Meaningness
Eggplant is going to make the argument that while rationalism views reasonableness as a defective approximation to rationality (i.e. a lossy subset), we should instead view rationality as a specialized application of reasonableness.
Q. What is Eggplant’s “ethnomethodological flip”?
A. View rationality as a specialized application of reasonableness, instead of reasonableness as a defective approximation of rationality.