Yet another (claimed) jhana-haver, checking in with the unpopular opinion of "it's trivially obviously real, I verified it myself, but there are two major dynamics that lead to it being overstated, and most people who talk about jhana inadvertently shove those under the rug."
Issue 1: People selectively remember their successful meditation attempts.
It's exactly like fishing. People remember the times they caught a big fish, not the times they went fishing and didn't catch anything. Say you look back at ten previous first-jhana attempts: two failed completely, three were lackluster, three were decent, one was pretty awesome, and one was sex-tier. If you riffle back through your memories associated with first jhana, the two most successful ones will be the most salient. The memory of the time you tried to attain first jhana on a long airplane trip and mostly failed won't come up in most cases. If you're at a party trying to chat with somebody about meditation stuff, you're going to bring up the "hits" that make good party anecdotes, and not the time you tried to meditate in bed but fell asleep. Or the time you tried to meditate on your couch but your back was too itchy. Or the time during the road trip where you managed to get it going pretty well, and it was a worthwhile experience, but could not reasonably be described as "sex-tier".
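If it helps to see that selection effect as arithmetic, here's a minimal Python sketch; the quality scores are purely invented to match the ten-attempt breakdown above:

```python
# Invented quality scores (0-10) for the ten attempts described above:
# two failures, three lackluster, three decent, one awesome, one sex-tier.
attempts = [0, 0, 3, 3, 3, 5, 5, 5, 8, 10]

average = sum(attempts) / len(attempts)      # what practice actually feels like
retold = sorted(attempts, reverse=True)[:2]  # what makes it into party anecdotes

print(f"average attempt: {average}/10")      # average attempt: 4.2/10
print(f"retold anecdotes: {retold}")         # retold anecdotes: [10, 8]
```

A listener who only ever hears the retold pair will peg a "typical" session at 8-10, when the honest average is closer to 4.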
Descriptions along the lines of "comparable to sex" strike me as accurate. Even the one about blissing out in the Macy's afterwards strikes me as accurate: there is a really weird aspect of the afterglow where your sensitivity to other pleasures is enhanced and you find yourself spontaneously going "wow, that rock is really pretty and great" (or whatever else you're interacting with). Leigh Brasington has been through an MRI, and we know the brain is doing some rather unusual stuff during these states. So any claim like the strong form of "they're making it up" is just outright false.
However, what these experiences do not strike me as is typical. When you ask somebody about jhana experiences, you're getting their highlight reel unless you very specifically ask otherwise, just like when you ask somebody about their fishing experiences. Ordinary-ass jhana-in-the-airport is quite nice, but nowhere near sex-tier.
Even beyond that, there's an issue where, in everyday life, feelings of excitement are usually paired with an exciting thing. A great thing happened to you, you say "it was very exciting", and the other person correctly infers that it must have been pretty great. During jhana, feelings of excitement aren't paired with an exciting thing happening. It's possible to experience strong feelings of excitement, but your mind knows they aren't actually about anything, so they come across as more of a physiological sensation, and the overall experience is generally a good deal more lackluster than it sounds. So you can truly say "it felt EXTREMELY exciting", and the other person (falsely) infers that it must have been REALLY great, but feelings of extreme excitement that aren't about anything aren't actually that great.
Issue 2: The word "first jhana" is used to describe a pretty wide range of experiences.
Not too long after figuring out how to do it (i.e., attain a mental state that is clearly abnormal while meditating), I recorded what happened and managed to consistently replicate it. The state was something like a cross between being well-caffeinated, very tightly focused, and highly excited, with occasional waves of body chills, the same sort that music generates. Also, it kept turning off and on again like a car that refuses to properly start. Overall niceness level: like eating a pan of fresh-baked brownies.
After I'd gotten it down well enough and practiced it for a while, it turned out there were a few missing mental steps: if you don't try quite so hard to force it and just have fun with it, it's much nicer. Energy levels and focus go down a bit, the state gets more stable than it used to be, the excitement component picks up a considerably larger dose of happiness instead of being pure excitement, and there's a nice afterglow for ten or so minutes.
And now we get to stuff I haven't personally done.
Leigh Brasington, one of the main teachers of this stuff, claims that the body chills/tingles become continuous, rather than coming in waves, when you REALLY hit first jhana, and that the stuff I'm doing is just messing around with some pre-jhana states. Nick C's described experiences also seem to be around this level, given that he describes them as "10x better than sex".
And then there was the time Leigh Brasington visited some of the really hardcore Thai forest monks and, after a week or so, (claimed to have) managed to get some extremely damn strong jhanas that near-perfectly matched the original descriptions in the Pali Canon.
There are many reasonable questions that could be asked at this point, like
"wait, if the word "jhana" is used to describe both an energetic/focused/nice state that can be worked out in a week and easily attained in a spare 20 minutes while waiting in an airport, and some sort of ultra-rare infinite cosmic bliss thing that you can only attain by meditating in a Thailand cave for five years with absolute silence, then isn't the word "jhana" uselessly vague and referring to way too many things?? It's one of the worst cases of motte and bailey I've ever seen!"
and
"Wait, if I have to be a meditation teacher and spend a week in a Thailand cave to get the highest levels of this, then aren't those highest levels totally useless for everyday life? Even if we buy that it's actually great enough to justify the time sink, what's the point of having mental motions for unbounded bliss if you can't practically use it without sitting in a cave for a week? Looks like wireheading."
and
"I extremely want to call bullshit on Nick C's experiences and people running around going "you dummy, jhana is totally real" when what they're describing is <1/10th of what Nick C is claiming IS NOT HELPING"
I agree with all of these points! When someone is claiming to have attained jhana, it's really important to try to figure out what they're claiming on this spectrum and not do the implicit motte/bailey thing.
I'm coming to the party kinda late so you probably won't end up reading this, but if you do, that'd be cool.
I’m unsure whether I’ve misinterpreted because some of the prose is muddled, so please forgive me if you’ve covered this. Here’s how I understand the dialectic, if we cleaned it up a bit.
You’re making an argument from analogy. You want to say that spoonies et al. are relevantly similar to jhana people. You then argue, using the more familiar spoonies example, that we should default to low credence. You then apply that to jhana people, since their claims are relevantly similar from an epistemic POV.
Scott implicitly agrees with the framing of your argument, buying that the cases are analogous. However, he objects to your argument by claiming that we should actually believe the spoonies et al. He defends this claim by pointing out that psychosomatic pain is pain just as much as other pain, since pain is just the experience of the pain. This is meant to undermine a principle on which your argument is based. Specifically, he takes you to be claiming that we can infer from “lots of doctors have been unable to identify a physical cause of the pain” to “the patient is full of shit.” By explaining that psychosomatic pain isn’t the sort of thing that has that kind of physical cause, he takes himself to have undermined your principle and therefore crippled your overall argument. Thus, your argument to disbelieve the spoonies fails, and so does your argument to disbelieve the jhana people.
You respond by denying Scott’s claim that we should believe the spoonies. You do this by distinguishing between belief_1 and belief_2. Belief_1 is your kind, the better kind, and involves endorsing the content of what people say. Belief_2 is less clear, but seems to involve endorsing a related, different, and more plausible claim. It’s more charitable. You then argue that the word “belief” actually picks out the concept belief_1, because the patients themselves get mad when doctors say “I believe you and think it’s psychosomatic.” You argue that if “belief” really picked out concept belief_2, then they wouldn’t get so mad. After all, they’d presumably be happy the doctor believed them!
Here’s what Scott should say. The patients are making two separate claims. Claim 1 is about being in pain, which you should believe. Claim 2 is about the pain’s origin, which you shouldn’t necessarily believe. The patient, if he were rational, would realize he’s mad that the doctor doesn’t believe claim 2 and that his anger has nothing to do with whether the doctor believes claim 1. After all, the doctor DOES believe claim 1! So Scott is not using belief_2 at all. Rather, you’ve simply made a reasoning mistake by inferring from “patient mad at doctor” to “doctor doesn’t believe claim 1.”
That’s a very cleaned-up version of the dialectic. The real thing is full of nonsense, like Scott’s bizarre complaints about how you use norms. Here’s why Bayesians shouldn’t get their panties in a knot over epistemic norms.
Even Bayesians want to treat like cases alike, and you’re arguing that spoonies are relevantly similar to jhana people, or at least close enough that our credences should be similar. When you argue for a norm, you’re not claiming it applies everywhere. You’ve ALREADY argued that spoonies and jhana people are relevantly similar. The norm is meant to apply only to whatever group of people contains both and is epistemically at issue. And Scott implicitly admits that the cases seem to be analogous. So he really shouldn’t have beef with the norms stuff.
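Since "Bayesian" is doing a lot of work in this thread, here's a minimal sketch of the update at issue; every number in it is invented purely for illustration:

```python
def posterior(prior, p_report_if_true, p_report_if_false):
    """Bayes' rule: P(claim is accurate | claim was reported)."""
    joint_true = prior * p_report_if_true
    joint_false = (1 - prior) * p_report_if_false
    return joint_true / (joint_true + joint_false)

# Invented numbers: a 5% prior that an exotic internal-state claim is accurate,
# an 80% chance it gets reported if it is, and a 20% chance it gets reported
# anyway (exaggeration, attention-seeking, honest confusion).
print(posterior(0.05, 0.8, 0.2))  # ~0.17
```

Testimony raises the credence, but "default to low credence" can survive it. The treat-like-cases-alike norm then just says that whatever likelihoods you grant spoonie testimony carry over to jhana testimony, since the analogy is conceded.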
I do think, however, that your move to attribute an off-base definition of belief to Scott doesn’t work, for the reasons I outlined above. What you should’ve argued is that Scott’s assessment of what percentage of people are simply full of shit is just wrong. Or you could’ve argued that jhana claims are even more likely to be full of shit, since jhana presumably involves more complicated neural mechanisms than psychosomatic pain.
I think you can probably tell that I think highly of your thinking and work and want you to do well. From your POV, I am just another internet guy, so why should you give a shit about my views of your writing? Fair enough. It’s easy to ignore if you want. With that said, I have a couple of writing comments that I think would benefit your prose, coming from a philosophy PhD student.
I understand that you were pressed to get this article out quickly; the conversation moves on otherwise. But this piece needed a LOT more editing, or to be organized very differently. It needed some understandable jargon instead of all the hyphenated words. It needed a concise reconstruction of Scott’s argument all in one place. It needed you to point out which of Scott’s premises you objected to. This piece was structured as “omg, a big famous dude attacked me and was kinda unfair, let me defend myself.” I do think he misread you at various points, especially with the baseball example. But writing a piece explicitly designed to defend the merit of your previous one came at a huge cost to readability and credibility. Even if Scott wanted to engage again with your ideas, he couldn’t, because this isn’t written clearly enough. This isn’t entirely your fault, because Scott’s piece wasn’t much clearer or better organized. At no point did either of you lay out exactly what the key points at issue were. It led to a muddy back-and-forth. I believe that focusing on the arguments themselves and trying to make them very explicit would work wonders.
On the writing bits, I see your critique as coming down to several things:
1. You need to be more concise overall.
2. You need to use "jargon"
3. You needed to clearly summarize Scott's argument
4. You needed to make specific takedowns of the believability of specific unfalsifiable claims
Of the four, I disagree with 2 and 4 the most:
2. I find that a very common failure mode of rationalist-sphere writers is to assume that everyone on Earth either is, or should be, one of the roughly .01% of people who are enamored with the very specific, very unpopular set of terms they've developed for themselves.
Take the above, for example. You used the most popular term rationalists have (Bayesian). Some percentage of people reading it actually understand Bayes well enough to know what you are generally getting at. The rest of them don't know what you are saying at all, unless they pick it up from context. I am of the opinion that this is bad - that Joe Average should be able to understand my argument without ten minutes of googling, and that I've failed at my job if he can't.
I also find that usage of jargon tends to be a way to cover up bad thinking. If I think at best half of everyone in my audience understands "Bayesian" in a general way, I then have to also do a calculation for the people who understand it *well enough to know if you are using it wrong* (low double digits, probably), and then for those who both understand it that well and are tuned-in enough to actually assess it (low single digits).
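To spell that calculation out (the exact percentages are only my guesses at what "low double digits" and "low single digits" cash out to):

```python
audience = 1000                           # hypothetical readership for one piece
gets_the_gist = round(audience * 0.50)    # "at best half" follow "Bayesian" loosely
spots_misuse = round(audience * 0.15)     # know it well enough to catch wrong usage
actually_checks = round(audience * 0.03)  # also tuned-in enough to actually assess it

print(gets_the_gist, spots_misuse, actually_checks)  # 500 150 30
```

On those numbers, a jargon term gets genuinely fact-checked by maybe 3% of readers while borrowing authority with all of them.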
I specifically avoid jargon for these reasons. I want my arguments to be understood and to carry exactly the weight they justify as I have argued them. I think jargon gets in the way of both.
4.
I disagree with this because I'm very much attacking not a specific belief, but a generalized principle of how we assess truthfulness and then talk about it. The construction of my first article reflects this - I don't point out reasons jhana is fake; I point out that things are sometimes fake, and that if we use the presented principle of "believe any claim a lot of people make that, while having no evidence for it besides the claims themselves, doesn't have a lot of evidence against it", we'd have to accept the general class of things that principle justifies, which is pretty big.
The whole premise of this thing is that we are dealing with situations in which the evidence isn't really that strong one way or another once "was claimed" is factored out. "Is specific unfalsifiable claim X actually more falsifiable or supported than we thought" is a different argument I'm much less interested in.
1-3:
I think you've already noted somewhat that Scott's argument wasn't particularly condensable. I'm not criticizing him for being scattered here - I think this argument is weird, and there are a lot of elements to be organized. I'm assuming we will both discard individual points and refine toward a smaller version as time goes on; for the moment, I'm fine with the "knots don't start out untied" structure of this.
In terms of general concision, I don't really do that. Some people like it, and some people don't; this is again a place where I think my audience is just different from what a person who argues from a "most people probably want what rationalists want" standpoint thinks it is. This will sound snarky/braggy but isn't meant that way: my numbers are actually pretty good overall, to the point where it's probably a bad idea to change to a writing style I'm much worse at and like less.
Basically, you aren't wrong that some things could be shorter and that some people would prefer that; I get regular complaints about it, at least. But at the same time, I'm doing well and loath to break something that's working just fine.
**Here’s what Scott should say. The patients are making two separate claims. Claim 1 is about being in pain, which you should believe.**
If I believed this, there wouldn't be an argument. What this does is take Scott's original argument (as I have read it) of "If enough people claim something, you should believe it" and change it into "If enough people have claimed something, you should believe AT LEAST the most important part of it; it's not something you should question".
Now let's take Nick C as an example. In your model, I have to accept that "his pain is real" - that he is experiencing something very much like what he's describing, even if he has the cause wrong. In his case, he has an on-call bliss state of incredible power that has no downsides at all and several real-world non-meditative upsides: his diet is better, his substance abuse issues are better, and his brain is rewired to make coffee work better.
I can have this too, he says, for the low, low price of hundreds of hours of my free time - time I could use for other things.
I'm saying "I have listened to what he says, and he doesn't strike me as believable. I in fact do not believe his claim, and thus won't risk hundreds of hours of my time pursuing his claimed benefits, which I want as he has described them".
Your counterargument - that I should believe *just the part that makes me take the risk* - isn't compelling to me. And Scott's - that I should say I believe his claim even when I don't believe any of it until it's heavily modified into a different claim with different implications - is similarly bad.
I think that both your viewpoint and Scott's are contingent on one of, or a combination of, some unspoken assumptions that drive them - that people either don't lie, or else that it's socially impolite enough to say they do that you should pretend they don't.
My viewpoint is still pretty unchanged here (with an exception detailed below). Basically, I think it's reasonable to assess someone's unverifiable claim and believe it, but I also think it's reasonable to assess it and decide you *don't* believe it, and to move forward accordingly. In this case, I think it's fine for me to say "I don't find Nick C.'s claims believable, and I don't want to risk hundreds of hours on something, so I'm not going to."
I think that (provided one thinks lying is a thing that exists in a practical sense, where it's a real risk) this generalizes out to any situation where the truthfulness of someone's words has a practical implication.
On the exception: I think one of the big exceptions here can be on matters of pre-determined policy. So, for instance, if we acknowledged that there exists a class of "faker spoonies" representing some low-single-digit percentage of spoonies, and that there also exists a large number of spoonies who are usually ill-treated by the medical system, we might decide on "treat all of this like it's real, don't make judgments", because doing otherwise creates unacceptably large downsides for the "real spoonies".
On the writing: I have some thoughts, but I'll get to them in a separate comment.
Just to clarify, I don't agree with Scott here. In fact, I didn't take myself to be putting forward a position at all. (As an aside, I find myself very skeptical of all these phenomena, closer to your view.) I was merely attempting to clarify the back-and-forth, because given how you and Scott have written your pieces, it's not even obvious what the major disagreement is, besides one about how often people lie about unimportant stuff. I do think, however, that the way that you argued against Scott doesn't work for the reasons I outlined.
In your reply to my comment, you distort the argument I was toying with in several ways. I'll give one example. You suggest that if we accept "The patients are making two separate claims. Claim 1 is about being in pain, which you should believe," then we must also accept "If enough people have claimed something, you should believe AT LEAST the most important part of it; it's not something you should question." But this is just a bizarre claim. All I said was that if a person says A is true and B is true, we can say "I believe A but not B," and the person can get pissed that you don't believe B. Oftentimes in such scenarios, the pissed person won't even register that you believe A or won't even ask for clarification!
Re the writing: I don't mean to suggest jargon that keeps readers out. That's crap, especially on Substack. But if you're going to name something, you might as well use a name that doesn't require half the reader's working memory to remember. For instance, I don't think that 'belief_1' (not the best, I admit) alienates anyone who's already reading.
Take this passage of yours: "So let’s imagine that for any claim, a person might mean one of two things: Whatever they claimed, or that they know what the word “sandwich” means. And say a person comes to you and says “I can literally fly through the clouds; I have literally and not at all figuratively touched the sun”. Though they used those words, you now have to remember: they might just mean they know what a sandwich is."
You could've said: Suppose that for any utterance, a person might be referring to one of two meanings. The plain meaning is the one that any normal person would understand the utterance to mean. For example, the plain meaning of the utterance "I can literally fly through the clouds" is simply that I can fly through the clouds. By contrast, the sandwich meaning is that I know what the word 'sandwich' means. Thus, the utterance "I can literally fly through the clouds" under such circumstances is ambiguous between the plain meaning and the sandwich meaning.
My version is slightly longer, but it has the benefit of being understandable at a first read, which makes it functionally half the length. Additionally, it provides concepts that you can easily appeal to later that the reader can REMEMBER, like plain meaning and sandwich meaning. This is not difficult jargon. It doesn't alienate at all. In fact, with some effort, you could make it genuinely funny and therefore more accessible. And I think that it should be clear by now that this sort of jargon doesn't cover up bad thinking. It makes the thinking crystal clear by simplifying the amount of working memory required to get through a sentence, freeing up valuable RAM for evaluating whether the sentence is true.
Regarding the issue of summarizing Scott's argument, I'm puzzled by your take. You're writing a reply here. It's weird to do a reply and not be specific about what you're replying to; that's more of a second attempt to say what you were trying to say last time. And you sure talked an awful lot about Scott's objection for a guy simply taking another stab at it.
Re concision, I don't really care about word count. I care about clarity. The way you structured the thing made it a lot harder than it should've been to understand. When you structure a thing well, you often end up having to repeat yourself less, resulting in an overall lower word count.
I'm not trying to come at your numbers, man. And no, I'm not a rationalist (in the way that word is used these days), nor do I have a hard-on for low word counts. I'm all for adding words if they are substantive, clarifying, or funny. That's why I generally like your writing and usually find it to be solid and refreshingly original. I suspect that you rushed this one, for understandable reasons.
Last comment. Scott and his ilk's writing has some serious structure problems as well. It's got this meandering quality that takes you along for the ride without making clear points. It irritates me to read a 4k word thing that could've been better communicated in 1k, or 1.5k if you add an obscene amount of flourish. It's a bit of a problem for any platform in which the writers are self-taught and pressured to produce rapidly. Some develop beautifully, some don't. I see your writing improving each time (with this article as the outlier, in my opinion) and am rooting hard for this page! I hope that my comments haven't come off as overly snobbish or backseat driver-y. I won't make further remarks on your page about the writing itself.
For what it is worth, I don't think this is too long at all.
Your point on lying reminds me of Swift's Gulliver's Travels, with the... horse people... who don't know what lying is and refer to it as saying a thing which is not so. (Or something to that effect.) That seems to pretty closely match your sense of lie, and mine as well. Maybe it is a bit better to consider it merely misleading if they don't mean to lie, but I think it is fair to expect affirmative effort to make sure what you are saying is not incorrect or misleading in that way. I can't tell what someone's intent is all the time, but if they say something that is not so, the effect is the same for most purposes. Communication requires effort on both sides, and someone who doesn't care whether they say what they mean is as troublesome as a liar, because you can't even predict when they will tell you the false thing.
My biggest beef with Scott, which is slightly your fault too, is that he didn't focus on what you meant. Sure, ok, RC uses the examples of Spoonies and DID kids as evidence of people lying about their internal states, and Scott says "No no! Those are actually bad examples!" That only ends the discussion if one can't think of any other case where people knowingly lie about their internal states, or their ability to reach internal states, to others. I have an extremely hard time believing Scott, a mental health professional, has not run across instances where people lie about their internal states. No Munchausen's? No one lying on intake forms? No kids, in general? No one claiming to have a wonderful relationship that is obviously a giant mess they are trying to cover up?
It seems to me that Scott is unfair, probably knowingly so, in not saying "Ok, those are bad examples, but I see what you mean: people often do lie or at least exaggerate their internal states to get things from other people, such as approval or admiration, just like they may lie to cover up an embarrassing internal state, like a bad marriage." I think you, RC, did get pulled in a little arguing those examples instead of saying "Ok, fine, bad examples, but what about _____?"
Still, the outcome of sorting out the different meanings of lie and belief seems worthwhile!
EDIT: I slipped into an odd 3rd-person construct there... tried to clarify that some.
An intelligent back-and-forth between two individuals who mutually respect each other's opinions and sincerely disagree... Substack is kind of the best! Really enjoyed the read. I feel like this argument tends to fall in your favor for the broad strokes, and in Scott's favor for the specific ones. So I guess you both kind of succeeded in your arguments?
A brief word on your last few paragraphs: don't be fooled like the rest of the world seems to be. Trust is not always generous, nor is skepticism always cold. Both are immensely good when placed correctly, and immensely bad when placed incorrectly. Deserved skepticism is a virtue, not a flaw. Undeserved trust should be cast out.
No prize short of Truth itself lies in distinguishing the two.
You're clearly a smart, thoughtful person, and you can do better than this. I think I'm going to have to unsubscribe, if I can't come to a better conclusion than "RC is in this for the clicks and eyeballs, not for the good-faith discussion."
So, I don't think I can necessarily cure the impression you have, but one thing I think would be helpful here would be to dig down a bit on where you perceive me as being bad-faith. I think that works in a bunch of scenarios - if I *am* doing bad-faith hit-mongering, then it shows other people how it's done. But if I'm not - or if I'm doing it by accident - it would help me improve.
FWIW, I think stuff like "how do we think and talk about honesty and dishonesty" is important - it's one of my longer-term focal points. So this isn't exactly new for me.
The "bad faith" verbiage is maybe not quite right. It was almost a direct response to the "I enjoy a scrap" quip; also, I recently had to unfollow someone who delivered good insights but who indulged in a lot of unrepentant sloppiness and whose username was literally something like "IfYouFollowMeYoullBeDisappointed" - he was right, and the occasional jewel was not worth the muck. So part of me wondered, if RC is optimizing for contrariness, can he also optimize for good epistemics and a good-faith shared pursuit of a map that reflects the territory?
I'm not sure how to say what I mean in few words without referencing the Sequences, so for TL;DR I'll say that intelligence is only valuable when it's not used to defeat itself, and unfortunately I see that in your writing. (Have you read the Sequences? If so, I'd love to hear what you think about how intricate thinking is especially vulnerable to tiny lapses in epistemic rigor.)
The longer version goes something like this. First, the previous article on this (Contra #1?) kind of got under my skin. Your thinking is pretty intricate, but in several places you kind of hide the ball, in ways that Scott already pointed out. The problem is that because your thinking is intricate, I can't really point out the ball-hiding without a long essay. Again, Scott wrote that essay better than I could. But it's a lot of work, and I wish you would organize your intricate thinking so that it would be easier to follow, refer to, and dispute. A good-faith thinker would see this as a huge plus, even though it would mean others could more easily point out flaws or contribute improvements.
Second, the big question at hand isn't "do some people lie about their internal states," or "are some people wrong about their perceived internal states." The question is pretty clear-cut and your counter-arguments only skim it, or only circuitously reach their conclusions. When a simple, clear-cut question gets an intricate, circuitous argument, that's evidence for bad faith. There are plenty of exceptions, but if you put yourself in my shoes, I hope you can appreciate the tedium of working carefully through every intricate argument, when usually I just find the same old flaws, like a subtle division-by-zero in a 20-page math proof. This doesn't really imply bad faith, but it sets up a kind of asymmetry that often accompanies bad faith, and even in good faith it reduces thoughtful engagement, for the same reason that most mathematicians won't bother going through a 20-page proof that concludes 2 + 2 = 3.
Bottom line: folks talked about mental states that you've never achieved doing things that you've never done. I can understand a healthy skepticism, with or without the curiosity to dive into it. But I find myself judging folks putting forth long, intricate, confident arguments that conclude it's impossible.
--
Again, I've enjoyed your stuff before. Please don't become another author who lowered their epistemic standards because they got substack famous - this trap is insidious.
I appreciated this comment as I feel like I understand your position much better now. I wanted to try my hand at what will hopefully be an illustrative hypothetical.
Say I'm at a bar or other social gathering place, and a stunningly attractive woman in a really nice dress takes a seat next to me and says "Hi there" in her best flirtatious voice. My gut feeling is that she wants something from me - maybe it's for me to sign up for something, or maybe it's my kidney. I would find it hard to believe that she saw me, found herself helplessly drawn to me, and initiated contact so that we could get right into the business of forming a relationship with no ulterior motives (even if that's actually the case).
The woman notices my seeming incredulity and makes her case: she's dressed nicely. She approached me. She's being overtly flirty, so that even if I've got below-average ability on reading social cues I can probably pick up on these. In our current society, she's more at risk of intimate or dating violence than I am, so why would she take the chance? If we're evaluating the arguments based on clarity and evidence, the woman's got me beat: she's laid out a clear, concise argument with supporting evidence, and I'm basically saying "I don't know; this whole thing seems fishy."
Is this woman genuinely attracted to me and is trying to get to know me better? It's plausible! I think the case RC is making broadly, though, is that there's value in taking the "this seems hard to believe" stance, even if you can't present it in elegant rationalist terms, because the people who take the "hard to believe" stance - even if they get significantly fewer dates - also lose significantly fewer kidneys.
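To put rough numbers on that trade-off (every figure invented purely for illustration), here's a quick expected-value sketch:

```python
# Invented payoffs: a pleasant date is a small win, a stolen kidney is catastrophic.
DATE_VALUE = 1.0
KIDNEY_LOSS = -1000.0
P_SCAM = 0.05  # assumed chance the approach is a setup

trust_ev = (1 - P_SCAM) * DATE_VALUE + P_SCAM * KIDNEY_LOSS
walk_away_ev = 0.0  # no date, but both kidneys intact

print(f"{trust_ev:.2f} vs {walk_away_ev:.2f}")  # -49.05 vs 0.00
```

Even at a 5% scam rate, the stakes are asymmetric enough that the "hard to believe" stance wins on expectation.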
I don't think RC wrote 3k words titled "On Unfalsifiable Internal State Claims, Politeness, and Broadly Applied Principles" to say "I dunno, this jhana stuff seems fishy to me but I can't lay out my reasons with clear, concise argument and supporting evidence."
If not, what did he write it for? As I mentioned in a different post, if he's taking lazy pot-shots at a much larger blogger with a much greater following in order to further his own blog's success, I don't think he's doing a great job of it.
It's not my goal to persuade you, but there does seem to be a little bit of irony in that we've gotten into the weeds of RC's intent when he was writing the post, which sure seems awfully related to the claim of "I don't believe these people who claim to have attained jhana and enjoyed a number of incredible benefits from it". Not necessarily believing unfalsifiable internal-state claims, and making a case for why that might be valuable (particularly in contrast to Scott's well-reasoned argument for why it is valuable to do the opposite!), seems like the entire point of that 3k-word article.
We'll just have to take each other at our word, I guess! ;-)
So first, and this might be enough to disappoint you out of certain things entirely: I've tried reading the Sequences. I think they are fine. I don't find them to be gospel like some do, though I think there are good bits in there.
On: "liking a scrap"
So assume a world where Scott is always right - he's just, you know, writing the articles because even though he's always right, he still has to convince us, since we aren't quite as fast. And assume I'm wrong, which I'd have to be if I was disagreeing. If Scott minds a scrap and tries to avoid them, you don't get the article that you consider to have shut me down pretty fully. Even in this world - where all Scott's approaches are the right approaches, and all his conclusions are the right conclusions - you want people around who don't mind a scrap to correct people who are wrong.
Now assume a world where Scott is sometimes wrong, fully or partially - where he's capable of error or bias. But just like our world, he's about 100 times bigger than anyone else in this space, much better respected, and much more of a long-term fixture. Despite being polite, he's a big, scary behemoth of a dominant force in this corner of the internet.
In that context, I'm noting a true thing - I enjoy fighting/arguing and I don't get daunted by celebrity as much as most. This matters for a couple reasons. The first is that (given that I like fighting) I want people to actually know that's not the only thing I'm considering (which you can confirm by looking at the part directly after that where I give up some ground based on a criticism I think was valid).
But I'm also noting it because it's important that people know that when I go to give up that ground, I'm not doing it because I'm afraid or because I work under the assumption that Scott's right all the time. He's right a lot of the time. I respect him. But he's also wrong sometimes, fully or partially.
On: Convoluted, dishonest arguments
I think you've concluded in some ways that I'm not arguing about what I say I'm arguing about - i.e., about what we mean by "I believe you", or about in what situations it's appropriate to say "I don't believe you". From what I can pick up - you invent a claim I don't make, that any of these things are literally impossible - you think that I'm lying about arguing the former in order to push the latter.
I think some of these things *seem untrue*, and I don't believe them. Scott thinks some of them *seem true* and believes them. I think he's implying a standard of "this is when you should believe people" that I don't like, and I'm arguing against it. And yes, it's very abstract, and takes a lot of words to do that. But I actually think it's important to do.
If you assume I'm lying about this (which in some cases isn't unreasonable), then I can see this looking like a lot of dishonest dissembling to get around to saying the much simpler "they are liars and I'd think they were lying no matter what, I'm unfalsifiable here". But once you get away from that to "How do we define belief, and when is it appropriate to believe or disbelieve, or demand others do the same" I think you would find there's not a great, compact way of doing that in a few hundred words.
With respect: no, I don't believe you. My conclusion is that if you had to pick between a lame, nothing-burger post that said some boring stuff about "hey, let's not jump to conclusions, maybe jhanas are real but maybe they're not" or a long, engaging post with lots of drama and details but terrible epistemics, you would write the latter. I weakly predict that you don't even think this is problematic. Which, hey, you do you. If I had more time, I'd probably keep reading your stuff.
Good luck, and if you want to keep up this chat, find me on Scott's substack. I do like your writing, I just wish it had more of what I want. :)
Madasario, to me it appears that in the above comment you are, first, claiming to read RC's mind about what he would do in a given circumstance, second, deriving an admittedly weak prediction from it, and, third, making a value judgment based on the first and second.
This is all well and good for any old internet comment, but the comment to which I am replying does not seem to live up to the level of epistemic mastery its author purports to have reached.
Ugh. I just got today's letter from Erik Hoel. I think I'm letting some of my disappointment in HIS epistemics bleed over into how I feel about YOU. In fact, I think I'm just very frustrated by bad epistemics lately, and I probably blame everyone involved, each individually, for the sum total of frustration caused by the whole. This isn't fair, and I'll have to think about what that means for me, but I wanted to at least put it out there.
What leads you to believe that RC isn't arguing in good faith?
Broadly summarized, I read this article as:
1) Admission of a weak part of his argument and a retraction based on Scott's response
2) Doubling down on a part of his argument he thinks is valid
3) A clarification of where he feels his argument was misrepresented
4) A proposal that he and Scott aren't using the same definitions of terms they're debating which might explain their different takes
5) A conclusion that I read as characterizing Scott pretty charitably, especially given that RC has stated disagreeing with Scott on a lot of social stuff
Those seem like the building blocks of good-faith argument: admitting when you're wrong or didn't put forth convincing evidence, clarifying and offering alternative explanations, defending your positions where you feel you are right, and not caricaturing your opponent but giving them a charitable, best-case characterization.
It's true that he's responding to Scott's response, which could be viewed as him trying to ride Scott's coattails for views or greater readership, but if he were doing that, why not be more incendiary and try to start some drama out of it?
This isn't necessarily the most important of the things I'm working out here, but it's interesting to me. I think I start from a place where I don't accept psychosomatic pain as easily as most, so stuff like this is very out-of-a-different-world for me. This is all on low-confidence footing, mind you - I'm pretty far from expert on this facet of things.
I don't have it at my fingertips (nor am I enough of an expert to know what to Google) but I believe there is a pretty decent-sized medical literature suggesting that pain is at least partially psychologically and socially mediated, in addition to being biologically mediated. So in some sense all pain is at least partially psychosomatic--it's biopsychosociosomatic, if the biopsychosocial theory of pain is correct.
I think the latter is more important than the former in this particular discussion. I think as a general rule "what do we consider a lie" is more important for norms, disincentives/incentives, and that kind of thing. "What do we consider ourselves to believe, what will convince us, what is being convinced" is harder - it gets a lot more wibblywobbly.
Agreed completely, and there's a pretty stark disconnect between Scott and RC here.
If Scott says that someone who claims to have experienced astral projection has been having lucid dreams, he's saying that he believes them.
If RC makes the *exact same statement* (lucid dreaming etc), he's saying that he doesn't believe them.
I think RC's position makes more sense. When someone claims they can astrally project, they're not telling you they have really vivid dreams; if they were, they'd just say that. Coming back and telling that person that you believe them when you actually just think they're having unusual dreams seems...I don't know, patronizing, maybe.
I don't think Scott is being patronizing, except possibly as a follow-on effect or something like that. The impression I get is that Scott's history of developing as a writer was sort of happening in an overall flame-war era, where charity was at all-time lows and people were just calling each other names all the time. And he's been pretty consistent on "be respectful, be charitable" since I've been reading him.
I think those kinds of reflexes make these kinds of things happen here. If he's reading the stuff I'm typing in a framing of "RC is being uncharitable, calling people liars" and applying that history to it, you'd expect to see something like this - a reflex to find ways to believe and be respectful.
I think there are some differences in both what I'm trying to say here (blame the writing skill) and how we think about this in general.
In terms of what I was specifically trying to say: Scott is telling a story where he has a friend or friends with what's implied to be one-ish of these, and it's close enough to being a WWJD-bracelet effect that they understand people mistaking it for that. I think there's at least some level of agreement that the people in the story, as represented, feel somewhat more credible than the TikTok people with 30 alters.
There's the element of that that you are talking about - that the motivations to make something like this up are more understandable if someone is going to make a brand out of it. But here I'm specifically talking about the alters - the size of the claim, how biologically implausible or plausible it seems.
20 more-flamboyant alters seems like a lot, but so does 1 cultivated one, to some. If the justification for the one is "listen, it's not like you are in his head, and the brain does weird stuff sometimes, we don't fully understand it", that's a sauce compatible with both goose and gander. It works for both; there's no bright line that lets you say "I believe 1 person could build this and have it be a real, substantial thing" but "20 is too many, this is impossible".
After that, we get into how it *feels*. And I agree with you that Scott's friends *feel* more credible as described than the TikTok people.
I think we will possibly see more about this in the future, but I think that while you are right (that Scott approaches this assuming a strong/reasonable form of all this kind of stuff), he also mostly thinks that something significant is going on here, beyond "here's a person who exaggerated" takes and similar.
Agreed. That's just good epistemics - it helps you determine whether jhana is real. Finding and dismissing bad arguments for jhana is terrible epistemics, and RC is great at it.
I think this is an interesting point of conversation. In my understanding of the steelman, the point of the practice is to make sure you are arguing against something strong - i.e. that you aren't strawmanning or weakmanning. It's almost like an anti-bullying thing.
But here, they've already been steelmanned - by Scott. And in terms of the risk of bullying, I think it's fair to note that my "target" here is Scott Alexander, a person who is about 200 times more famous than me and considered by many to be one of the intellectual powerhouses of our age, in a subject much more adjacent to his strengths than mine.
If the purpose of steelmanning is "to defeat a strong argument and render the victory meaningful and fair" I would argue I'm in the safe-zone here. And I'm struggling to see what else it could be, unless we are considering steelman something like "Assure you would be attacking a strong argument, but then don't attack it" which I don't think anyone thinks of it as.
If Scott says "lots of folks say they experience this, that should count as evidence for it" but you respond as if he'd said "you have to believe any experience that lots of people say they have" that's not steelmanning.
That is taking the thought to its logical conclusion. It's softer if you only allow that kind of evidence as a supplement to other evidence, but in the case of Jhanas, that's just about all of the evidence.
Are you saying we shouldn't believe anything based on second-hand accounts? Especially weird things that clash with your own lived experience? Especially if it takes years of training and exotic rituals in expensive locations to replicate the results?
If so, can I ask what you think about quantum entanglement?
The problem is twofold. First, there are lots of reasons to doubt something that sounds like woo, absent pretty good evidence that it's true. Second, there's a really good reason to *default* to doubt, in that people use these kinds of woo to trick people (for scams, attention, whatever) all the time.
So when presented with something that might be woo, "don't believe anything based on second-hand accounts" seems like a good heuristic. Separately, this comes across like special pleading on Scott's behalf, because there's lots of things that Scott would doubt (for instance many religious beliefs) that have similar and similarly strong grounding. If he doubts that second-hand experience, but says "my friend, who I trust, says X and so I see that as evidence" I think it's fair to question why he doesn't see these other reports as similarly strong evidence. RC is asking if Scott feels he can reject "you have to believe any experience that lots of people say they have" while keeping a logically consistent outlook on this.
My feeling is that Scott cannot do so, without opening himself up to "well *my* friend tells me X is real, so I believe *him*" counterclaims that he has no ability to refute.
Yet another (claimed) jhana-haver, checking in. With the unpopular opinion of "it's trivially obviously real, I verified it myself, but there are two major dynamics that lead to it being overstated, and most people who talk about jhana inadvertently shove those under the rug"
Issue 1: People selectively remember their successful meditation attempts.
It's exactly like fishing. People remember the times they caught a big fish, not the times they went fishing and didn't catch anything. Looking at ten previous first-jhana attempts, if two failed completely, three were lackluster, three were decent, one was pretty awesome, and one was sex-tier, if you riffle back through your memories associated with first jhana, the two most successful ones will be the most salient. The memory of the time where you tried to attain first jhana on a long airplane trip and mostly failed won't come up in most cases. If you're at a party and trying to chat with somebody about meditation stuff, you're going to bring up the "hits" that make good party anecdotes, and not bring up that time where you tried to meditate in bed but you fell asleep. Or that time where you tried to meditate on your couch but your back was too itchy. Or the time during the road trip where you managed to get it going pretty well, and it was a worthwhile experience, but could not reasonably be described as "sex-tier".
Descriptions along the lines of "comparable to sex", strike me as accurate. Even the one about blissing out in the Macy's afterwards strikes me as accurate, there is a really weird aspect of the afterglow where your sensitivity to other pleasures is enhanced and you find yourself spontaneously going "wow, that rock is really pretty and great" (or whatever else you're interacting with). Leigh Brasington went through an MRI, we know that the brain is doing some rather unusual stuff during these states. So any claim like the strong form of "they're making it up", is just outright false.
However, what these experiences do not strike me as, is typical. When asking somebody about jhana experiences, you're getting their highlights reel unless you're very specifically asking them otherwise, just like asking somebody about their fishing experiences. Ordinary-ass jhana-in-the-airport is quite nice, but nowhere near sex-tier.
Even beyond that, there's an issue where, in everyday life, feelings of excitement are usually paired with an exciting thing. A great thing happened to you, you say "it was very exciting", the other person correctly infers that it must have been pretty great. During jhana, feelings of excitement aren't paired with an exciting thing happening. It's possible to experience strong feelings of excitement, but your mind knows that it isn't actually paired with anything, and so it comes across as more of a physiological feeling, and the overall experience is generally a good deal more lackluster than it sounds like. And so you can truly say "it felt EXTREMELY exciting", the other person (falsely) infers that it must have been REALLY great, but feelings of extreme excitement that aren't about anything aren't actually that great.
Issue 2: The word "first jhana" is used to describe a pretty wide range of experiences.
Not too long after figuring out how to do it (ie, attain a mental state that is clearly abnormal while meditating), I recorded what happened, managed to consistently replicate it, and the state was something like a cross between being well-caffeinated, very tightly focused, highly excited, and with occasional waves of body chills, the same sort that music generates. Also it keeps turning off and on again like a car that refuses to properly start. Overall niceness level: like eating a pan of fresh-cooked brownies.
After getting it down well enough and practicing it for a while, it turns out there were a few missing mental steps, and if you don't try quite so hard to force it and just have fun with it, it's much nicer. Energy levels and focus go down a bit, the state gets more stable than it used to be, the excitement component gets a considerably larger dose of happiness added into it instead of being pure excitement, and there's a nice afterglow for ten or so minutes.
And now we get to stuff I haven't personally done.
Leigh Brasington, one of the main meditation teachers teaching this stuff, claims that the body chills/tingles become continuous, rather than coming in waves, when you REALLY hit first jhana, and the stuff I'm doing is just messing around with some pre-jhana states. Nick C's described experiences also seem around this level, if he's describing them as "10x better than sex".
And then there was a time where Leigh Brasington visited some of the really hardcore Thai forest monks, and after a week or so, (claimed to have) managed to get some extremely damn strong jhanas that near-perfectly matched up to the original descriptions in the Pali Canon.
There are many reasonable questions that could be asked at this point, like
"wait, if the word "jhana" is used to describe both an energetic/focused/nice state that can be worked out in a week and easily attained in a spare 20 minutes while waiting in an airport, and some sort of ultra-rare infinite cosmic bliss thing that you can only attain by meditating in a Thailand cave for five years with absolute silence, then isn't the word "jhana" uselessly vague and referring to way too many things?? It's one of the worst cases of motte and bailey I've ever seen!"
and
"Wait, if I have to be a meditation teacher and spend a week in a Thailand cave to get the highest levels of this, then aren't those highest levels totally useless for everyday life? Even if we buy that it's actually great enough to justify the time sink, what's the point of having mental motions for unbounded bliss if you can't practically use it without sitting in a cave for a week? Looks like wireheading."
and
"I extremely want to call bullshit on Nick C's experiences and people running around going "you dummy, jhana is totally real" when what they're describing is <1/10th of what Nick C is claiming IS NOT HELPING"
I agree with all of these points! When someone is claiming to have attained jhana, it's really important to try to figure out what they're claiming on this spectrum and not do the implicit motte/bailey thing.
I'm coming to the party kinda late so you probably won't end up reading this, but if you do, that'd be cool.
I’m unsure whether I’ve misinterpreted because some of the prose is muddled, so please forgive me if you’ve covered this. Here’s how I understand the dialectic, if we cleaned it up a bit.
You’re making an argument from analogy. You want to say that spoonies et all are relevantly similar to jhana people. You then argue using the more familiar spoonies example that we should default to low credence. You then apply that to jhana people, since their claims are relevantly similar from an epistemic POV.
Scott implicitly agrees with the framing of your argument, buying that the cases are analogous. However, he objects to your argument by claiming that we should actually believe the spoonies et al. He defends this claim by pointing out that psychosomatic pain is pain just as much as other pain, since pain is just the experience of the pain. This is meant to undermine a principle on which your argument is based. Specifically, he takes you to be claiming that we can infer from “lots of doctors have been unable to identify a physical cause of the pain” to “the patient is full of shit.” By explaining that psychosomatic pain isn’t the sort of thing that has that kind of physical cause, he takes himself to have undermined your principle and therefore crippled your overall argument. Thus, your argument to disbelieve the spoonies fails, and so does your argument to disbelieve the jhana people.
You respond by denying Scott’s claim that we should believe the spoonies. You do this by distinguishing between belief_1 and belief_2. Belief_1 is your kind, the better kind, and involves endorsing the content of what people say. Belief_2 is less clear, but seems to involve endorsing a related, different, and more plausible claim. It’s more charitable. You then argue that the word “belief” actually picks out the concept belief_1, because the patients themselves get mad when doctors say “I believe you and think it’s psychosomatic.”You argue that if “belief” really picked out concept belief_2, then they wouldn’t get so mad. After all, they’d presumably be happy the doctor believed them!
Here’s what Scott should say. The patients are making two separate claims. Claim 1 is about being in pain, which you should believe. Claim 2 is about the pain’s origin, which you shouldn’t necessarily believe. The patient, if he were rational, would realize he’s mad that the doctor doesn’t believe claim 2 and that his anger has nothing to do with whether the doctor believes claim 1. After all, the doctor DOES believe claim 1! So Scott is not using belief_2 at all. Rather, you’ve simply made a reasoning mistake by inferring from “patient mad at doctor” to “doctor doesn’t believe claim 1.”
That’s a very cleaned up version of the dialectic. The real thing is full of nonsense, like Scott’s bizarre complaints about how you use norms. Here’s why Bayesians shouldn’t get their panties in a knot over epistemic norms.
Even Bayesians want want to treat like cases alike, and you’re arguing that spoonies are relevantly similar to jhana, or at least close enough that our credences should be similar. When you argue for a norm, you’re not claiming it to apply everywhere. You’ve ALREADY argued that spoonies and jhana people are relevantly similar. The norm is meant to apply only to whatever group of people contains both and is epistemically at issue. And Scott implicitly admits that the cases seem to be analogous. So he really shouldn’t have beef with the norms stuff.
I do think, however, that your move to attribute an off-base definition of belief to Scott doesn’t work, for the reasons I outlined above. What you should’ve argued is that Scott’s assessment of what percentage of people are simply full of shit is just wrong. Or you could’ve argued that jhana claims are even more likely to be full of shit, since jhana presumably involves more complicated neural mechanisms than psychosomatic pain.
I think you can probably tell that I think highly of your thinking and work and want you to do well. From your POV, I am just another internet guy, so why should you give a shit about my views of your writing? Fair enough. It’s easy to ignore if you want. With that said, I have a couple of writing comments that I think would benefit your prose, coming from a philosophy PhD student.
I understand that you were pressed to get this article out quickly; the conversation moves on otherwise. But this piece needed a LOT more editing, or to be organized very differently. It needed some understandable jargon instead of all the hyphenated words. It needed a concise reconstruction of Scott’s argument all in one place. It needed you to point out which of Scott’s premises you objected to. This piece was structured as an “omg, a big famous dude attacked me and was kinda unfair, let me defend myself.” I do think he misread you at various points, especially with the baseball example. But writing a piece explicitly designed to defend the merit of your previous one came at a huge cost to readability and credibility. Even if Scott wanted to engage again with your ideas, he couldn’t, because this isn’t written clearly enough. This isn’t entirely your fault, because Scott’s piece wasn’t much clearer or better organized. At no point did either of you lay out exactly what the key points at issue were. It led to a muddy back-and-forth. I believe that focusing on the arguments themselves and trying to make them very explicit would work wonders.
On the writing bits, I read your critique as several distinct claims:
1. You need to be more concise overall.
2. You need to use "jargon"
3. You need to clearly summarize Scott's argument
4. You think I need to make specific takedowns of the believability of specific unfalsifiable claims
Of the four, I disagree with 2 and 4 the most:
2. I find that a very common failure mode of rationalist-sphere writers is to assume that everyone on Earth either is, or should be, one of the roughly .01% of people who are enamored with the very specific, very unpopular set of terms rationalists have developed for themselves.
Take the example above: you used the most popular term rationalists have (bayesian). Some percentage of people reading it actually understand Bayes well enough to know what you are generally getting at. The rest of them don't know what you are saying at all, unless they pick it up from context. I am of the opinion that this is bad - that joe-average should be able to understand my argument without ten minutes of googling, and that I've failed at my job if he can't.
I also find that usage of jargon tends to be a way to cover up bad thinking. If I think at best half of everyone in my audience understands "bayesian" in a general way, I then have to also do a calculation for the people who understand it *well enough to know if you are using it wrong* (low double digits, probably), and then for the people who both understand it that well and are tuned-in enough to actually assess it (low single digits).
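To make that funnel concrete, here's a sketch with invented percentages standing in for "half," "low double digits," and "low single digits" (illustrative numbers only, not measured data):

```python
# Back-of-envelope "jargon funnel" - every percentage here is made up for illustration.
audience = 10_000  # hypothetical readership

gets_the_gist   = round(audience * 0.50)  # "half" understand "bayesian" in a general way
can_spot_misuse = round(audience * 0.15)  # "low double digits" know it well enough to catch misuse
will_assess     = round(audience * 0.03)  # "low single digits" are also tuned-in enough to check

print(gets_the_gist, can_spot_misuse, will_assess)  # 5000 1500 300
```

On these made-up numbers, only a few hundred readers out of ten thousand are positioned to notice misused jargon - which is the point: the term carries weight with far more people than can actually audit it.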
I specifically avoid jargon for these reasons. I want my arguments to be understood and to carry exactly the weight they justify as I have argued them. I think jargon gets in the way of both.
4.
I disagree with this because I'm very much attacking not a specific belief, but a generalized principle of how we assess truthfulness and then talk about it. The construction of my first article reflects this. I don't point out reasons jhana is fake; I point out that things are sometimes fake, and that if we adopt the presented principle ("believe any claim a lot of people make, so long as there isn't a lot of evidence against it, even when there's no evidence for it besides the claiming") we'd have to accept the general class of things that principle justifies, which is pretty big.
The whole premise of this thing is that we are dealing with situations in which the evidence isn't really that strong one way or another once "was claimed" is factored out. "Is specific unfalsifiable claim X actually more falsifiable or supported than we thought" is a different argument I'm much less interested in.
1-3:
I think you've already noted somewhat that Scott's argument wasn't particularly condensable. I'm not criticizing him for being scattered here - I think this argument is weird, and there are a lot of elements to be organized. I'm assuming we will both discard individual points and refine toward a smaller version as time goes on; for the moment, I'm fine with the "knots don't start out untied" structure of this.
In terms of general concision, I don't really do that. Some people like it and some people don't; this is again a place where I think my audience is just different from what a person arguing from a "most people probably want what rationalists want" standpoint thinks it is. This will sound snarky/bragging but isn't meant that way: my numbers are actually pretty good overall, to the point where it's probably a bad idea to switch to a writing style I'm much worse at and like less.
Basically, you aren't wrong that some things could be shorter and that some people would prefer that; I get regular complaints about it, at least. But at the same time, I'm doing well and am loath to break something that's working just fine.
So this is the problem here, for me:
**Here’s what Scott should say. The patients are making two separate claims. Claim 1 is about being in pain, which you should believe.**
If I believed this, there wouldn't be an argument. What this does is take Scott's original argument (as I have read it) of "If enough people claim something, you should believe it" and changes it into "If enough people have claimed something, you should believe AT LEAST the most important part of it; it's not something you should question".
Now let's take Nick C as an example. In your model, I have to accept that "his pain is real" - that he is experiencing something very much like what he's describing, even if he has the cause wrong. In his case, he has an on-call bliss state of incredible power that has no downsides at all and several real-world non-meditative upsides: his diet is better, his substance abuse issues are better, and his brain is rewired to make coffee work better.
I can have this too, he says, for the low, low price of hundreds of hours of my free time - time I could use for other things.
I'm saying "I have listened to what he says, and he doesn't strike me as believable. I in fact do not believe his claim, and thus won't risk hundreds of hours of my time pursuing his claimed benefits, which I want as he has described them".
Your counterargument - that I should believe *just the part that makes me take the risk* - isn't compelling for me. And Scott's - that I should say I believe his claim even if I don't believe any of it before highly modifying it to a different claim with different implications - is similarly bad.
I think that both your viewpoint and Scott's are contingent on some combination of unspoken assumptions: that people either don't lie, or that it's socially impolite enough to say they do that you should pretend they don't.
My viewpoint is still pretty unchanged here (with an exception detailed below). Basically, I think it's reasonable to assess someone's unverifiable claim and believe it, but I also think it's reasonable to assess it and decide you *don't* believe it, and to move forward accordingly. In this case, I think it's fine for me to say "I don't find Nick C.'s claims believable, and I don't want to risk hundreds of hours on something, so I'm not going to."
I think that (provided one thinks lying is a thing that exists in a practical sense, where it's a real risk) this generalizes out to any situation where the truthfulness of someone's words has a practical implication.
On the exception: I think one of the big exceptions here can be on matters of pre-determined policy. So, for instance, if we acknowledged that there exists a class of "faker spoonies" representing some low-single-digit fraction of spoonies, and that there also exists a large number of spoonies who are usually ill-treated by the medical system, we might decide on "treat all of this like it's real, don't make judgments," because doing otherwise creates unacceptably large downsides for the "real spoonies".
On the writing: I have some thoughts, but I'll get to them in a separate comment.
Just to clarify, I don't agree with Scott here. In fact, I didn't take myself to be putting forward a position at all. (As an aside, I find myself very skeptical of all these phenomena, closer to your view.) I was merely attempting to clarify the back-and-forth, because given how you and Scott have written your pieces, it's not even obvious what the major disagreement is, besides one about how often people lie about unimportant stuff. I do think, however, that the way that you argued against Scott doesn't work for the reasons I outlined.
In your reply to my comment, you distort the argument I was toying with in several ways. I'll give one example. You suggest that if we accept "The patients are making two separate claims. Claim 1 is about being in pain, which you should believe," then we must also accept "If enough people have claimed something, you should believe AT LEAST the most important part of it; it's not something you should question." But this is just a bizarre claim. All I said was that if a person says A is true and B is true, we can say "I believe A but not B," and the person can get pissed that you don't believe B. Oftentimes in such scenarios, the pissed person won't even register that you believe A or won't even ask for clarification!
Re the writing: I don't mean to suggest jargon that keeps readers out. That's crap, especially on Substack. But if you're going to name something, you might as well use a name that doesn't require half the reader's working memory to remember. For instance, I don't think that 'belief_1' (not the best, I admit) alienates anyone who's already reading.
Take this passage of yours: "So let’s imagine that for any claim, a person might mean one of two things: Whatever they claimed, or that they know what the word “sandwich” means. And say a person comes to you and says “I can literally fly through the clouds; I have literally and not at all figuratively touched the sun”. Though they used those words, you now have to remember: they might just mean they know what a sandwich is."
You could've said: Suppose that for any utterance, a person might be referring to one of two meanings. The plain meaning is the one that any normal person would understand the utterance to mean. For example, the plain meaning of the utterance "I can literally fly through the clouds" is simply that I can fly through the clouds. By contrast, the sandwich meaning is that I know what the word 'sandwich' means. Thus, the utterance "I can literally fly through the clouds" under such circumstances is ambiguous between the plain meaning and the sandwich meaning.
My version is slightly longer, but it has the benefit of being understandable at a first read, which makes it functionally half the length. Additionally, it provides concepts that you can easily appeal to later that the reader can REMEMBER, like plain meaning and sandwich meaning. This is not difficult jargon. It doesn't alienate at all. In fact, with some effort, you could make it genuinely funny and therefore more accessible. And I think that it should be clear by now that this sort of jargon doesn't cover up bad thinking. It makes the thinking crystal clear by simplifying the amount of working memory required to get through a sentence, freeing up valuable RAM for evaluating whether the sentence is true.
Regarding the issue of summarizing Scott's argument, I'm puzzled by your take. You're writing a reply here. It's weird to do a reply and not be specific about what you're replying to; otherwise it's more of a second attempt to say what you were trying to say last time. And you sure talked an awful lot about Scott's objections for a guy simply taking another stab at it.
Re concision, I don't really care about word count. I care about clarity. The way you structured the thing made it a lot harder than it should've been to understand. When you structure a thing well, you often end up having to repeat yourself less, resulting in an overall lower word count.
I'm not trying to come at your numbers, man. And no, I'm not a rationalist (in the way that word is used these days), nor do I have a hard on for low word counts. I'm all for adding words if they are substantive, clarifying, or funny. That's why I generally like your writing and usually find it to be solid and refreshingly original. I suspect that you rushed this one, for understandable reasons.
Last comment. Scott and his ilk's writing has some serious structure problems as well. It's got this meandering quality that takes you along for the ride without making clear points. It irritates me to read a 4k word thing that could've been better communicated in 1k, or 1.5k if you add an obscene amount of flourish. It's a bit of a problem for any platform in which the writers are self-taught and pressured to produce rapidly. Some develop beautifully, some don't. I see your writing improving each time (with this article as the outlier, in my opinion) and am rooting hard for this page! I hope that my comments haven't come off as overly snobbish or backseat driver-y. I won't make further remarks on your page about the writing itself.
For what it is worth, I don't think this is too long at all.
Your point on lying reminds me of Swift's Gulliver's Travels, with the... horse people... who don't know what lying is and refer to it as saying the thing which is not. (Or something to that effect.) That seems to pretty closely match your sense of lie, and mine as well. Maybe it is a bit better to consider it merely misleading if they don't mean to lie, but I think it is fair to expect affirmative effort to make sure what you are saying is not incorrect or misleading in that way. I can't tell what someone's intent is all the time, but if they say something that is not so, the effect is the same for most purposes. Communication requires effort on both sides, and someone who doesn't care if they say what they mean is as troublesome as a liar, because you can't even predict when they will tell you the false thing.
My biggest beef with Scott, which is slightly your fault too, is that he didn't focus on what you meant. Sure, ok, RC uses the examples of spoonies and DID kids as evidence of people lying about their internal states, and Scott says "No no! Those are actually bad examples!" That only ends the discussion if one can't think of any other case where people knowingly lie about their internal states, or their ability to reach internal states, to others. I have an extremely hard time believing Scott, a mental health professional, has not run across instances where people lie about their internal states. No Munchausen's? No one lying on intake forms? No kids, in general? No one claiming to have a wonderful relationship that is obviously a giant mess they are trying to cover up?
It seems to me that Scott is unfair, probably knowingly so, in not saying "Ok, those are bad examples, but I see what you mean: people often do lie or at least exaggerate their internal states to get things from other people, such as approval or admiration, just like they may lie to cover up an embarrassing internal state, like a bad marriage." I think you, RC, did get pulled in a little arguing those examples instead of saying "Ok, fine, bad examples, but what about _____?"
Still, the outcome of sorting out the different meanings of lie and belief seems worthwhile!
EDIT: I slipped into an odd 3rd-person construct there... tried to clarify that some.
An intelligent back-and-forth between two individuals who mutually respect each other's opinions and sincerely disagree... Substack is kind of the best! Really enjoyed the read. I feel like this argument tends to fall in your favor for the broad strokes, and Scott's favor for the specific ones. So I guess you both kind of succeeded in your arguments?
A brief word on your last few paragraphs; don't be fooled like the rest of the world seems to be. Trust is not always generous, nor is skepticism always cold. Both are immensely good when placed correctly, and immensely bad when placed incorrectly. Deserved skepticism is a virtue, not a flaw. Undeserved trust should be outcast.
No prize short of Truth itself lies in distinguishing the two.
You're clearly a smart, thoughtful person, and you can do better than this. I think I'm going to have to unsubscribe, if I can't come to a better conclusion than "RC is in this for the clicks and eyeballs, not for the good-faith discussion."
So, I don't think I can necessarily cure the impression you have, but one thing I think would be helpful here would be to dig down a bit on where you perceive me as being bad-faith. I think that works in a bunch of scenarios - if I *am* doing bad-faith hit-mongering, then it shows other people more clearly how. But if I'm not - or if I'm doing it by accident - it would help me improve.
FWIW I think stuff like "how do we think and talk about honesty and dishonesty" is important - It's one of my longer-term focal points. So this isn't exactly new for me.
The "bad faith" verbiage is maybe not quite right. It was almost a direct response to the "I enjoy a scrap" quip; also, I recently had to unfollow someone who delivered good insights but who indulged in a lot of unrepentant sloppiness and whose username was literally something like "IfYouFollowMeYoullBeDisappointed" - he was right, and the occasional jewel was not worth the muck. So part of me wondered, if RC is optimizing for contrariness, can he also optimize for good epistemics and a good-faith shared pursuit of a map that reflects the territory?
I'm not sure how to say what I mean in few words without referencing the Sequences, so for TL;DR I'll say that intelligence is only valuable when it's not used to defeat itself, and unfortunately I see that in your writing. (Have you read the Sequences? If so, I'd love to hear what you think about how intricate thinking is especially vulnerable to tiny lapses in epistemic rigor.)
The longer version goes something like this. First, the previous article on this (Contra #1?) kind of got under my skin. Your thinking is pretty intricate, but in several places you kind of hide the ball, in ways that Scott already pointed out. The problem is that because your thinking is intricate, I can't really point out the ball-hiding without a long essay. Again, Scott wrote that essay better than I could. But it's a lot of work, and I wish you would organize your intricate thinking so that it would be easier to follow, refer to, and dispute. A good-faith thinker would see this as a huge plus, even though it would mean others could more easily point out flaws or contribute improvements.
Second, the big question at hand isn't "do some people lie about their internal states," or "are some people wrong about their perceived internal states." The question is pretty clear-cut and your counter-arguments only skim it, or only circuitously reach their conclusions. When a simple, clear-cut question gets an intricate, circuitous argument, that's evidence for bad faith. There are plenty of exceptions, but if you put yourself in my shoes, I hope you can appreciate the tedium of working carefully through every intricate argument, when usually I just find the same old flaws, like a subtle division-by-zero in a 20-page math proof. This doesn't really imply bad faith, but it sets up a kind of asymmetry that often accompanies bad faith, and even in good faith it reduces thoughtful engagement, for the same reason that most mathematicians won't bother going through a 20-page proof that concludes 2 + 2 = 3.
Bottom line: folks talked about mental states that you've never achieved doing things that you've never done. I can understand a healthy skepticism, with or without the curiosity to dive into it. But I find myself judging folks putting forth long, intricate, confident arguments that conclude it's impossible.
--
Again, I've enjoyed your stuff before. Please don't become another author who lowered their epistemic standards because they got substack famous - this trap is insidious.
I appreciated this comment as I feel like I understand your position much better now. I wanted to try my hand at what will hopefully be an illustrative hypothetical.
Say I'm at a bar or other social gathering place, and a stunningly attractive woman in a really nice dress takes a seat next to me and says "Hi there" in her best flirtatious voice. My gut feeling is that she wants something from me - maybe it's for me to sign up for something, or maybe it's my kidney. However, I would find the idea that she saw me, found herself helplessly drawn to me, and initiated contact so that we could get right into the business of forming a relationship with no ulterior motives hard to believe (even if that's the case).
The woman notices my seeming incredulity and makes her case: she's dressed nicely. She approached me. She's being overtly flirty, so that even if I've got below-average ability at reading social cues, I can probably pick up on these. In our current society, she's more at risk of intimate or dating violence than I am, so why would she take the chance? If we're evaluating the arguments based on clarity and evidence, the woman's got me beat: she's laid out a clear, concise argument with supporting evidence, and I'm basically saying "I don't know; this whole thing seems fishy."
Is this woman genuinely attracted to me and is trying to get to know me better? It's plausible! I think the case RC is making broadly, though, is that there's value in taking the "this seems hard to believe" stance, even if you can't present it in elegant rationalist terms, because the people who take the "hard to believe" stance - even if they get significantly fewer dates - also lose significantly fewer kidneys.
I don't think RC wrote 3k words titled "On Unfalsifiable Internal State Claims, Politeness, and Broadly Applied Principles" to say "I dunno, this jhana stuff seems fishy to me but I can't lay out my reasons with clear, concise argument and supporting evidence."
If not, what did he write it for? As I mentioned in a different post, if he's taking lazy pot-shots at a much larger blogger with a much greater following in order to further his own blog's success, I don't think he's doing a great job of it.
It's not my goal to persuade you, but there does seem to be a little bit of irony in that we've gotten into the weeds of RC's intent when he was writing the post, which sure seems awfully related to the claim of "I don't believe these people who claim to have attained jhana and enjoyed a number of incredible benefits from it." Not necessarily believing unfalsifiable internal-state claims, and making a case for why that might be valuable (particularly in contrast to Scott's well-reasoned argument for why it is valuable to do the opposite!), seems like the entire point of that 3k-word article.
We'll just have to take each other at our word, I guess! ;-)
So first, and this might be enough to disappoint you out of certain things entirely: I've tried reading the Sequences. I think they are fine. I don't find them to be gospel like some do, though I think there are good bits in there.
On: "liking a scrap"
So assume a world where Scott is always right - he's just, you know, writing the articles because even though he's always right, he still has to convince us, since we aren't quite as fast. And assume I'm wrong, which I'd have to be if I was disagreeing. If Scott minds a scrap and tries to avoid them, you don't get the article that you consider to shut me down pretty fully. Even in this world - where all Scott's approaches are the right approaches, and all his conclusions are the right conclusions - you want people around who don't mind a scrap to correct people who are wrong.
Now assume a world where Scott is sometimes wrong, fully or partially - where he's capable of error or bias. But just like our world, he's about 100 times bigger than anyone else in this space, much better respected, and much more of a long-term fixture. Despite being polite, he's a big, scary behemoth of a dominant force in this corner of the internet.
In that context, I'm noting a true thing - I enjoy fighting/arguing and I don't get daunted by celebrity as much as most. This matters for a couple reasons. The first is that (given that I like fighting) I want people to actually know that's not the only thing I'm considering (which you can confirm by looking at the part directly after that where I give up some ground based on a criticism I think was valid).
But I'm also noting it because it's important that people know that when I go to give up that ground, I'm not doing it because I'm afraid or because I work under the assumption that Scott's right all the time. He's right a lot of the time. I respect him. But he's also wrong sometimes, fully or partially.
On: Convoluted, dishonest arguments
I think you've concluded in some ways that I'm not arguing about what I say I'm arguing about - i.e. about what we mean by "I believe you", or in what situations it's appropriate to say "I don't believe you". Judging from the claim you invented for me (that any of these things are literally impossible), you think I'm lying about arguing the former to push the idea of the latter.
I think some of these things *seem untrue*, and I don't believe them. Scott thinks some of them *seem true* and believes them. I think he's implying a standard of "this is when you should believe people" that I don't like, and I'm arguing against it. And yes, it's very abstract, and takes a lot of words to do that. But I actually think it's important to do.
If you assume I'm lying about this (which in some cases isn't unreasonable), then I can see this looking like a lot of dishonest dissembling to get around to saying the much simpler "they are liars and I'd think they were lying no matter what, I'm unfalsifiable here". But once you get away from that to "How do we define belief, and when is it appropriate to believe or disbelieve, or demand others do the same" I think you would find there's not a great, compact way of doing that in a few hundred words.
With respect: no, I don't believe you. My conclusion is that if you had to pick between a lame, nothing-burger post that said some boring stuff about "hey, let's not jump to conclusions, maybe jhanas are real but maybe they're not" or a long, engaging post with lots of drama and details but terrible epistemics, you would write the latter. I weakly predict that you don't even think this is problematic. Which, hey, you do you. If I had more time, I'd probably keep reading your stuff.
Good luck, and if you want to keep up this chat, find me on Scott's substack. I do like your writing, I just wish it had more of what I want. :)
No problem!
Madasario, to me it appears that in the above comment you are, first, claiming to read RC's mind about what he would do in a given circumstance, second, deriving an admittedly weak prediction from it, and, third, making a value judgment based on the first and second.
This is all well and good for any old internet comment, but the comment to which I am replying does not seem to live up to the level of epistemic mastery its author purports to have reached.
EDIT: changed original "post" to "comment" (3x)
Ugh. I just got today's letter from Erik Hoel. I think I'm letting some of my disappointment in HIS epistemics bleed over into how I feel about YOU. In fact, I think I'm just very frustrated by bad epistemics lately, and I probably blame everyone involved, each individually, for the sum total of frustration caused by the whole. This isn't fair, and I'll have to think about what that means for me, but I wanted to at least put it out there.
What leads you to believe that RC isn't arguing in good faith?
Broadly summarized, I read this article as:
1) Admission of a weak part of his argument and a retraction based on Scott's response
2) Doubling down on a part of his argument he thinks is valid
3) A clarification of where he feels his argument was misrepresented
4) A proposal that he and Scott aren't using the same definitions of terms they're debating which might explain their different takes
5) A conclusion that I read as characterizing Scott pretty charitably, especially given that RC has said he disagrees with Scott on a lot of social stuff
Those seem like the building blocks of a good-faith argument: admitting when you're wrong or didn't put forth convincing evidence, clarifying and offering alternative explanations, defending your positions where you feel you are right, and not caricaturing your opponent but giving them a charitable, best-case characterization.
It's true that he's responding to Scott's response, which could be viewed as him trying to ride Scott's coattails for views or greater readership, but if he were doing that, why not be more incendiary and try to start some drama out of it?
re 10: https://www.cochrane.org/CD006380/SYMPT_drugs-treat-phantom-limb-pain-people-missing-limbs Sometimes not only do we prescribe drugs to treat people with psychosomatic illness, but they also work, at least for a while.
This isn't necessarily the most important of the things I'm working out here, but it's interesting to me. I think I start from a place where I don't accept psychosomatic pain as easily as most, so stuff like this is very out-of-a-different-world for me. This is all on low-confidence footing, mind you - I'm pretty far from expert on this facet of things.
I don't have it at my fingertips (nor am I enough of an expert to know what to Google) but I believe there is a pretty decent-sized medical literature suggesting that pain is at least partially psychologically and socially mediated, in addition to being biologically mediated. So in some sense all pain is at least partially psychosomatic--it's biopsychosociosomatic, if the biopsychosocial theory of pain is correct.
More usefully, I don’t think the big disconnect here is about what the word “lie” means. It’s more about what the word “believe” means.
I think the latter is more important than the former in this particular discussion. I think as a general rule "what do we consider a lie" is more important for norms, disincentives/incentives, and that kind of thing. "What do we consider ourselves to believe, what will convince us, what is being convinced" is harder - it gets a lot more wibblywobbly.
Agreed completely, and there's a pretty stark disconnect between Scott and RC here.
If Scott says that someone who claims to have experienced astral projection has been having lucid dreams, he's saying that he believes them.
If RC makes the *exact same statement* (lucid dreaming etc), he's saying that he doesn't believe them.
I think RC's position makes more sense. When someone claims they can astrally project, they're not telling you they have really vivid dreams; if they were, they'd just say that. Coming back and telling that person that you believe them when you actually just think they're having unusual dreams seems...I don't know, patronizing, maybe.
I don't think Scott is being patronizing, except possibly as a follow-on effect or something like that. The impression I get is that Scott's history of developing as a writer was sort of happening in an overall flame-war era, where charity was at all-time lows and people were just calling each other names all the time. And he's been pretty consistent on "be respectful, be charitable" since I've been reading him.
I think those kinds of reflexes make these kinds of things happen here. If he's reading the stuff I'm typing in a framing of "RC is being uncharitable, calling people liars" and applying that history to it, you'd expect to see something like this - a reflex to find ways to believe and be respectful.
Or perhaps the difference between objective and subjective truths? Falsifying a subjective truth is hard.
I didn't have time to write a short letter, so I wrote a long one instead.
Mark Twain
Yeah. For better or worse, there aren't that many people describing me as concise.
Fair enough, but he at least warned you up front.
Wow, never noticed the major kerning in the Substack font. "wamed" != "warned"
Interesting, those look pretty clearly different to me: https://files.jfmonty2.com/substack_kerning.png
Admittedly this can vary a lot based on your screen size/density, browser, OS etc, so I'm not denying your unfalsifiable experience here. :D
I think there's some differences in both what I'm trying to say here (blame the writing skill) and how we think about this in general.
In terms of what I was trying to specifically say: Scott is telling a story where he has a friend or friends with what's implied to be one-ish of these, and where it's close enough to being a WWJD-bracelet effect that they understand people mistaking it for that. I think there's at least some level of agreement that the people in the story, as represented, feel somewhat more credible than the TikTok people with 30 alters.
There's the element you are talking about - that the motivations to make something like this up are more understandable if someone is going to make a brand out of it. But here I'm specifically talking about the alters - the size of the claim, how biologically implausible or plausible it seems.
20 more-flamboyant alters seems like a lot, but so does 1 cultivated one, to some. If the justification for the one is "listen, it's not like you are in his head, and the brain does weird stuff sometimes, we don't fully understand it," that's a gander/goose-compatible sauce. It works for both; there's no bright line between saying "I believe 1 person could build this and have it be a real, substantial thing" and "But 20 is too many, this is impossible."
After that, we get into how it *feels*. And I agree with you that Scott's friends *feel* more credible as described than the TikTok people.
I think we will possibly see more about this in the future, but I think that while you are right (that Scott approaches this assuming a strong/reasonable form of all this kind of stuff), he also mostly thinks that something significant is going on here, beyond "here's a person who exaggerated" takes and similar.
Agreed. That's just good epistemics - it helps you determine whether jhana is real. Finding and dismissing bad arguments for jhana is terrible epistemics, and RC is great at it.
I think this is an interesting point of conversation. In my understanding of the steelman, the point of the practice is to make sure you are arguing against something strong - i.e. that you aren't strawmanning or weakmanning. It's almost like an anti-bullying thing.
But here, they've already been steelmanned - by Scott. And in terms of the risk of bullying, I think it's fair to note that my "target" here is Scott Alexander, a person who is about 200 times more famous than me and considered by many to be one of the intellectual powerhouses of our age, in a subject much more adjacent to his strengths than mine.
If the purpose of steelmanning is "to defeat a strong argument and render the victory meaningful and fair," I would argue I'm in the safe zone here. And I'm struggling to see what else it could be, unless we consider steelmanning to be something like "assure you would be attacking a strong argument, but then don't attack it," which I don't think anyone thinks of it as.
If Scott says "lots of folks say they experience this, that should count as evidence for it" but you respond as if he'd said "you have to believe any experience that lots of people say they have" that's not steelmanning.
That is taking the thought to its logical conclusion. It's softer if you only allow that kind of evidence as a supplement to other evidence, but in the case of jhanas, that's just about all of the evidence.
Are you saying we shouldn't believe anything based on second-hand accounts? Especially weird things that clash with your own lived experience? Especially if it takes years of training and exotic rituals in expensive locations to replicate the results?
If so, can I ask what you think about quantum entanglement?
The problem is two-fold. First, there are lots of reasons to doubt something that sounds like woo, absent pretty good evidence that it's true. Second, there's a really good reason to *default* to doubt, in that people use these kinds of woo to trick people (for scams, attention, whatever) all the time.
So when presented with something that might be woo, "don't believe anything based on second-hand accounts" seems like a good heuristic. Separately, this comes across like special pleading on Scott's behalf, because there's lots of things that Scott would doubt (for instance many religious beliefs) that have similar and similarly strong grounding. If he doubts that second-hand experience, but says "my friend, who I trust, says X and so I see that as evidence" I think it's fair to question why he doesn't see these other reports as similarly strong evidence. RC is asking if Scott feels he can reject "you have to believe any experience that lots of people say they have" while keeping a logically consistent outlook on this.
My feeling is that Scott cannot do so, without opening himself up to "well *my* friend tells me X is real, so I believe *him*" counterclaims that he has no ability to refute.
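To put that disagreement in crude Bayesian terms - a minimal sketch with invented numbers, not anyone's stated credences - the weight of "many people claim X" depends entirely on how often such claims get made when X is false:

```python
# How much testimony moves a prior depends on the false-positive rate of claims.
# All numbers here are invented for illustration.
def posterior(prior: float, p_claim_if_true: float, p_claim_if_false: float) -> float:
    """P(X is real | X is claimed), via Bayes' rule."""
    joint_true = prior * p_claim_if_true
    joint_false = (1 - prior) * p_claim_if_false
    return joint_true / (joint_true + joint_false)

# If people rarely claimed X unless it were real, testimony is strong evidence:
print(posterior(prior=0.10, p_claim_if_true=0.9, p_claim_if_false=0.05))  # ~0.67

# If woo-flavored claims are common regardless (scams, attention), it barely moves:
print(posterior(prior=0.10, p_claim_if_true=0.9, p_claim_if_false=0.50))  # ~0.17
```

On this toy model, trusting a particular friend just amounts to assigning their claims a lower p_claim_if_false than strangers on TikTok get - which is exactly the judgment call RC is saying everyone is entitled to make in either direction.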