One bit of advice you hear about blogging is to pick a focus - to choose one thing you are particularly knowledgeable about or good at and mostly write about that, so people know what your blog is about and what they can expect from it.
"I approach every study I read assuming potential fishiness and needing an awful lot of scientific rigor to shake me out of that pattern"
That sounds a lot like "scientist" if you ask me.
I think it sounds a lot like what we want "scientist" to be. There are a lot of failure modes for scientists, and one of the big ones is the "career-oriented guy" in a field where careers are driven by counter-intuitive, unlikely results; that makes you generate a lot of fishiness and ignore it.
I bring up tobacco control a lot because it's sort of a perfect storm of this - we know cigarettes are pretty bad for you, so everyone feels like they are fighting the good fight with any anti-tobacco finding, and nobody really wants to question a result that, even if inaccurate, pushes in the right direction. Combine that with career concerns and you get wildly implausible stuff (third-hand smoke, etc.) that never gets seriously examined, or doesn't get examined for decades.
I used to work for a quantitative hedge fund. We hired really, really smart scientists and statisticians and gave them all the resources they could ever want. A high-level description of their job was basically to develop hypotheses and test them. Incentives were aligned: once their results were verified in the real world they got paid lots of money, and if not, they got nothing. It would be difficult to come up with a more ideal setting for research and statistics to be done correctly. Whenever I read articles like this I think, yup, we solved those problems.
And bad science still happened. I can't overstate how insanely hard it is to do science correctly. Humans are fallible. So are all the processes we design. We could fix every single issue anyone has ever thought up and we'd still be far from the ideal. A healthy understanding of the scientific method incorporates this.
But hey, humans keep trying, and it's been working pretty well, on average, over long periods of time.
I think it's fair to point out that even if every problem I raised were fixed, there would still be some level of bad science slipping through. As an example, the bulk of the article deals with the idea that incorrect conclusions often slip through just because of data artifacts and random chance - you can't fully correct for that, and every correction you attempt is expensive.
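To make that concrete, here is a minimal simulation sketch (Python, with made-up sample sizes) of how random chance alone produces "findings": run many studies of an effect that is truly zero, and roughly 5% of them still come out "significant" at the conventional threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 1000     # hypothetical number of independent studies
n_per_group = 50     # hypothetical sample size per arm

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: the true effect is zero.
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: {false_positives}/{n_studies}")
# Expect roughly 50 out of 1000 -- about 5% -- from random chance alone.
```

And that 5% is the floor you pay even when everything is done honestly; pushing it lower (a stricter threshold, bigger samples, replication) is exactly the kind of correction that gets expensive.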
I really wish that schools, the media, et al. - really all of us - would internalize the reality that a particular field is best understood through consensus. Sure, a single study may shake the ground in a field (a "paradigm shift"), but understanding is really about integrating and synthesizing a breadth of findings. No wonder the public feels whipsawed when they constantly read contrary results (often unintentionally p-hacked and/or with small effect sizes) in click-bait articles.
I think that's mainly correct, but we need to recognize an important problem with it: the consensus-forming process isn't nearly as distributed and democratic as we'd like to believe. Even in theoretical physics, which is supposed to be the least susceptible to these problems, consensus often looks terrible in hindsight. A single prominent physicist can denounce an idea, or even an entire field, and then no one will touch it for decades.
Look at the early '80s. Edward Witten was such a powerhouse that whatever he said quickly became "consensus". Forty years later we are slowly realizing what happened, and that it wasn't a good idea. I'm probably exaggerating a little, and this is no slight against Witten. But the way institutions and research are funded usually means that consensus develops artificially quickly based on the input of a few prominent individuals. Any future research must conform if it is to have a chance of being funded, so the consensus has little competition.
A modern example of this is the social media publication route regarding COVID. Anything that does not toe the CDC's line (or really, just YouTube's, Facebook's, and Twitter's lines) gets pulled from their sites. Set aside the ethical debate over those actions and just consider what that does to the discussion. All the dissenting and non-consensus arguments disappear, leaving those who don't look past the major information outlets clueless that outside arguments exist. To the layman, and to many near-experts who aren't actively involved with the primary research areas and researchers, there is only consensus and no one questioning it. Until later, when "Whoops, it turns out we were wrong" gets admitted. Maybe.
Does this happen in journals? Well, I have had a paper recommended for rejection by a referee solely and explicitly because they didn't like the conclusions it suggested. I suspect I am not the only one. In economics, at least, it is well known that most journals lean one way or the other - though it's usually more accurate to say they lean one way or just less that way. Trying to publish contra that lean, or contra current popular fads, is really difficult if you are not one of the top names in the field.
Drat, hit post before I typed the last bit...
Consensus tends to look a lot more like "what people want to be true" in such circumstances than "what is most likely to be true." In the social sciences especially, this is pretty much entirely the case. I include my own field, economics, in that class. Data and statistics only protect against normative science so much when how you collect and classify the data is flexible.
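As a rough illustration of that flexibility (a sketch with hypothetical variables and cutoffs, not anyone's actual analysis): when the analyst can choose among a few defensible ways to subset or classify null data after seeing it, the false positive rate climbs well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n = 2000, 200   # hypothetical numbers

hits = 0
for _ in range(n_studies):
    treated = rng.integers(0, 2, n).astype(bool)   # "policy" vs. control; no real effect
    outcome = rng.normal(0, 1, n)                   # the outcome is pure noise
    age = rng.uniform(18, 80, n)                    # an unrelated covariate

    # Three "reasonable" classifications of the same null data:
    candidate_p_values = [
        stats.ttest_ind(outcome[treated], outcome[~treated]).pvalue,     # everyone
        stats.ttest_ind(outcome[treated & (age < 40)],
                        outcome[~treated & (age < 40)]).pvalue,          # "young" subsample
        stats.ttest_ind(outcome[treated & (age >= 40)],
                        outcome[~treated & (age >= 40)]).pvalue,         # "old" subsample
    ]
    if min(candidate_p_values) < 0.05:   # report whichever classification "works"
        hits += 1

print(f"Nominal false positive rate: 5%; with flexible classification: {hits / n_studies:.1%}")
```

No one in that loop is lying; the inflation comes entirely from getting to pick the classification after looking at the data.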
As a Brazilian, I read lots of Evangelicals complaining (regarding COVID-motivated restrictions on religious services and other activities) about science being "deified". What they meant, as far as I can tell, is that authorities were being too risk-averse, but since saying more people should die doesn't play well, they decided the scientists warning about how the disease spreads were the problem. Also, every time the government tries to deny scientific data (usually generated by its own technical bureaus, like the ones tracking deforestation and forest fires), Evangelicals have been on the front lines of the pro-government propaganda. So that is that.
As for the American situation -- and vaccination specifically -- well, there are the New Age, hippie-like types on the left, but Trump, who knows political expediency when he sees it, attacked vaccines during the Republican primaries for 2016. Not some rushed-out COVID vaccine no one had dreamed of back then, but time-tested vaccines. It is hard to be more anti-science than that and still have any right to turn the lights on at night. Yet he knew it would play well with the Republican right, and it did. At this point, it is hard to see any moral difference between these types and the Taliban, except maybe the Taliban is braver.
Back when I was teaching college I often would spend an entire class talking to this point. I would have LOVED to have this post to link to as a reading assignment! Thank you for writing it so that I don't have to in the future.
I appreciate that! I have very little in the way of formal education so it's always nice to hear I got it at least nearly-right.
If you are looking for a patron saint, John Ioannidis might be your man. His article "Why Most Published Research Findings Are False" from 2005 was a big perspective changer for me (link: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124 ). He gives a good explanation on EconTalk with Russ Roberts as well. There is just a lot wrong with how modern science gets done, which is almost entirely through statistical tests.
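The core arithmetic of that paper can be sketched in a few lines (illustrative numbers only, and ignoring the bias term Ioannidis also models): with the usual alpha = 0.05 and reasonable power, the share of "significant" findings that are actually true collapses once only a small fraction of the hypotheses being tested are true.

```python
# Back-of-the-envelope version of the Ioannidis argument (illustrative numbers only).
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a 'significant' finding is real, given the
    fraction of tested hypotheses that are actually true (prior)."""
    true_hits = power * prior          # real effects correctly detected
    false_hits = alpha * (1 - prior)   # null effects that cross p < 0.05 anyway
    return true_hits / (true_hits + false_hits)

for prior in (0.5, 0.1, 0.01):
    print(f"Fraction of hypotheses true: {prior:.0%}  ->  "
          f"chance a 'finding' is real: {ppv(prior):.0%}")

# 50% true -> ~94% of findings real; 10% true -> ~64%; 1% true -> ~14%.
```

The bias and multiple-teams terms in the actual paper only push those numbers lower.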