Really powerful point about how journals become gatekeepers of their own reputation rather than neutral truth-arbiters. The conflict of interest is so embedded that even when corrections are technically possible, the friction makes them practically unlikely. I ran into something similar when flagging methodological issues in a physiology paper once; the whole 'low priority' treatment kinda forces you to let it go.
Thanks for sharing your comment.
We talk about academic publishing and the system's flaws a lot, but I feel the conflict of interest journals face in correcting or addressing their own papers is something that doesn't get much attention. Yes, journals are slow to issue retractions, but even just publishing a letter to the editor documenting major flaws, misinterpretations, etc. sends a message that undermines perceptions of the journal's own vetting process.
I'd be interested to hear about the physiology paper you had issues with, and your experience attempting to address it.
Science is absolutely NOT self-correcting. Journals - especially the high-profile ones - are utterly loath to publish any letters that point out serious issues that are plain to see. After all, both the editor and the reviewers should have noticed them. Things like checking whether the references say what the paper claims they say. Checking the basic logic of the arguments made in the paper. Or papers whose whole analysis is based on incorrectly using a method where the literature on the method makes it abundantly clear that the method simply cannot be used that way. I could go on.
I have tried diligently to address numerous such issues and the journal editors not only did not accept the letters but did not even begin to address the issues. They just blow off letter writers.
So, when people say they don't trust science, sometimes they have good reason. Even when those people are themselves not particularly scientific. When we letter writers have especially strong arguments about problems with papers, the journals are especially resistant to admitting anything.
We can do so much better!
First step: Establish a mechanism by which the journals have an incentive to address errors in their papers. Some kind of reward for correcting problems and punishment for refusing to do so. Right now they have an incentive to cover them up. There is currently no "free market" solution because the vast majority of readers and subscribers never realize how flawed some papers are.
NB - PubPeer can help some but it's not sufficient.
Everything you mention is so true. Peer-review quality varies tremendously, and stuff gets missed.
There have been numerous times when I read a paper and either knew the cited reference didn't support the claim, or thought to myself, "Really, is that claim accurate?" and then looked up the reference.
It's actually funny - a few years back I wrote a paper critiquing "digit ratio" (more on that some other time), and it tore apart the whole concept (complete with original data). I have since found it cited by proponents of digit ratio, claiming my paper serves as evidence to support their findings. I literally laugh aloud when I see it, and think "umm, no, I actually said exactly the opposite of that." But, people miss this stuff in peer-review... and if you try to correct it, well, good luck playing that game of chance!
I agree - we need to incentivize the journals to be open to fixing errors in the literature. That is definitely a major, complex topic, worthy of further discussion!
Suggestion 1. Write the authors directly. Journals should require published email addresses for all authors to facilitate direct correspondence from critics. The WSJ does this. In my experience, the authors may not answer.
Suggestion 2. Require comment options for all journal articles, open to all (except maybe publisher-blacklisted trolls), as is done on Substack and LinkedIn. We could have AI bots scarf up the comments and make quality statements about authors and journals (a rough sketch of what such a bot could do follows below, after suggestion 4).
Suggestion 3. (~yours) Make raw data available in all journal articles. For example, the raw, noisy data about low-level radiation exposure from nuclear workers and from atomic bomb survivors is NOT available. [Some is, but pre-binned; a small illustration of why that's not good enough also follows below.] Authors typically claim privacy rules, implying that only they, not you, are trustworthy.
Suggestion 4. Publish readable critiques, with NUMBERS, on Substack. Here's one of mine to be published Jan 29.
The Bad Science paradox in nuclear waste management strategies https://hargraves.substack.com/p/fe128a80-7efa-4138-9763-aa4302441e25
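On suggestion 2, here's a minimal sketch of what such a comment-aggregating bot could look like. Everything in it is hypothetical - the comment records, the field names, and the keyword heuristic standing in for a real AI classifier - but it shows the basic shape: collect comments per article, then roll them up into a crude quality statement per journal.

```python
from collections import defaultdict

# Hypothetical comment records, as a bot might collect them from article pages.
comments = [
    {"journal": "J. Example Sci.", "article": "doi:10.0000/abc",
     "text": "Reference 12 does not support the claim in paragraph 3."},
    {"journal": "J. Example Sci.", "article": "doi:10.0000/abc",
     "text": "Great paper, very interesting results!"},
    {"journal": "J. Example Med.", "article": "doi:10.0000/xyz",
     "text": "The statistical method assumes independence, which the data violate."},
]

# Crude stand-in for an AI classifier: flag comments that raise substantive,
# checkable issues rather than general praise or complaints.
SUBSTANTIVE_CUES = ("reference", "method", "statistic", "data", "figure", "claim")

def is_substantive(text: str) -> bool:
    lowered = text.lower()
    return any(cue in lowered for cue in SUBSTANTIVE_CUES)

def journal_report(records):
    """Tally substantive critiques per journal as a rough quality statement."""
    tally = defaultdict(lambda: {"total": 0, "substantive": 0})
    for c in records:
        bucket = tally[c["journal"]]
        bucket["total"] += 1
        bucket["substantive"] += is_substantive(c["text"])
    return dict(tally)

for journal, counts in journal_report(comments).items():
    print(f"{journal}: {counts['substantive']}/{counts['total']} substantive critiques")
```

A real version would obviously need the publishers' comment feeds and a proper language model rather than keyword matching, but the aggregation step is this simple.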
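And on the pre-binned data point in suggestion 3, a small illustration with made-up numbers of why binned counts are no substitute for raw values: once exposures are collapsed into bins, a reader can only approximate the originals and can never audit the analysis.

```python
# Hypothetical raw exposures (mSv) vs. the binned counts a paper might publish.
raw_doses = [0.2, 0.4, 1.1, 1.3, 4.8, 9.5]
bins = {"0-1": 2, "1-5": 3, "5-10": 1}

true_mean = sum(raw_doses) / len(raw_doses)

# Best a reader can do from bins alone: place every case at the bin midpoint.
midpoints = {"0-1": 0.5, "1-5": 3.0, "5-10": 7.5}
approx_mean = sum(midpoints[b] * n for b, n in bins.items()) / sum(bins.values())

print(f"mean from raw data:  {true_mean:.2f} mSv")    # 2.88 mSv
print(f"mean from bins only: {approx_mean:.2f} mSv")  # 2.92 mSv -- close here,
# but the reader cannot recover individual doses, refit models, or check for
# errors, which is exactly the problem with publishing only pre-binned data.
```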
Thanks for the comments!
The piece by Dr. David Allison touches on some of these as well. It sounds like his experience with writing the authors directly was mixed (at best). I think it depends on the individual author; some will take the issue seriously and be very collaborative... others will try to pretend the issue doesn't exist. But still, giving the authors an opportunity to respond is a reasonable solution. Personally, I have typically just followed the process of writing a letter to the editor, with the idea being that the piece is published in a public forum, so public commentary is appropriate and may generate further conversation.
I think public comments on the journal website can be a good idea, but agree that it could be hard to manage trolling, as well as comments that aren't trolling but are nonetheless nonsensical. I think about the constant debates we see in hot-topic areas like vaccines, COVID, and climate change and wonder what journal comment pages would look like if comments were open. Maybe it would be a good thing? Maybe it would be a bad thing? Hard to know.
Raw data seems like a must. Allison's article describes how many issues he had obtaining raw data, despite data availability statements. The easiest way to streamline that process is just making the data permanently available. I wish the culture of data transparency and availability had been mainstream years ago; if it had, I would have done this as standard practice. But I am doing it now whenever possible.
For the fourth point... YES. I think this is one of my favorite parts about Substack. I feel like there is a great community of highly trained scientists from various backgrounds here, who regularly do this (including you). We can eliminate the gatekeeping of journals and make our points available to the masses, complete with the opportunity for comments (as you also mention). I wish a place like this had existed (or been popular) many years ago... But it's here now, so it's a great tool.
Thanks again for your comments!