Friday, 20 February 2026

Guest post: Stealth corrections are still a threat to scientific integrity


Authors

René Aquarius, Floris Schoeters, Alex Glynn, Guillaume Cabanac

 

An update on stealth corrections

Last year, we published an article describing stealth corrections, a phenomenon in which a publisher makes at least one post-publication change to a scientific article, without providing a correction note or any other indicator that the publication was temporarily or permanently altered.

 

Now, we have expanded our database with newly identified stealth corrections. We also wrote a freely accessible COSIG guide describing how to report stealth corrections in a transparent fashion.

 

Difficult to pinpoint

Stealth corrections are, by nature, extremely difficult to track down; most are identified by science sleuths who happen to notice a mismatch between different versions of an article. A comprehensive overview is impossible, and one must assume that we have identified only a small minority of these cases.

 

For this update we applied the same pragmatic approach to documenting stealth corrections as previously: registering stealth corrections on PubPeer ourselves, asking around within the science-sleuthing community, and searching the PubPeer database for terms such as “no erratum”, “no corrigendum”, or “stealth” (repeat the search yourself).

 

Stealth corrections were further categorized into the following types:

  • Changes in author information (addition or removal of authors, changes in author affiliation, etc.);
  • Changes in content (figures, data or text, etc.);
  • Changes in the record of editorial process (editor name, date of submission, acceptance or publication, etc.);
  • Changes in additional information (ethics statements, conflicts of interest statements, funding information, etc.).

New cases

We found 32 published articles that were affected by stealth corrections in addition to the 131 we had identified last year. An overview of all stealth corrections (#1-163) can be found in the online database, which also contains the links to all accompanying PubPeer posts for additional detail. Table 1 shows the type of correction per publisher for the 32 new cases.

 

Table 1. Type of correction per publisher for newly identified stealth corrections. 

Publisher | Changes in additional information | Changes in author information | Changes in content | Changes in the record of editorial process
ACM | 3 | 0 | 0 | 0
Am Phytopathological Soc | 0 | 0 | 1 | 0
CV Literasi Indonesia | 0 | 0 | 1 | 0
Elsevier | 0 | 0 | 3 | 0
Impact Journals | 0 | 0 | 1 | 0
Int Soc Computational Biology | 1 | 0 | 0 | 0
MDPI | 0 | 0 | 0 | 19
Oxford University Press | 0 | 1 | 0 | 0
Springer Nature | 0 | 0 | 1 | 0
Taylor & Francis | 0 | 0 | 1 | 0
 

 

Why are stealth corrections still a thing?

Last year we wrote “post-publication amendments that are made silently, without a visible correction note, will give rise to questions regarding the ethics and integrity of the specific journal, editors and publisher, and might undermine the validity of the published literature as a whole”. Again, we have identified stealth corrections that might be used as a shortcut to ‘repair’ more serious issues. The three publishers with the most stealth corrections in this update were MDPI, Elsevier, and the Association for Computing Machinery (ACM).

MDPI was involved in 19 new stealth corrections. Sixteen cases (#141, #144-158) were registered in August 2024, too late for our initial pre-print and subsequent article on stealth corrections. All of these involved moving articles out of a ‘special issue’ and into a ‘section’. What stands out is that the special issue editor was also an author on all 16 of these papers. The Directory of Open Access Journals (DOAJ) requires that the proportion of articles co-authored by a special issue editor remain below 25% for each special issue. When it is higher than 25%, the DOAJ can delist the journal for not adhering to best practice, as detailed on their change log. Thus, by silently moving these articles out of special issues, MDPI retroactively lowered this percentage to comply with DOAJ rules, thereby preventing potential delisting of its journals. In September 2024, after publication of our pre-print, MDPI disputed that removing a special issue article from the digital special issue website can be considered a ‘stealth correction’. Possibly, the updated correction process (which now includes ‘minor corrections’) allowed MDPI to stop this practice entirely. We have not identified any recent cases, which is an encouraging sign.

In the remaining three cases (#135-137), the name of a peer reviewer was suddenly set to anonymous, while the contents of the peer review reports did not change. According to MDPI, this was done to comply with GDPR requirements. However, it happened only after the peer reviewer was identified as being part of a review mill. The reviewer claims on PubPeer that they were not involved in writing the peer review report. These cases show that a request for anonymity can conflict with the goals of transparency and strengthened research integrity.

Elsevier was involved in three new stealth corrections, all of them changes in content: images were silently replaced (#133, #139-141), according to PubPeer reports posted between December 2024 and May 2025. In response to our pre-print, Elsevier stated that they “do not correct articles without a formal notice”. However, in this update we again present clear evidence of major changes to the scientific record that went through without any formal acknowledgement in the form of a correction notice, directly contradicting Elsevier's earlier statement. All of these articles were eventually retracted, but only 4-12 months after the stealth correction was noticed, leaving a substantial window during which readers could interact with these flawed articles without any proper indication that there might be a problem.

ACM silently made multiple changes to the introductions of three conference proceedings papers written by the conference chair (#159-161). References were removed, and in one case the text was heavily altered. In all three cases, a notice of concern was also published to indicate that the peer review process had been compromised, and the publisher strongly urged people not to cite the conference papers. It seems as if the ACM retroactively tried to erase the citations to the conference papers, but did so by secretly making all kinds of alterations to the documents, which is far from ideal.

This update shows that some scientific publishers continue to use stealth corrections as a way to change the scientific record. Stealth corrections can undermine the entire enterprise of science; at the level of the individual article, the lack of a transparent correction minimizes the likelihood of those who read or cited the original version being informed of the change; on the macro level, the integrity of the published literature as a whole is compromised as readers never know for certain whether an article has been silently corrected or not. Meanwhile, there is still no consensus on issuing corrections.

 

Conclusion and recommendations

Stealth corrections are still problematic as they are sometimes used as a shortcut to ‘repair’ other integrity issues. Again, we stress that stealth corrections are notoriously difficult to find and that this update likely only shows chance findings by science sleuths. Correct documentation and transparency are of the utmost importance to uphold scientific integrity and the trustworthiness of science.

We still recommend:

  • Tracking of all changes to the published record by all publishers in an open, uniform and transparent manner, preferably by online submission systems that log every change publicly, making stealth corrections impossible.
  • Clear definitions and guidelines on all types of corrections.
  • Sustained vigilance of the scientific community to publicly register stealth corrections, now made easier by our COSIG guide.

 

Acknowledgements

We thank Dorothy Bishop for hosting this update on her blog and we thank all (anonymous) science sleuths who have found and reported stealth corrections: your work is much appreciated.

Note from DVMB: Comments are moderated on this blog.  They are usually approved if they are on topic and non-anonymous. 

Monday, 2 February 2026

An analysis of PubPeer comments on highly-cited retracted articles

PubPeer is sometimes discussed as if it is some kind of cesspit where people smear honest scientists with specious allegations of fraud. I'm always taken aback when I hear this, since it is totally at odds with my experience. When I conducted an analysis of PubPeer comments concerning papers from UK universities published over a two-year period, I found that all 345 of them conformed to PubPeer's guidelines, which require comments to contain only "Facts, logic and publicly verifiable information". There were examples where another commenter, sometimes an author, rebutted a comment convincingly. In other cases, the discussion concerned highly technical aspects of research, where even experts may disagree. Clearly, PubPeer comments are not infallible evidence of problems, but in my experience, they are strictly moderated and often draw attention to serious errors in published work.

The Problematic Paper Screener (PPS) is a beautiful resource, ideal for investigating PubPeer's impact. It not only collates information on articles that are annulled (an umbrella term coined to encompass retractions, removals, or withdrawals), but also cross-references this information with PubPeer, so you can see which articles have comments. Furthermore, it provides the citation count of each article, based on Dimensions.

The PPS lists over 134,000 annulled papers; I wanted to see what proportion of retractions/withdrawals were preceded by a PubPeer comment. To make the task tractable, I focused on articles that had at least 100 citations, and which were annulled between 2021 and 2025. This gave a total of 800 articles, covering all scientific disciplines. It was necessary to read the PubPeer comments for each of these, because many comments occur after retraction, and serve solely to record the retraction on PubPeer. Accordingly, I coded each paper in terms of whether the first PubPeer comment preceded or followed the annulment.  
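The coding step just described boils down to a date comparison. As a minimal sketch, with made-up field names and dates rather than the actual PPS or PubPeer export formats, it might look like this:

```python
from datetime import date
from typing import Optional

def code_paper(first_comment: Optional[date], annulment: date) -> str:
    """Classify whether an article's first PubPeer comment preceded its annulment."""
    if first_comment is None:
        return "no comment"   # article was never commented on
    if first_comment < annulment:
        return "preceded"     # flagged on PubPeer before the retraction
    return "followed"         # comment merely records the retraction

# Made-up example records, purely for illustration
papers = [
    {"first_comment": date(2022, 3, 1), "annulled": date(2023, 6, 15)},
    {"first_comment": date(2024, 1, 10), "annulled": date(2023, 6, 15)},
    {"first_comment": None, "annulled": date(2022, 9, 1)},
]

for p in papers:
    print(code_paper(p["first_comment"], p["annulled"]))
```

The hard part in practice, of course, is not the comparison itself but reading each comment to obtain the correct first-comment date, since post-retraction comments must be excluded.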

[Figure: Flowchart of analysis of PPS annulled papers]
 

I had anticipated that around 10-20% of these annulled articles would have associated PubPeer comments; this proved to be a considerable underestimate. In fact, 58% of highly-cited papers that were annulled between 2021-2025 had prior PubPeer comments. Funnily enough, shortly after I'd started this analysis, I saw this comment on Slack by Achal Agrawal: "I was wondering if there is any study on what percentage of retractions happen thanks to sleuths. I have a feeling that at least around 50% of the retractions happen thanks to the work of 10 sleuths." Achal's estimate of the percentage of flagged papers was much closer than mine. But what about the number of sleuths who were responsible?

It's not possible to give more than a rough estimate of the contribution of individual commenters. Many of them use pseudonyms (some people even use a different pseudonym for each post they submit), and combinations of individuals often contributed comments on a single article. Some of the PubPeer comments had been submitted in earlier years, when they were just labelled as "Unregistered submission" or "Peer 1" etc., so any estimate will be imperfect. The best I could do was to focus just on the first comment for each article, excluding any comments occurring after a retraction. Of those who had stable names or pseudonyms, the 10 most prolific commenters had commented on between 9 and 50 articles, accounting for 27% of all retractions in this sample. Although this is a lower proportion than Achal's estimate, it's an impressive number, especially when you bear in mind that there were many comments from unknown contributors, and the analysis focused only on articles with at least 100 citations.

Of course, the naysayers may reply that this just goes to show that the sleuths who comment on articles are effective in causing retractions, not that they are accurate. To that I can only reply that publishers/journals are very reluctant to retract articles: they may regard it as reputationally damaging, and be concerned about litigation from disgruntled authors. In addition, they have to go through due process, and it takes up a lot of resources to make the necessary checks and modify the publication record. They don't do it lightly, and often don't do it at all, despite clear evidence of serious error in an article (see, e.g., Grey et al., 2025).

If an article is going to be retracted, it is better that it is done sooner rather than later. Monitoring PubPeer would be a good way of cleaning up a polluted literature, in the interests of all of us. Any publisher can do that for free: just ask an employee of the integrity department to check new PubPeer posts every day; about 40 minutes and you're done. PubPeer also provides publishers with a convenient dashboard to facilitate this essential monitoring task.

It would be interesting to extend the analysis to less highly-cited papers, but this would be a huge exercise, particularly since this would include many paper-milled articles from mass retractions. I hope that my selective analysis will at least demonstrate that those who comment on problematic articles on PubPeer should be taken seriously. 

 

Post-script: 7 February 2026

One of the commentators with numbered comments below has complained that I am censoring criticism, and has revealed their identity on LinkedIn as Ryan James Jessup, JD/MPA.  My bad - I usually paste a statement at the end of a blogpost explaining that Comments are moderated so there can be a delay, but I accept nonanonymous comments that are polite and on topic.  Jessup didn't take up my offer of incorporating his arguments in a section at the end of the blog, so I have accepted them and you can read them in the Comments.

I actually agree with a lot of what he says, but some points I disagree with, so here are my thoughts.

Points 1-2. He starts by stating the piece confuses correlation with causation. On reflection I think he's right. The word "role" in the title is misleading, and I have accordingly changed the title of the post from "The role of PubPeer in retractions of highly-cited articles" to "An analysis of PubPeer comments on highly-cited retracted articles".

3.  He argues that selection of highly-cited papers was done to fudge the result because these papers are most likely to be noticed and commented on.  The  actual reason for selecting these papers was to focus on outputs that had had some influence; many people assume PubPeer commentators just focus on the low-hanging fruit from papermills, which nobody is going to read anyhow. There is nothing to stop Jessup or anyone else doing his own analysis using another filter to see if these results generalise to less highly-cited articles. It involves just a few hours of rather tedious coding. Maybe sample a random 800 articles?  

4. He argues that "annulled" papers covers various categories.  I am glad to be able to clarify that in the sample of 800 papers that I analysed, all were retractions.

5. He disagrees that my opinion of whether PubPeer comments were factual and accurate has any value, and that they could be defamatory or otherwise falsely imply misconduct.  From my experience, I reckon it would be difficult to get such material past PubPeer moderators, but if he can provide some examples, that would be helpful.  

6. He says the coding method is subjective: "They read comments and decide whether the first comment preceded or followed annulment". The dates of retraction notices are provided in the PPS, so this is just a matter of checking whether the PubPeer comment (also dated) appeared before or after that date.

7. Re the identification of "top 10 sleuths".  I noted the limitations inherent in the data, so I am not sure what Jessup is complaining of here. The fact remains that a small number of individuals have been very effective in identifying issues in highly-cited articles prior to their retraction.

8.  Jessup argues that I'm saying that “journals don’t retract lightly, therefore PubPeer must be right”.  The first part of that argument has ample evidence. If he is aware of cases where PubPeer comments have indeed led to inappropriate retractions, then he should name them.

9-11. I do actually have some understanding of how retraction processes work in journals, but my concern is the failure of many journals/publishers to initiate the first step in the process.  I think we're in agreement that the current system for retracting articles from journals is broken. We also agree that PubPeer comments should be regarded as tips. My suggestion is simply that if publishers have a useful free source of tips, they should use it. A few of them do, but many don't seem motivated to be proactive because it just creates more work.

The prolific PubPeer commenters that I know would love it if the platform could be used primarily for civilised academic debate, as was the original intention. Unfortunately, science can't wait until the broken system is repaired; we do need to clean up a polluted literature. I would add that the idea that those who comment on PubPeer are doing it for the glory is laughable. The main reaction is to be ignored at best and abused at worst. They are unusual people who are obsessive about the need to have a reliable scientific literature.

 

 

 

 

Monday, 5 January 2026

An Open Letter to the BMJ Editorial Board

 
to: Editor in chief, Kamran Abbasi, kabbasi@bmj.com
     Executive editor, Theodora Bloom, tbloom@bmj.com
     Head of research, Elizabeth Loder, eloder@bmj.com
     Head of journalism, Rebecca Coombes, rcoombes@bmj.com
     Publication ethics and content integrity editor, BMJ Journals, Helen Macdonald, hmacdonald@bmj.com
     Handling academic editor, Juan Franco, juanfranco@bmj.com
 
Dear Editors

We are writing to ask the BMJ to respond swiftly to the numerous issues with the article by Attar et al: Prevention of acute myocardial infarction induced heart failure by intracoronary infusion of mesenchymal stem cells: phase 3 randomised clinical trial (PREVENT-TAHA8) BMJ 2025; 391 doi: https://doi.org/10.1136/bmj-2024-083382 and retract this article without further delay. Although an Expression of Concern was added to the online version of the article on 12 November, this is not included in the PDF version, so readers who rely on the PDF will be unaware of the concerns. In contrast to the substantial publicity for the original article, no mention of the Expression of Concern has been posted by the BMJ on X, Bluesky or Facebook. 

The article was published on 29th October 2025. On 1st November, Dorothy Bishop looked at the associated dataset deposited on Figshare and immediately spotted that the reported ages for participants were not consistent with the inclusion criteria, which specified age below 65: 127 of the 396 participants in the dataset had ages of 65 and above, with the oldest being 86. A check of the means/SDs reported for age in the article showed them to be consistent with the dataset. Thus, this dataset, with one third of participants aged over 65, appears to be the one used to produce the results reported in the article. These concerns were immediately posted on PubPeer and reported to the editor, Dr Juan Franco.

On 2nd November, Nick Brown added a comment on PubPeer showing repeating patterns in the deposited dataset. These are incontrovertible evidence of fabricated data, as such repetitive patterns are vanishingly unlikely to have occurred by chance. Dorothy Bishop wrote again to Dr Franco, drawing attention to this new evidence.

The lead author, Dr Armin Attar, replied on PubPeer to say 

"During an internal audit, we have noticed some inconsistencies in the baseline demographic data of the study. Our team is currently conducting a detailed review to identify the source of these discrepancies. This process is expected to take approximately two to three weeks."

A few days later, he wrote:  

"... we have initiated a full technical audit of our data assembly and analysis pipeline. We are specifically investigating the reported 101‑record cycles, systematic trends in WBC/Hb/Plt, and baseline age discrepancies across outputs, and will document root causes and corrections where needed. The complete audit package will be posted within 2–3 weeks, and any confirmed errors will be transparently corrected via the journal."

At the time of writing, some 8 weeks later, no audit package has been publicly posted. And indeed, it does not appear to be possible for an “audit” to rescue the situation. It is not a case of a few odd datapoints, but rather that the dataset used for the analyses in the article shows numerous hallmarks of fabrication.  If the dataset is "corrected" then the analyses in the article will be false.

Over the next couple of weeks, additional concerns were raised by different commenters on PubPeer  and in letters and rapid replies to the BMJ. Alison Avenell emailed a full summary of all issues from PubPeer, BMJ rapid responses, submitted letters and additional concerns to Drs Abbasi and Franco on 3rd December, but no acknowledgement has been received. In brief, as well as numerous additional signatures of fabricated data, the following points were noted:
  • The review record shows that reviewers commented only on the first version of the article, even though some substantial issues were raised, particularly by reviewer Manoj Lalu.
  • The deposited dataset was added only after final acceptance, so not reviewed.
  • Problems with the registration, subsequently dealt with by adding a new protocol to the final version of the article. A citation to the original protocol was removed from the final version of the paper.
  • Numerous changes to the actual study start date in the clinical trial registry (https://clinicaltrials.gov/study/NCT05043610), which were undisclosed and post-dated study completion. According to the original trial registration and all versions up to version 4, dated October 7th 2024 (close to manuscript submission), as well as the published protocol (doi: 10.1186/s13063-022-06594-1), the trial was retrospectively registered. The authors altered the actual study start date and misleadingly presented the trial as prospectively registered. It is not plausible that the authors only realised in 2024 that the trial had actually started in September 2021 rather than January 2021. The BMJ, like all ICMJE journals, explicitly does not publish retrospectively registered randomised trials.
  • Secondary outcomes added after data collection
  • Changes to the author list. In particular, addition of two authors (Anthony Mathur and Sheik Dowlut) who are listed as "involved in conceptualisation, methodology, patient management, procedures, administration, and supervision", despite not being listed on the original study registration, and being based in the UK.
  • Undisclosed financial COIs by co-author Anthony Mathur.
  • Undisclosed COI by co-author Massoud Vosough
  • Inconsistencies for sample size, randomisation block size and period of follow-up between registration documents, published protocol and paper.
  • Concern about retrospective ethical approval and potential medical risks of the procedure for delivering cells.
Most of these points were posted prior to an Editorial written by the Editor-in-Chief about this case.

While we appreciate that journal editors and publishers must follow clear processes that take into account the authors' viewpoint, it should be amply evident to anyone with expertise in this area that the problems with this article go far beyond anything that could be dealt with by a correction. For a summary, see comment 64 in the PubPeer thread. It should have been clear, just on the basis of the first two PubPeer comments (reported by email to the editor), that there were serious issues with this article, and yet here we are, over two months later, with no retraction.
 
The Editorial concluded: 
"We’re in this together, and we welcome your ideas. The goal is to act in the best interests of the public, to devise more robust processes and new solutions that indeed allow evidence and data to rebuild confidence." 
In response to this invitation, our ideas are:
  • The BMJ should monitor PubPeer comments on articles and take action when credible concerns are raised.
  • The BMJ should honor its own commitment to the ICMJE prospective registration mandate, as a founding member of ICMJE. It is unfortunate that a retrospectively registered clinical trial made it through peer-review in 2024.
  • The BMJ should adhere to COPE Guidelines, which state: 
"To minimise harmful effects and uptake (eg, citation of erroneous work, acting on their findings, or drawing incorrect conclusions), retraction notices should be published as soon as the editor is confident that the publication is seriously flawed, misleading, or falls into any of the categories described above. If there is a delay in making that determination, editors should publish an expression of concern [...]. When an editor has lost confidence in the results or conclusions of an article, they should not delay retraction solely because the authors or their institutions are not cooperative or responding promptly."
We believe it is particularly important to retract this paper immediately, not just to maintain integrity of the scientific record, but because replication by other clinicians could carry serious risks for patients.

Yours sincerely,

Dorothy Bishop, Emeritus Professor of Developmental Neuropsychology, University of Oxford
Alison Avenell, Clinical Chair in Health Services Research, University of Aberdeen, UK
Mark Bolland, Associate Prof of Medicine, University of Auckland.
Nicholas J L Brown, Department of Psychology, Linnaeus University, Sweden
Ioana Alina Cristea, Associate Professor of Clinical Psychology, University of Padova
Sophie Hill, PhD student, Department of Government, Harvard University
Ian Hussey, Senior Lecturer, University of Bern
Thomas Kesteman, Oxford University Clinical Research Unit, Vietnam
Patricia Murray, Professor of Stem Cells and Regenerative Medicine, University of Liverpool, UK
Maarten van Kampen, ASML BV, The Netherlands
Peter Wilmshurst, Cardiologist

 

 P.S. 

5th Jan 2026

We've had a prompt response from the EIC:

Dear All,

Thank you for writing to us. We continue to investigate a range of issues related to this paper.

We will make a full decision once we have completed our due process.

Yours sincerely,

Kamran Abbasi FRCP Edin Lon
Editor in chief, The BMJ