“AI is the future of peer review”

With peer review facing a mounting crisis, AI may be poised to reform science

According to a prominent medical journal editor, the crushing weight of millions of academic papers awaiting expert evaluation may soon push the scientific community into an AI-assisted future.

In an editorial published this week in Critical Care Medicine, a former top editor of one of the most prestigious journals in medicine argues that artificial intelligence should become an integral part of how scientific papers are evaluated before publication.

The scale of the challenge is astounding. This year alone, about 3 million articles will be indexed in major scientific databases, each requiring multiple expert evaluations. Adding in papers that are reviewed but ultimately rejected, the academic community will need to conduct roughly 10 million peer reviews in 2025, a number that continues to grow as biomedical research expands around the world.

For scientists already juggling research, teaching, and administrative responsibilities, peer review is unpaid labor that is often an afterthought. Many rush through evaluations or decline review requests altogether, creating bottlenecks in scientific publishing that can delay important findings and, in turn, their benefit to the public.

The solution proposed by Bauchner is simple: “We believe that peer review should include some form of preliminary AI review to assist editors in deciding whether to send articles for external peer review.”

The editorial addresses one of the uncomfortable truths of academia: human reviewers bring their own biases into the assessment process. Bauchner referenced a major study comparing review methods and noted: “When reviewers were aware of the author’s identity (single-blind), they gave more favorable ratings to submissions from countries with higher English proficiency and higher income.”

Bauchner acknowledges that AI may also be biased, but suggests that “the model can be taught to ignore who the authors are and where they are,” potentially providing a more objective initial screening process.

This idea is not just theoretical. Several independent companies already offer AI review services to authors before they submit manuscripts to journals. In one study cited by Bauchner, authors found that GPT-4’s feedback was “more helpful than some peer reviewers’ feedback.”

In addition to addressing bias, AI could also enforce standards that human reviewers often overlook. Bauchner notes that while journals frequently require authors to follow specific reporting guidelines, “there is no evidence that peer reviewers actually check compliance with these guidelines” — a tedious but crucial task that AI could perform consistently.

Perhaps most compelling, AI may be better at detecting potential research fraud, a growing problem as publishing pressures in academia intensify.

The transition to AI-assisted peer review faces obstacles, including resistance from traditionalists and concerns about algorithmic judgment. But as the avalanche of papers continues to outpace human capacity, Bauchner’s conclusion is firm: “As it continues to improve, it’s time to accept another approach to peer review, one assisted by AI, that may be more efficient and effective.”

With the spring meeting season approaching and thousands of new manuscripts entering the submission pipeline, academia may soon need to decide whether to cling to tradition or welcome AI as the newest member of the editorial team.
