Considering AI in Academic Publishing


Algorithms and other forms of artificial intelligence (AI) are pervasive today. Every time we interact with technology—our phones, tablets, and computers to name a few tools—we are triggering various AIs to do work for us. Think about how often you use Google or another search engine every day. 

Now take a quick look at this Google live stats page. Roughly 40,000 searches per second. Each one powered by an AI that takes our words, how we organize them, and the history of our searches to serve up the most relevant results. It can be easy to lose sight of just how monumental algorithms are.

AI in Academia

What does the prevalence of AI-powered search have to do with your academic publishing?

Aside from the implications for research, publishers are refining and employing AI like never before. Any process that adds efficiency will appeal to for-profit businesses, which academic publishers decidedly are. But those efficiencies also bring some benefits for authors and researchers.

For example, the ability to sift through source materials has improved exponentially. For anyone old enough to remember card catalogs in dusty library filing cabinets, Google represents an entirely new way of thinking. We research daily—everyone, not just academics. Over the previous weekend I researched sneakers, crepe recipes, reading lights, a handful of authors, and the best ways to donate to Australian bushfire relief. And those are just the searches I can remember off the top of my head.

The AI behind searching has become ubiquitous. 

Bringing AI to Peer Review

The benefits of utilizing algorithms in research are, generally, clear and positive. From a screen and keyboard we can access thousands of pages and hours of source material. For a long time, AI served an important but singular role in academic research. Then, recently, an academic publisher of some renown made a major announcement.

Frontiers, an open-access publisher and open-science platform, announced the implementation of AI in their peer review process.

Now, this announcement is not exactly new; Frontiers originally published their notification a little over a year ago. But it remains significant: first, because little has been said about AI tools in peer review since late 2018, and second, because AI in the peer review process presents concerns that writers, reviewers, and publishers need to consider.

The New Peer Review Gatekeeper

For years, profit-driven publishers have held the keys to the castle. The decision to publish this paper and not that one could reflect financial goals before educational ones. That is certainly not always the case, but the very possibility calls into question the entire process. If peer review is meant to ensure the highest levels of academic integrity, then the process itself must be equally credible.

Introducing AI into peer review raises concerns about the integrity of this process. From Frontiers’ announcement, we see that they task their AI with two purposes:

  1. Quality Control – the algorithm can review the content prior to a true peer review to ensure it meets predetermined standards.
  2. Reviewer Identification – while checking the manuscript against those quality standards, the AI can identify the best reviewers and pair them with the manuscript for the best possible review.

AI Quality Review

The first and most troubling aspect of AI in peer review is the quality standard. This can be a huge benefit to a publisher inundated with content that needs to be winnowed down to the best and most relevant. Instantly discarding plagiarized work, filtering out content lacking adequate sources, and flagging content with questionable images are all powerful and useful ways AI can help publishers. The work of a few moments for an AI might take a person hours.

But what should give any academic pause is the uniformity of an algorithm designed to pre-filter manuscripts. Clearing out the chaff is useful. What about the innovative? The artistically unique? What about a manuscript that is intentionally abnormal in order to convey abnormal information?

How content is structured matters, and an AI may lack the finesse to discern valid and important content that doesn’t conform to the template it expects.

Frontiers aims to address this concern by flagging, rather than outright discarding, filtered content. If the AI sees ethical problems or design issues, it assigns a person to review the manuscript and make a final determination.
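To make that flag-and-route idea concrete, here is a minimal sketch of what an automated pre-screen could look like. The field names, thresholds, and the assign_to_human hand-off are illustrative assumptions on my part, not a description of Frontiers’ actual system; the point is simply that nothing gets rejected outright, only passed along or routed to a person.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical manuscript record; the fields are illustrative only.
@dataclass
class Manuscript:
    text: str
    reference_count: int
    plagiarism_score: float          # 0.0-1.0, from some external similarity check
    image_flags: List[str] = field(default_factory=list)

def assign_to_human(ms: Manuscript, reasons: List[str]) -> None:
    # Placeholder for an editorial queue; in practice this would notify an editor.
    print("Needs human review:", "; ".join(reasons))

def pre_screen(ms: Manuscript) -> str:
    """Return 'pass' or 'flag' -- flagged manuscripts go to a person, never to the bin."""
    reasons = []
    if ms.plagiarism_score > 0.30:   # threshold is an assumption, not a published figure
        reasons.append("possible text overlap")
    if ms.reference_count < 10:      # arbitrary minimum, purely for illustration
        reasons.append("few cited sources")
    if ms.image_flags:
        reasons.append("questionable images: " + ", ".join(ms.image_flags))

    if reasons:
        assign_to_human(ms, reasons)
        return "flag"
    return "pass"

# Example: a thin reference list gets flagged for an editor, not discarded.
print(pre_screen(Manuscript(text="...", reference_count=4, plagiarism_score=0.05)))
```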

Solving From a Marketer’s Perspective

Another problem to consider in light of AI quality gatekeeping is the content marketer’s mindset. Marketers are well versed in the AI world; they’ve spent decades looking for ways to ‘talk’ to the AIs and garner their favor. Yes, every wise marketer will tell you the greatest concern is the customer (or for an academic, the reader), but that customer/reader will never see content an AI deems unworthy.

Two problems arise here. The first is the loss of valuable content due to algorithmic incompatibility. Using technology to filter content will always carry this risk; there will never be a world where it isn’t a problem. It may well be a tiny problem, but one to be aware of nonetheless.

The second and more concerning problem is the opportunity to game the AI. This too may be highly unlikely, but it is even more worrisome. Years ago, nefarious marketers could trick Google into ranking their pages over others with deceptive tactics like keyword stuffing. While algorithms have come a long way, the concern remains.

Place this in the context of academic publishing: imagine a false or inaccurate piece of research that passes through the AI. We then rely on the actual (human) peer review to catch the content and move it out of the process. But if we rely on technology as a gatekeeper, we also run the risk of trusting that AI completely. The onus is back on the person reviewing the document. What, then, did the AI gain us?

Reviewer Identification

If the gatekeeping role of AI in peer review is concerning, the reviewer identification role helps make up for that fear. Algorithms excel at comparing and pairing data. In that sense, using an algorithm to pair a manuscript with a reviewer is its most efficient and beneficial use.

An established reviewer will have a long list of previously reviewed manuscripts, their opinions on those reviews, and their own publications. That history is an easily discovered and viewed set of data. Matching it against an incoming manuscript offers an opportunity to get the best-qualified reviewers working on it.
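As a sketch of how that pairing might work, the snippet below ranks a few made-up reviewers by the similarity between a manuscript abstract and each reviewer’s publication history, using off-the-shelf TF-IDF and cosine similarity from scikit-learn. The reviewer records are invented for illustration, and a real system would fold in review history, citation data, and conflict-of-interest checks rather than text similarity alone.

```python
# A minimal sketch of manuscript-reviewer matching via text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer profiles built from their own publications and past reviews.
reviewers = {
    "Reviewer A": "machine learning for peer review; natural language processing of manuscripts",
    "Reviewer B": "coral reef ecology and marine biodiversity surveys",
    "Reviewer C": "algorithmic bias, research integrity, and scholarly publishing",
}

def rank_reviewers(manuscript_abstract: str, top_n: int = 2):
    """Rank reviewers by TF-IDF cosine similarity to the manuscript abstract."""
    names = list(reviewers)
    corpus = [manuscript_abstract] + [reviewers[n] for n in names]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return sorted(zip(names, scores), key=lambda x: x[1], reverse=True)[:top_n]

print(rank_reviewers("Using natural language processing to screen manuscripts in peer review"))
```

Even this crude version surfaces the topical match; the real value lies in the breadth and quality of the reviewer data the publisher feeds into it.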

AI’s Place in Academia

Algorithms are here to stay. We have little choice but to embrace technology as it continues to grow and evolve. And for most academics, the variety of tools and streamlining that technology offers are a boon. We can research more quickly, find source material more efficiently, correct grammar on the fly, and collaborate from anywhere. 

But as we embrace the many benefits of technology and AI, we must also remain skeptical. Not doubtful or cynical, but skeptical enough to continue asking the questions that academia thrives on. 
