In a 2016 survey by Wiley, 69% of reviewers said they became reviewers as a result of publishing a paper on a relevant topic. Another key finding was the need to grow the reviewer pool, especially in high-growth and emerging markets and among early career researchers. The same survey found that the top factor influencing the decision to accept a specific review invitation was the prestige and reputation of the journal. Only a few journals are afforded these accolades, which puts pressure on the majority of journals, where the vast majority of review happens and most research is published; these journals increasingly struggle to attract appropriate reviewers in a timely fashion. Finally, the survey identified reviewer training as a key need, and the best training method has historically been exposure to the review process on topics that precisely match a reviewer's expertise. Artificial intelligence creates new opportunities for publishers to identify highly relevant reviewers instantly.
Editors, whether they work in-house at publishers or hold a full-time job as academic researchers, are responsible for finding the most relevant researchers to review manuscripts. Some editors search discovery tools and abstract databases, hoping to strike upon a handful of relevant reviewer candidates. Others keep a list of previous reviewers within the field whom they contact again and again. Sometimes editors run keyword searches in their online submission and review system, looking for authors based on the handful of arbitrary keywords that corresponding authors have keyed in. Most often, editors do all of these, and for most journals every scenario carries an increasing risk of rejection from reviewers who find the manuscript too different from their own research, or who simply lack the time to review yet another manuscript.
The frustrations of reviewers, editors, authors, and publishers can be significantly reduced by using AI to analyze, word by word, what people have published in order to grow the reviewer pool. Ultimately, it is still the editors’ responsibility to assess reviewers, but AI can present editors with greater choice. With AI, editors can select a manuscript and automatically receive suggestions for the most relevant reviewers based on content analysis, at a speed and level of detail that only software can provide. Machine learning can immediately and automatically identify overlapping themes between the manuscript and reviewer candidates’ previous publications. Editors no longer need to enter the right set of keywords into large databases and hope someone keyed in similar words on the other end. The software matches against all terms and phrases within a manuscript and the publication history of a reviewer candidate – not just a few keywords.
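To make the idea concrete, here is a minimal sketch of content-based reviewer matching using a plain bag-of-words model and cosine similarity. The reviewer names and texts are hypothetical, and real systems use far richer language models; this only illustrates how matching on all terms differs from matching on a few keywords.

```python
# Minimal sketch: rank reviewer candidates by textual overlap with a
# manuscript. Bag-of-words + cosine similarity is a deliberate
# simplification of the richer content analysis described above.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_reviewers(manuscript, candidates):
    """Rank candidates by overlap between the manuscript text and each
    candidate's concatenated publication history."""
    m_vec = Counter(tokenize(manuscript))
    scores = {name: cosine_similarity(m_vec, Counter(tokenize(history)))
              for name, history in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data for illustration only.
manuscript = "deep learning methods for protein structure prediction"
candidates = {
    "Reviewer A": "protein folding and structure prediction with neural networks",
    "Reviewer B": "labor economics and wage inequality in emerging markets",
}
for name, score in rank_reviewers(manuscript, candidates):
    print(f"{name}: {score:.2f}")
```

Because every token contributes to the score, the on-topic candidate ranks first even though the titles share no hand-picked keyword list.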
In the invitation email, reviewers can be shown the concepts the software matched between the manuscript and their publication history, and they can be given the opportunity to give feedback if their interests and expertise have recently changed. With that feedback, the software can match reviewers and manuscripts even better next time. This personalized approach can tease out the most relevant reviewers and improve reviewer satisfaction and willingness to engage. Editors can avoid contacting reviewers who have written only tangentially related articles, saving time for authors, reviewers, and editors, and improving the community’s perception of the journal.
Similarly, editors can avoid overusing and fatiguing the same reviewers time and again, as automatic matching surfaces a more diverse set of reviewers to contact. Over time, this change of workflow can significantly reduce publication time and the effort of connecting good reviewers with relevant manuscripts.
Journals, associations, editors, and publishers who recognize the value of building community around their publications can, with the help of AI, analyze the full text of their own publications to find appropriate reviewers among authors who have already declared an affinity for their journals. If an author was good enough to publish, they should at least be surfaced as a review candidate. AI provides the means to meaningfully help editors match manuscripts with appropriate authors and spread the reviewing burden for the benefit of everyone.
Looking into our crystal ball, we believe publishers will find it increasingly difficult to secure reviewers. AI cannot fully solve this growth challenge, but it can provide significant differentiators to publishers, journals, editors, and associations seeking competitive advantages in attracting reviewers and authors. Any publisher should have a strategic plan for dealing with increasing reviewer shortages. If you are an editor or reviewer, ask your publisher what their plans are. If you are a publisher or an association, ask yourself how long you can sustain the current challenges without implementing smarter systems. Smarter systems will eventually become commonplace requirements of your reviewers and editors, just as email, online systems, and many platform features did before. While the window for immediate competitive advantage may close fairly quickly, the effects of missing it and providing inefficient review may take longer to fix: a reputation for peer review standards takes years to (re)build.
Please contact us to learn more about how your publishing business can benefit from AI – whether you want to enhance the reviewing and authoring process, improve search and discovery on your website, or make use of existing content in new ways.