UNSILO Survey reveals significant potential for improving scholarly journal submission workflows

Technical Checks Survey

Technical Checking of manuscripts is a core activity in scholarly publishing: around 5.4m manuscripts are submitted to scholarly journals per year, and just over half of them are approved for publication. All of these manuscripts are subject to some level of Technical Checks, and many undergo multiple rounds of checks. UNSILO carried out a survey between October and December 2019 to identify current practices and opportunities for Natural Language Processing and Machine Learning to support the associated workflows.

“Technical Checks” are checks that ensure manuscripts meet author guidelines; some people call them “format checks”. Editorial offices carry them out when manuscripts are first received, before submissions go to peer review, and the checks are often repeated when authors return revised manuscripts after review. We surveyed people who work in editorial office capacities and perform or oversee Technical Checks on manuscripts, some of whom are responsible for editorial office operations across portfolios ranging from three to 100 journals. Most respondents worked in the health sciences, physical sciences, and engineering, but we see no reason why the situation would differ in the humanities and social sciences.

Key Takeaways

  • We estimate that up to 6.8m rounds of Technical Checks are performed per year in scholarly journals.
  • Most respondents (87%) perform Technical Checks before manuscripts are sent to editors for initial screening, and 50.5% may also perform checks after review, when authors provide revisions.
  • 56% of respondents perform Technical Checks on a manuscript two or three times, depending on how far it advances through the submission and review workflow.
  • Around 1.8m manuscripts per year (33% of submissions) are returned to authors at initial submission because of issues surfaced during the first round of Technical Checks.
  • We estimate that editorial offices spend 61m minutes per year on the roughly 6.8m rounds of Technical Checks performed annually, an average of 9 minutes per round (a quick arithmetic sanity check follows this list).
  • Scholarly journals routinely ask authors to fill in forms with information that also appears in their manuscripts. Even so, 67.7% of respondents manually check every manuscript to confirm that the form-provided information matches the manuscript; we estimate that around 5.1m manuscripts per year are double-checked against form-provided statements.
  • 82.7% of respondents would like software that lets authors run their own manuscripts through Technical Checks before editorial offices receive them.
  • The single most preferred area for Technical Checks is Disclosures: checking for required statements such as funding, ethics approval, conflict of interest (COI), and data availability.
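For readers who want to verify the headline estimates above, the following minimal Python sketch reproduces the back-of-envelope arithmetic. The input figures (5.4m submissions, 6.8m check rounds, 61m minutes, 1.8m returned manuscripts) are the survey's estimates; the script itself is purely illustrative.

```python
# Back-of-envelope check of the survey's headline figures.
# All inputs are the estimates quoted in the Key Takeaways above.

submissions_per_year = 5_400_000    # manuscripts submitted to scholarly journals
check_rounds_per_year = 6_800_000   # estimated rounds of Technical Checks
minutes_per_year = 61_000_000       # estimated editorial-office time on checks
returned_on_submission = 1_800_000  # manuscripts returned after the first check

# Average number of check rounds per submitted manuscript (~1.26),
# consistent with most manuscripts being checked once or twice.
rounds_per_manuscript = check_rounds_per_year / submissions_per_year

# Average time per round of Technical Checks (~9 minutes, as stated).
minutes_per_round = minutes_per_year / check_rounds_per_year

# Share of initial submissions returned to authors (~33%, as stated).
return_rate = returned_on_submission / submissions_per_year

print(f"Rounds per manuscript: {rounds_per_manuscript:.2f}")
print(f"Minutes per round:     {minutes_per_round:.1f}")
print(f"Initial return rate:   {return_rate:.0%}")
```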

Conclusion

Done right, machine learning and text intelligence technologies offer publishers, editorial teams, and authors tools that lighten the load and reduce time spent on screening. The best tools, UNSILO's among them, do not replace human decision-making but pinpoint areas of concern for editors and authors to check. By integrating Technical Checks into manuscript tracking systems, editorial teams and authors gain quicker access to information on how well manuscripts adhere to author guidelines. Editorial teams will continue to decide what to screen for, when to build in automation for “robotic” work, when not to, and when in the workflow to request changes from authors.

Read the full report here.

