Technical Checking of manuscripts is a core activity in scholarly publishing: around 5.4 million manuscripts are submitted to scholarly journals per year, and just over half of them are approved for publication. All these manuscripts are subject to varying levels of Technical Checks, and many undergo multiple rounds of checks. UNSILO carried out a survey between October and December 2019 to identify current practices, and opportunities for Natural Language Processing and Machine Learning to support the associated workflows.
“Technical Checks” are checks to ensure that manuscripts meet author guidelines — some people call them “format checks”. Editorial offices carry them out when manuscripts are first received, before submissions go to peer review, and the checks are often repeated when authors return revised manuscripts after review. We surveyed people who work in editorial office capacities and perform or oversee Technical Checks on manuscripts — some responsible for editorial office operations across portfolios ranging from three to 100 journals. Most of the respondents worked in the health sciences, physical sciences, and engineering, but we see no reason why the situation would differ in the humanities and social sciences.
Done right, machine learning and text intelligence technologies offer publishers, editorial teams, and authors tools that reduce the time spent on screening. The best tools, such as UNSILO's, do not replace human decision-making but pinpoint areas of concern for editors and authors to check. By integrating Technical Checks into manuscript tracking systems, editorial teams and authors gain quicker access to information on how well manuscripts adhere to author guidelines. Editorial teams will continue to decide what to screen for, when to automate “robotic” work and when not to, and at which point in the workflow to request changes from authors.
Read the full report here.