Canadian Science Publishing (CSP) is Canada’s largest independent science publisher, with 24 journals and more than 2,300 new articles each year. It has a long history, originating in 1929 as the National Research Council Press, the publishing wing of Canada’s largest federal research and development organization. Since 2010, CSP has operated independently from the National Research Council as a national, not-for-profit science publisher. I spoke with Melanie Slavitch, Peer Review Manager at CSP.
What do you think makes CSP distinctive from other journal publishers?
Depth and breadth of experience publishing across the sciences has contributed to our reputation for excellence; the content in our journals is trusted by the research community and beyond. Our small size enables personal attention to customer service and agile response to change. And because we are not-for-profit, we are here for, and because of, the research community; our resources are invested back into this community. We work directly with professional societies and other science organizations to support conferences, awards, and other sponsorships.
You have a number of open-access articles. Why has CSP gone with open access for some of its journals?
Again, as a not-for-profit, we are vision-driven and actively working toward a world where everyone is empowered by scientific knowledge. Open Access is one way to achieve that vision. Over the past few years, we’ve launched three fully open-access journals. Authors publishing in our hybrid journals have the choice to make the published version of their paper fully open access, or they can post the submitted and/or accepted versions of their manuscript in a repository of their choice, without embargo. We are also actively building partnerships and exploring transformative business models to identify a sustainable path to a fully open-access future.
CSP provides plain-language summaries of some of the CSP research on your blog. Why do you do this?
For various reasons, scientific articles are not the most readable texts. The plain-language summaries make articles more accessible to non-specialist users of science. The summaries also share relatable details about the research that may not be in the article—the frustration when equipment breaks down in the field, or the joy when a colleague helps fix broken programming code.
CSP has just started using the UNSILO Technical Checks for new submissions to your journals. Why did you adopt machine tools for your editors?
Increasing transparency and ethics requirements at original submission make administrative checks more time-consuming. We were looking for a way to ease the burden on editorial office staff, so that they could focus more time on tasks that require human judgement.
How did you evaluate the machine-based checks alongside your human editors?
The test period was interesting. We elected to trial all the checks, to get a feel for them, and found that because there are more checks than we typically screen for on first submission, it was actually more time-consuming. Our evaluation therefore relied a lot on a qualitative determination of benefit. For our highest volume title, a biomed journal, the number of well-established requirements related to transparency and ethics made UNSILO Technical Checks an easy choice. Our positive experience with the UNSILO team was also persuasive: we look forward to working with them and potentially expanding our use of the tool as the product matures.
What do you see as the future for machine-based checks? Do you think they will replace human editors?
I think there is huge potential for machine-based checks in science publishing, especially as they relate to well-defined requirements, standards, and nomenclature, e.g., the presence of ethics, funding, and repository information; the expression of mathematical equations; the use of correct taxonomic nomenclature. However, humans are necessary to evaluate science conducted by humans. Whereas a machine might be able to tell me whether a statistical expression or method is reported correctly, I would want a human (or two or three) to tell me whether that statistical test or method was the correct one to use, and whether the author’s claims are in line with their results.
Many thanks, Melanie, for your thoughts, and we look forward to working with CSP in the coming months and years.