UNSILO Evaluate

Reduce time to publication with UNSILO Evaluate, a suite of APIs that help you screen and evaluate incoming manuscript submissions, assisted by AI and NLP tools. Evaluate can be integrated directly into your manuscript evaluation workflow.

Scale your capacity to screen and evaluate manuscripts

The dramatic growth in research output is challenging traditional peer review models and requires publishers to augment and strengthen their screening capabilities. UNSILO has launched UNSILO Evaluate, a suite of APIs that use advanced machine intelligence and natural-language understanding to help authors, editors, reviewers, and publishers screen and evaluate manuscript submissions.

UNSILO Evaluate Features

All APIs from UNSILO Evaluate can be used with a public or private corpus of your choice. For evaluation purposes, we use a corpus of biomedical data, comprising 28 million PubMed abstracts and full-text articles, with an easy-to-use Web user interface. Each API can also be trialled using a simple REST interface.
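
As a rough sketch of what a call to one of these REST APIs could look like: the host name, endpoint path, request fields, and response shape below are illustrative assumptions, not UNSILO's documented interface. The examples in the following sections reuse this helper.

```python
import requests

# Hypothetical base URL and key -- substitute the values from your own
# UNSILO Evaluate account; the path and field names are illustrative only.
API_BASE = "https://api.example-unsilo.com/v1"
API_KEY = "your-api-key"

def evaluate_manuscript(endpoint: str, manuscript_text: str) -> dict:
    """POST a manuscript to an Evaluate endpoint and return the parsed JSON."""
    response = requests.post(
        f"{API_BASE}/{endpoint}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": manuscript_text},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

# The manuscript body as plain text, read from a local file.
manuscript_text = open("submission.txt", encoding="utf-8").read()
```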

Peer Reviewer Finder

Find the best-matching peer reviewers by comparing the semantic fingerprint of a manuscript with recently published articles in the same subject. Unlike traditional tools, which use simple keyword matching, the Reviewer Finder creates a semantic fingerprint for the submission in real time and delivers much more precise matches to potential peer reviewers, as well as checking for recency and for potential conflicts of interest.
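
Reusing the hedged `evaluate_manuscript` helper sketched above, consuming a reviewer lookup might look like this; the `peer-reviewer-finder` path and the response fields are assumptions for illustration.

```python
# Hypothetical endpoint and response fields.
result = evaluate_manuscript("peer-reviewer-finder", manuscript_text)

for reviewer in result["reviewers"]:
    # Skip candidates flagged for potential conflicts of interest,
    # e.g. recent co-authorship with one of the submitting authors.
    if reviewer["conflict_of_interest"]:
        continue
    print(reviewer["name"], reviewer["match_score"], reviewer["latest_publication_year"])
```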

Technical Checks

Check the manuscript against a range of technical constraints, such as the correct inclusion of funding, Conflict of Interest, and ethics statements in the paper. Technical Checks gives warnings when constraints are not met and provides supporting evidence when they are.
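
A hedged sketch of consuming such a check report, again with an assumed endpoint name and response schema:

```python
# Hypothetical endpoint and response fields.
result = evaluate_manuscript("technical-checks", manuscript_text)

for check in result["checks"]:  # e.g. funding, Conflict of Interest, ethics
    if check["passed"]:
        print(f"OK   {check['name']}: evidence at {check['evidence_location']}")
    else:
        print(f"WARN {check['name']}: required statement not found")
```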

Journal Match

Match incoming manuscripts against a large catalogue of journals, or against your own journals, to determine which journals are the best fit for a given manuscript. This is a highly useful feature in a journal cascading workflow, or for integration directly into your website as a service for your authors, who can then select the most suitable journal before submitting.
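
In a cascading workflow, the ranked journal list could be used to suggest fallbacks when the first-choice journal declines a submission. The endpoint name and score field below are illustrative assumptions.

```python
# Hypothetical endpoint and response fields.
result = evaluate_manuscript("journal-match", manuscript_text)

# Present the five best-fitting journals, highest score first.
ranked = sorted(result["journals"], key=lambda j: j["score"], reverse=True)
for journal in ranked[:5]:
    print(f"{journal['title']}: {journal['score']:.2f}")
```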

Related Papers

Compare incoming manuscripts to a large corpus of already published abstracts, or to your own corpus, and find the most similar and related existing publications. This is useful for identifying potentially missing references or potential plagiarism.
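
A hedged sketch of both uses, flagging very close matches as possible plagiarism and similar but uncited papers as possibly missing references; the endpoint, response fields, and similarity threshold are all assumptions for illustration.

```python
# Hypothetical endpoint, response fields, and similarity threshold.
result = evaluate_manuscript("related-papers", manuscript_text)

# DOIs already cited in the manuscript's reference list; how you extract
# these is up to your pipeline -- an empty set is used here as a stub.
cited_dois = set()

for paper in result["papers"]:
    if paper["similarity"] > 0.9:
        print("Very close match, check for possible overlap:", paper["title"])
    elif paper["doi"] not in cited_dois:
        print("Similar but uncited, possibly a missing reference:", paper["title"])
```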

Topics

Keywords provided by authors are sometimes not very descriptive. Let the AI detect the essential topics in an incoming manuscript and compare them with the author keywords, in order to judge the quality of the keywords and suggest potential new ones.
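
Comparing detected topics against author keywords could then be a simple set operation; the endpoint and field names are assumed for illustration.

```python
# Hypothetical endpoint and response fields.
result = evaluate_manuscript("topics", manuscript_text)

detected = {t["phrase"].lower() for t in result["topics"]}
provided = {k.lower() for k in result["author_keywords"]}

print("Author keywords confirmed by the AI:", detected & provided)
print("Suggested additional keywords:", detected - provided)
```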

Key Statements

Get an at-a-glance overview of the manuscript by letting the AI identify the most significant assertions and findings in the text.
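
As with the other sketches, the endpoint and response fields below are illustrative assumptions.

```python
# Hypothetical endpoint and response fields.
result = evaluate_manuscript("key-statements", manuscript_text)

# Print the top assertions for an at-a-glance overview of the manuscript.
for statement in result["statements"][:5]:
    print("-", statement["text"])
```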

Contact us for more information, or if you are interested in a demo or trial.