Artificial Intelligence is promoted as the solution to some of humanity’s hardest challenges. But AI and machine learning can be applied to the same problem in many ways, and different companies may apply the same methods yet get different results. So how can we meaningfully compare the results of machine learning tools from different providers?
In response to this challenge, the R&D team at UNSILO has just released a white paper that presents a method for quantitative comparison based on simple, well-defined qualitative criteria for extracted concepts, and then applies this method to compare the quality of UNSILO concept extraction with similar services from Google, Microsoft, and IBM.
The test was performed on a random selection of papers from four different scientific domains, and it shows that UNSILO concept extraction has the best performance of all the evaluated services, scoring 33.0 points per article on average, compared to 14.8 points for the other services.
The results clearly show that UNSILO provides consistently high-quality output with little or no noise across very diverse scientific domains. Compared to the incumbents, UNSILO extracts more precise concepts and can detect key concepts in unfamiliar domains without relying on an underlying ontology.
The only other service with a similar technical capability is Microsoft, but its output also has the highest level of noise, with 25% ambiguous or redundant concepts, plus an additional 10% obvious errors and misinterpretations. By contrast, UNSILO had the lowest level of noise in the test, with zero obvious errors and only 7% ambiguous or redundant concepts.
Follow the link below to download the white paper, and have a look at the underlying data on page two for yourself.