
SafeML - Safety of Machine Learning


[Project with Fraunhofer IESE & Nuremberg Institute of Technology]


The accuracy of a machine learning classifier estimated during training may change when the algorithm operates on different data. For example, a medical diagnosis algorithm may be trained on images showing typical symptoms of a disease and then encounter images containing rare symptoms. When the true outcomes of classification are unknown, this new accuracy cannot be estimated. Can we trust the predicted accuracy established during training in this new situation?

 

In SafeML we take the view that accuracy can be trusted only if the statistical distribution of the inputs that affect a classification has not deviated much from the distribution of those inputs in the training set. SafeML uses established statistical distance measures to quantify this distributional shift and to establish a degree of confidence in the accuracy of a classification made by a machine learning component.
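To illustrate the idea (this is a minimal sketch, not the SafeML implementation), the snippet below compares the training and operational distributions of each input feature with the two-sample Kolmogorov-Smirnov statistic from SciPy; the function name, synthetic data, and the 0.1 threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_shift(train_features: np.ndarray, ops_features: np.ndarray) -> np.ndarray:
    """Per-feature KS distance between training and operational data.

    Both arrays are expected to have shape (n_samples, n_features).
    """
    distances = []
    for j in range(train_features.shape[1]):
        stat, _p_value = ks_2samp(train_features[:, j], ops_features[:, j])
        distances.append(stat)
    return np.array(distances)

# Synthetic example: the operational data is deliberately mean-shifted.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
ops = rng.normal(loc=0.5, scale=1.0, size=(1000, 3))

shift = feature_shift(train, ops)
print("KS distance per feature:", shift)

# A large distance signals that the training-time accuracy estimate may no
# longer hold (the 0.1 threshold here is purely illustrative).
if (shift > 0.1).any():
    print("Warning: distributional shift detected; accuracy estimate may not hold.")
```

In this spirit, a large measured distance lowers the confidence placed in the classifier's training-time accuracy during operation.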


More information in the Kaggle Story


Paper on SafeML
