In this paper, we propose risk-calibrated evidential deep classifiers to reduce the costs associated with classification errors. We use two main approaches. The first is to develop methods that quantify the uncertainty of a classifier's predictions and reduce the likelihood of acting on erroneous predictions. The second is a novel way to train the classifier so that erroneous classifications are biased towards less risky categories. We combine these two approaches in a principled way. In doing so, we extend evidential deep learning with pignistic probabilities, which are used to quantify the uncertainty of classification predictions and to model rational decision making under uncertainty. We evaluate the performance of our approach on several image classification tasks. We demonstrate that our approach allows us to (i) incorporate misclassification cost while training deep classifiers, (ii) accurately quantify the uncertainty of classification predictions, and (iii) simultaneously learn how to make classification decisions that minimize the expected cost of classification errors.
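To make the decision rule concrete, below is a minimal sketch (not the authors' code) of how pignistic probabilities and a minimum-expected-cost decision could be computed from the outputs of an evidential classifier. It assumes the network emits non-negative per-class evidence, and the names `evidence`, `cost_matrix`, `pignistic`, and `min_expected_cost_decision` are illustrative.

```python
import numpy as np

def pignistic(evidence):
    """Pignistic probabilities from per-class evidence (illustrative).

    With Dirichlet parameters alpha_k = e_k + 1 and S = sum_k alpha_k,
    the belief mass is b_k = e_k / S and the uncertainty mass is u = K / S.
    Splitting u evenly over the K classes gives the pignistic probability
    p_k = b_k + u / K = alpha_k / S, i.e. the mean of the Dirichlet.
    """
    alpha = evidence + 1.0
    return alpha / alpha.sum()

def min_expected_cost_decision(evidence, cost_matrix):
    """Pick the class whose expected misclassification cost is lowest.

    cost_matrix[i, j] is the cost of predicting class i when the true
    class is j; the expected cost of predicting i is sum_j p_j * cost[i, j].
    """
    p = pignistic(evidence)
    expected_cost = cost_matrix @ p
    return int(np.argmin(expected_cost)), expected_cost

# Hypothetical example: 3 classes, with class 2 as a low-risk fallback.
evidence = np.array([4.0, 3.5, 0.5])
cost = np.array([[0.0, 1.0, 1.0],
                 [1.0, 0.0, 1.0],
                 [0.2, 0.2, 0.0]])
decision, costs = min_expected_cost_decision(evidence, cost)
print(pignistic(evidence), decision, costs)
```

Under this sketch, a prediction with little evidence (high uncertainty mass) yields nearly uniform pignistic probabilities, so the decision is driven mainly by the cost matrix and drifts towards the less risky category.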
BibTeX
@inproceedings{sensoy_misclassification_2021,
  title     = {Misclassification {Risk} and {Uncertainty} {Quantification} in {Deep} {Classifiers}},
  booktitle = {Proceedings of the {IEEE}/{CVF} {Winter} {Conference} on {Applications} of {Computer} {Vision} ({WACV})},
  author    = {Sensoy, Murat and Saleki, Maryam and Julier, Simon and Aydogan, Reyhan and Reid, John},
  month     = jan,
  year      = {2021},
  pages     = {2484--2492},
}