A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels. (arXiv:2109.11126v1 [cs.LG])

In some problem spaces, the high cost of obtaining ground truth labels
necessitates the use of lower-quality reference datasets. It is difficult to
benchmark model performance using these datasets, as evaluation results may be
biased. We propose a supplement to using reference labels, which we call an
approximate ground truth refinement (AGTR). Using an AGTR, we prove that bounds
on specific metrics used to evaluate clustering algorithms and multi-class
classifiers can be computed without reference labels. We also introduce a
procedure that uses an AGTR to identify inaccurate evaluation results produced
from datasets of dubious quality. Creating an AGTR requires domain knowledge;
malware family classification is one task with robust domain-knowledge
approaches that support the construction of an AGTR. We demonstrate our AGTR
evaluation framework by applying it to a popular malware labeling tool to
diagnose overfitting in prior testing and to evaluate changes whose impact
could not be meaningfully quantified with previously available data.
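The core intuition can be sketched with a small example. This is a hypothetical illustration, not the paper's exact formulation: suppose the AGTR is a partition that refines the true ground truth, so every AGTR group is a subset of some true cluster. Then any pair of items that share an AGTR group but land in different predicted clusters is a guaranteed mistake, and counting such pairs lower-bounds the clustering's pairwise error without any reference labels. The function name `definite_split_errors` and the toy data are assumptions for illustration.

```python
from itertools import combinations

def definite_split_errors(agtr_groups, predicted_cluster):
    """Count pairs in the same AGTR group that the clustering separates.

    agtr_groups: list of lists of item ids (the AGTR partition)
    predicted_cluster: dict mapping item id -> predicted cluster id

    Under the refinement assumption (each AGTR group lies inside one true
    cluster), every such separated pair is a genuine error, so the returned
    count is a lower bound on the clustering's pairwise false splits.
    """
    errors = 0
    for group in agtr_groups:
        for a, b in combinations(group, 2):
            if predicted_cluster[a] != predicted_cluster[b]:
                errors += 1
    return errors

# Toy example: the AGTR asserts items 0, 1, 2 belong together, but the
# clustering splits item 2 away, producing two definite errors:
# the pairs (0, 2) and (1, 2).
agtr = [[0, 1, 2], [3]]
pred = {0: "A", 1: "A", 2: "B", 3: "B"}
print(definite_split_errors(agtr, pred))  # -> 2
```

Because the AGTR only refines (never contradicts) the true labels, this bound holds regardless of how coarse the true clustering actually is; the paper develops tighter, metric-specific bounds from the same idea.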