Concept Drift Handling with Model Uncertainty

  • Background

    Driven by the rapid growth of data in recent years, machine learning and analytical solutions have received increasing attention across industries. Many companies rely on machine learning models deployed in their information systems to offer new services or to increase the efficiency of their processes (e.g. to predict downtimes or machine failures).
    However, many models are highly sensitive to small changes in their environment (e.g. the adjustment of machine parameters in production) that alter the input data the model receives. Even small deviations can have a significant impact on the deployed machine learning model and drastically reduce its prediction quality. This phenomenon is often referred to as “concept drift”.

  • Research Goal

    Recent advances in deep learning make it possible to measure the uncertainty of a neural network while it computes predictions on new test data. The objective of this thesis is to evaluate whether this property of neural networks can serve as a viable indicator for detecting concept drift. This analysis requires adapting available methods to this specific purpose. Subsequently, the developed methods need to be implemented and benchmarked quantitatively on various datasets.
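
    The following is a minimal sketch (not part of the thesis description) of one way the idea could be prototyped: estimate predictive uncertainty with Monte Carlo dropout and flag possible drift when the average uncertainty over a window of incoming data exceeds a threshold. All names (DropoutNet, detect_drift) and the threshold value are illustrative assumptions, not methods prescribed by the thesis.

    ```python
    # Illustrative sketch: uncertainty-based drift indication via MC dropout.
    # Assumptions: a PyTorch classifier with dropout, a sliding window of recent
    # inputs from the data stream, and an arbitrary uncertainty threshold.
    import torch
    import torch.nn as nn


    class DropoutNet(nn.Module):
        """Small classifier with dropout so that MC dropout can be applied."""

        def __init__(self, n_features: int, n_classes: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64),
                nn.ReLU(),
                nn.Dropout(p=0.5),
                nn.Linear(64, n_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)


    def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20) -> torch.Tensor:
        """Predictive entropy per instance from n_samples stochastic forward passes."""
        model.train()  # keep dropout active at prediction time (MC dropout)
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            ).mean(dim=0)
        return -(probs * torch.log(probs + 1e-12)).sum(dim=-1)


    def detect_drift(model: nn.Module, window: torch.Tensor, threshold: float = 0.5) -> bool:
        """Flag drift when mean uncertainty over the current window exceeds the threshold."""
        return mc_dropout_uncertainty(model, window).mean().item() > threshold


    if __name__ == "__main__":
        # Random data stands in for a real data stream in this sketch.
        model = DropoutNet(n_features=10, n_classes=3)
        window = torch.randn(128, 10)  # most recent batch from the stream
        print("Drift suspected:", detect_drift(model, window))
    ```

    In practice, the threshold and window size would themselves be subjects of the quantitative benchmarking described above; a fixed cutoff is used here only to keep the sketch self-contained.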

    If you are interested, send a short letter of motivation, your CV and a transcript of records to lucas.baier∂kit.edu.