Rémy Sun

Publications

KS(conf): A Light-Weight Test if a ConvNet Operates Outside of Its Specifications.

German Conference on Pattern Recognition (GCPR 2018), accepted for an oral presentation.

Rémy Sun, Christoph Lampert

Computer vision systems for automatic image categorization have become accurate and reliable enough that they can run continuously for days or even years as components of real-world commercial applications. A major open problem in this context, however, is quality control. Good classification performance can only be expected if systems run under the specific conditions, in particular data distributions, that they were trained for. Surprisingly, none of the currently used deep network architectures has built-in functionality that could detect if a network operates on data from a distribution it was not trained for and potentially trigger a warning to the human users. In this work, we describe KS(conf), a procedure for detecting such out-of-specs operation. Building on statistical insights, its main step is the application of a classical Kolmogorov-Smirnov test to the distribution of predicted confidence values. We show by extensive experiments using ImageNet, AwA2 and DAVIS data on a variety of ConvNet architectures that KS(conf) reliably detects out-of-specs situations. It furthermore has a number of properties that make it an excellent candidate for practical deployment: it is easy to implement, adds almost no overhead to the system, works with all networks, including pretrained ones, and requires no a priori knowledge about how the data distribution could change.
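
For illustration only (a sketch, not the paper's released code): a minimal Python rendering of the main step, assuming confidence values are the per-image maxima of the softmax output and using SciPy's two-sample Kolmogorov-Smirnov test as a stand-in for the calibrated test described in the paper; the names ks_conf_flag and confidences_from_logits are hypothetical.

    import numpy as np
    from scipy.stats import ks_2samp

    def confidences_from_logits(logits):
        """Per-image confidence: the maximum of the softmax output."""
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
        probs = exp / exp.sum(axis=1, keepdims=True)
        return probs.max(axis=1)

    def ks_conf_flag(ref_confidences, batch_confidences, alpha=0.01):
        """Flag a batch as out-of-specs with a Kolmogorov-Smirnov test.

        ref_confidences:   confidences collected on in-specs validation data.
        batch_confidences: confidences of the incoming batch under test.
        Returns True if the two confidence distributions differ at level alpha,
        i.e. the network likely operates outside of its specifications.
        """
        _, p_value = ks_2samp(ref_confidences, batch_confidences)
        return p_value < alpha

Because the test only consumes the confidence values the classifier already produces, it adds essentially no overhead and needs no access to the network internals, which matches the deployment properties claimed above.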

Intrinsic disentanglement: an invariance view for deep generative models

Workshop on Theoretical Foundations and Applications of Deep Generative Models at ICML 2018

Michel Besserve, Rémy Sun, Bernhard Schölkopf

Deep generative models such as Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs) are important tools to capture and investigate the properties of complex empirical data. However, the complexity of their inner elements makes their functioning challenging to interpret and modify. In this respect, these architectures behave as black-box models. In order to better understand the function of such networks, we analyze the modularity of these systems by quantifying the disentanglement of their intrinsic parameters. This concept relates to a notion of invariance to transformations of internal variables of the generative model, recently introduced in the field of causality. Our experiments on the generation of human faces with VAEs support that modularity between weights distributed over the layers of the generator architecture is achieved to some degree, and can be used to better understand the functioning of these architectures. Finally, we show that modularity can be enhanced during optimization.
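
As a rough illustration of the invariance view (a sketch under assumptions, not the paper's method): the PyTorch snippet below applies a hard intervention to one channel of an intermediate activation of a trained generator and measures how much the output images change; decoder, layer and channel are hypothetical placeholders, and the paper's actual modularity quantification is more involved.

    import torch

    def intervention_effect(decoder, z, layer, channel, value=0.0):
        """Clamp one internal channel of the generator and measure the effect.

        decoder: trained generator, e.g. a VAE decoder.
        z:       batch of latent codes, shape (n, latent_dim).
        layer:   submodule whose output activation is intervened on.
        channel: index of the activation channel to clamp.
        value:   constant the channel is clamped to (the intervention).
        """
        def clamp_channel(module, inputs, output):
            output = output.clone()
            output[:, channel] = value  # hard intervention on an internal variable
            return output              # returned tensor replaces the activation

        with torch.no_grad():
            baseline = decoder(z)                        # unintervened samples
            handle = layer.register_forward_hook(clamp_channel)
            counterfactual = decoder(z)                  # samples under intervention
            handle.remove()
        return (counterfactual - baseline).abs().mean().item()

If a group of internal variables is modular in the above sense, such an intervention changes only a specific, localized aspect of the generated faces while leaving the rest invariant.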