
Séminaire Images Optimisation et Probabilités

Towards better conditioned and interpretable neural networks: a study of the normalization-equivariance property

Sébastien Herbreteau

(EPFL)

Conference room

15 February 2024 at 11:00

In many information processing systems, it may be desirable to ensure that any shift or scaling of the input results in a corresponding change of the system response. While deep neural networks are gradually replacing traditional automatic processing methods, they surprisingly do not guarantee such a normalization-equivariance (scale & shift) property, which can be detrimental in many applications. Inspired by traditional methods in image denoising, we propose a methodology to adapt existing convolutional neural networks so that normalization-equivariance holds by design and without loss of performance. Our main claim is that not only ordinary unconstrained convolutional layers, but also all activation functions applied element-wise to the pre-activated neurons, including the ReLU (rectified linear unit), should be completely removed from neural networks and replaced by better-conditioned alternatives. We then show that this better conditioning improves not only the interpretability of these networks but also their robustness to outliers, which is confirmed experimentally in the context of image denoising.
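Concretely, normalization-equivariance means that the network f satisfies f(a·x + b) = a·f(x) + b for any input x, scale a > 0 and shift b. The following minimal PyTorch sketch (an illustration of the property, not code from the talk) shows why a bias-free convolution whose kernel coefficients sum to one satisfies this identity, while an ordinary element-wise ReLU breaks it:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: a bias-free convolution whose kernel coefficients
# sum to 1 is normalization-equivariant, i.e.
#     conv(a * x + b) = a * conv(x) + b   for any scalars a > 0 and b,
# because a constant shift b passes unchanged through a kernel summing to 1.

def sum_one_conv2d(x, weight):
    # Project each filter onto the constraint "coefficients sum to 1"
    # and drop the bias term (no padding, so the identity holds exactly).
    w = weight / weight.sum(dim=(1, 2, 3), keepdim=True)
    return F.conv2d(x, w, bias=None)

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)        # toy grayscale image
weight = torch.randn(1, 1, 3, 3)   # a single 3x3 filter
a, b = 2.5, -0.7                   # arbitrary scale and shift

lhs = sum_one_conv2d(a * x + b, weight)
rhs = a * sum_one_conv2d(x, weight) + b
print(torch.allclose(lhs, rhs, atol=1e-5))    # True: equivariance holds

# An element-wise ReLU, by contrast, destroys the property:
print(torch.allclose(torch.relu(a * x + b),
                     a * torch.relu(x) + b))  # False in general
```

The adaptation proposed in the talk constrains the layers of existing networks along these lines; the snippet is only meant to make the property, and why element-wise activations must be replaced, concrete.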