Séminaire Images Optimisation et Probabilités

Organizers: Luis Fredes and Camille Male

  • 11 January 2024 at 11:00
    Conference room
    Valentin Debarnot, University of Basel
    Deep learning algorithms for reconstruction in electron and fluorescence microscopy
    In this talk, I will present several factors that limit resolution in electron and fluorescence microscopy. After casting these problems in an inverse-problem framework, I will introduce tools that have allowed us to mitigate some of these limitations, and I will discuss possible research directions for handling the remaining ones. I will use machine-learning tools (e.g. implicit neural networks, deep image prior, automatic differentiation) to solve inverse problems in microscopy.
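
    As a generic illustration (not specific to the talk's models), such reconstruction tasks are cast as an inverse problem $y = A x + \varepsilon$ with known forward operator $A$; the deep image prior, for instance, reparametrizes the unknown image through an untrained network $f_\theta$:

    $\hat{\theta} \in \arg\min_{\theta} \| A f_\theta(z) - y \|_2^2, \qquad \hat{x} = f_{\hat{\theta}}(z),$

    where $z$ is a fixed random input and early stopping acts as the regularizer.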
  • 25 January 2024 at 11:00
    Conference room
    Paul Catala, University of Osnabrück
    Trigonometric approximations of the sparse super-resolution problem in Wasserstein distances
    In this talk, I will discuss the recovery of an arbitrary measure on the $d$-dimensional torus, given trigonometric moments up to degree $n$. Considering the convolution of the measure with powers of the Fejér kernel, which can be computed efficiently from the truncated moment sequence, I will provide rates of convergence of the resulting polynomial density towards the measure in the $p$-Wasserstein distance, as the degree $n$ increases. In particular, I will show that the best possible rate for polynomial approximation is inversely proportional to the degree, and that it is achieved by adequately choosing the power to which the kernel is raised. Finally, I will discuss another class of polynomial approximations, similar although not based on convolution, that converge pointwise to the characteristic function of the support of the measure. This is joint work with Mathias Hockmann, Stefan Kunis and Markus Wageringel.
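
    For reference, in dimension one the Fejér kernel of degree $n$ and the associated polynomial density are (standard definitions)

    $F_n(x) = \sum_{|k| \le n} \Big(1 - \frac{|k|}{n+1}\Big) e^{ikx}, \qquad (F_n * \mu)(x) = \sum_{|k| \le n} \Big(1 - \frac{|k|}{n+1}\Big) \hat{\mu}(k)\, e^{ikx},$

    so convolving $\mu$ with $F_m$, or with a power $F_m^r$ of degree $rm \le n$, only requires the known moments $\hat{\mu}(k)$, $|k| \le n$.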
  • 1 February 2024 at 11:00
    Conference room
    Jamal Najim, Université Gustave Eiffel
    Equilibria of large Lotka-Volterra systems coupled by non-Hermitian random matrices
    Lotka-Volterra systems are differential equations coupled through a so-called interaction matrix. We will consider the case where the interaction matrix is a large random matrix, a model frequently used in theoretical ecology to understand food webs. When a stable equilibrium exists, random by nature, we will describe some of its statistical properties, such as the proportion of non-zero components. We will focus on non-Hermitian matrix models, of real Ginibre type and more generally of elliptic type, and show how AMP-type algorithms (Approximate Message Passing) give access to the statistical properties of these equilibria.

    Joint work with Y. Gueddari and W. Hachem; see also https://arxiv.org/abs/2302.07820
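
    Under one common normalization (conventions differ across the literature), the system reads

    $\frac{dx_k(t)}{dt} = x_k(t)\Big(1 - x_k(t) + (A x(t))_k\Big), \qquad k = 1, \dots, n,$

    where $A$ is the $n \times n$ random interaction matrix (e.g. real Ginibre, suitably scaled); an equilibrium $x^*$ with non-negative components then satisfies $x^*_k \big(1 - x^*_k + (A x^*)_k\big) = 0$ for all $k$.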
  • 8 February 2024 at 11:00
    Conference room
    Bastien Laville, INRIA
    Some developments on off-the-grid curve reconstruction: divergence regularisation and untangling by Riemannian metric
    Recent years have seen the development of super-resolution variational optimisation in measure spaces. These so-called off-the-grid approaches offer both theoretical guarantees and numerical methods, with very convincing results in biomedical imaging. However, gridless variational optimisation is generally formulated for the reconstruction of point sources, which is not always suitable for biomedical imaging: more realistic biological structures, such as curves, should also be reconstructed. In the first part of this talk, we propose a new strategy for reconstructing curves in an image through an off-the-grid variational framework, thanks to a sharp characterisation of the extreme points of the unit ball of a new regulariser, enabling new theoretical and numerical results for optical imaging. In the second part of the talk, we investigate a new strategy for off-the-grid curve untangling, with practical results for localisation microscopy.
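
    For context, the prototypical off-the-grid problem for point sources is the BLASSO over the space of Radon measures,

    $\min_{\mu \in \mathcal{M}(\mathcal{X})} \; \frac{1}{2} \| \Phi \mu - y \|_{\mathcal{H}}^2 + \lambda\, |\mu|(\mathcal{X}),$

    where $|\mu|(\mathcal{X})$ is the total-variation norm of the measure, whose unit ball has Diracs as extreme points; the talk's new regulariser is analysed through the extreme points of its own unit ball so that curves, rather than spikes, are the atoms of the reconstruction.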
  • 15 February 2024 at 11:00
    Conference room
    Sebastien Herbreteau, EPFL
    Towards better conditioned and interpretable neural networks: a study of the normalization-equivariance property
    In many information processing systems, it may be desirable to ensure that any change in the input, whether by shifting or scaling, results in a corresponding change in the system response. While deep neural networks are gradually replacing all traditional automatic processing methods, they surprisingly do not guarantee such a normalization-equivariance (scale & shift) property, which can be detrimental in many applications. Inspired by traditional methods in image denoising, we propose a methodology to adapt existing convolutional neural networks so that normalization-equivariance holds by design and without performance loss. Our main claim is that not only ordinary unconstrained convolutional layers but also all activation functions, including the ReLU (rectified linear unit), which are applied element-wise to the pre-activated neurons, should be completely removed from neural networks and replaced by better conditioned alternatives. As a result, we show that better conditioning improves not only the interpretability but also the robustness of these networks to outliers, which is experimentally confirmed in the context of image denoising.
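
    As a toy illustration of the property in question (illustrative code, not the paper's architecture): normalization-equivariance asks that $f(ax + b) = a f(x) + b$, and a bare ReLU already fails the shift part.

    import numpy as np

    # Toy check of normalization-equivariance f(a*x + b) = a*f(x) + b.
    # A bare ReLU (stand-in for conv+ReLU layers) breaks the shift part.
    def f(x):
        return np.maximum(x, 0.0)

    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    a, b = 2.0, 3.0                       # renormalization of the input
    lhs = f(a * x + b)                    # response to the renormalized input
    rhs = a * f(x) + b                    # renormalized response
    print(np.allclose(lhs, rhs))          # False: the property does not hold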
  • 7 March 2024 at 11:00
    Conference room
    Radu-Alexandru Dragomir, Télécom Paris
    Quartic Optimization Problems
    Many tasks in signal processing and machine learning involve minimizing polynomials of degree four, including phase retrieval, matrix factorization, sensor network localization and many more. In this talk, I will give an overview of the challenges of quartic minimization as well as some complexity results. We will focus on a particular class of convex quartic problems and analyze the notion of quartic condition number. We design an algorithm for reducing this condition number: the preconditioner is built using a generalized version of the Lewis weights (a.k.a. leverage scores), and we show that it is optimal in a specific sense.

    Based on joint work with Yurii Nesterov.
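
    A canonical quartic instance is (real-valued) phase retrieval in its least-squares form: given measurements $b_i = \langle a_i, x^\natural \rangle^2$, one minimizes

    $f(x) = \frac{1}{m} \sum_{i=1}^{m} \big( \langle a_i, x \rangle^2 - b_i \big)^2,$

    a polynomial of degree four in $x$.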
  • 14 March 2024 at 11:00
    Conference room
    Jianyu Ma, Université de Toulouse
    Displacement functional and absolute continuity of Wasserstein barycenters
    Barycenters average probability measures on metric spaces. In Wasserstein spaces, the (Wasserstein) barycenter is a direct generalization of the celebrated McCann interpolation, which corresponds to barycenters of the measures $\lambda \delta_{\mu_1} + (1 - \lambda) \delta_{\mu_2}$. In this talk, we consider Wasserstein barycenters on Riemannian manifolds and discuss the displacement functional used by the author in arXiv:2310.13832 to prove their absolute continuity under lower Ricci curvature bounds. This functional differs from the widely used combination of displacement convexity and gradient flows, yet still exhibits an intriguing connection with the curvature-dimension condition. If time allows, we will also explain how Souslin space theory is applied in the proof, an unexpected technique in optimal transport.
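
    For reference, the Wasserstein barycenter of measures $\mu_1, \dots, \mu_k$ with weights $\lambda_i \ge 0$, $\sum_i \lambda_i = 1$, is any minimizer

    $\bar{\nu} \in \arg\min_{\nu} \; \sum_{i=1}^{k} \lambda_i \, W_2^2(\mu_i, \nu),$

    which for $k = 2$ reduces to the McCann interpolation between $\mu_1$ and $\mu_2$ mentioned above.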
  • 21 March 2024 at 11:00
    Conference room
    Laura Girometti (University of Bologna) and Léo Portales (Université de Toulouse)
    Two talks
    Léo's title: Convergence of the iterates of Lloyd's algorithm
    Summary: Finding a discrete measure that approximates a target density, known as quantization, is an important problem in machine learning and is usually addressed with Lloyd's algorithm, a continuous counterpart of the K-means algorithm (see the sketch below). We study two variants of this algorithm: one where the weights of the discrete measure are fixed to be uniform (uniform quantization) and one where the weight associated with each point is adjusted to fit the target density (optimal quantization). In either case, it was not previously known whether the iterates of these algorithms converge. We prove that they do under the assumption that the target density is analytic and supported on a compact, convex semi-algebraic set, using tools from o-minimal geometry as well as the Kurdyka-Łojasiewicz inequality. Along the way, we also prove the definability in an o-minimal structure of functions of the form $(y_1, \dots, y_N) \mapsto D(\mu, \frac{1}{N}\sum_{i=1}^{N} \delta_{y_i})$ for the following divergences $D$: the $W_p$ Wasserstein distance, the max-sliced Wasserstein distance and the entropic regularized Wasserstein distance.
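
    A minimal sketch of the uniform-quantization variant of Lloyd's algorithm announced above (sample-based, one-dimensional, illustrative values):

    import numpy as np

    # Uniform quantization via Lloyd's algorithm: the target density is
    # represented by samples (here a 1D Gaussian, for illustration only).
    rng = np.random.default_rng(0)
    samples = rng.normal(size=10000)      # stand-in for the target density
    y = np.linspace(-0.5, 0.5, 8)         # initial quantization points

    for _ in range(50):
        # assignment step: nearest quantization point for each sample
        idx = np.abs(samples[:, None] - y[None, :]).argmin(axis=1)
        # update step: move each point to the barycenter of its cell
        for j in range(len(y)):
            cell = samples[idx == j]
            if cell.size > 0:
                y[j] = cell.mean()

    print(np.round(y, 3))                 # approximate optimal centers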

    Laura's title: Quaternary Image Decomposition
    Summary: Decomposing an image into meaningful components is a challenging inverse problem in image processing, with wide applications to cartooning, texture removal, denoising, soft shadow/spotlight removal, detail enhancement, etc. Valuable contributions to this problem rely on variational formulations whose intrinsic difficulty comes from the numerical intractability of the norms involved, from the tuning of the numerous model parameters and, above all, from the complexity of extracting noise from a textured image, given the strong similarity between these two components. In this talk, I will present a two-stage variational model for the additive decomposition of images into piecewise constant, smooth, textured and white-noise components, focusing on the selection of the regularization parameters and presenting numerical results on textured images corrupted by several kinds of additive white noise.
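
    Schematically, such quaternary models seek $f = u + v + w + n$ with $u$ piecewise constant, $v$ smooth, $w$ textured and $n$ noise, via an energy of the generic form (the talk's precise two-stage model and norms may differ)

    $\min_{u, v, w} \; \lambda_1 \mathrm{TV}(u) + \lambda_2 \| \nabla v \|_2^2 + \lambda_3 \| w \|_{T} + \frac{1}{2} \| f - u - v - w \|_2^2,$

    where $\| \cdot \|_T$ denotes some texture-adapted norm and the residual plays the role of the noise component.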
  • 11 April 2024 at 11:00
    Conference room
    François Chapon, Université de Toulouse
    Spectra of deformed banded Toeplitz matrices
    Toeplitz matrices are non-normal matrices whose large-dimensional spectral analysis is well understood. In particular, their spectrum is very sensitive to small perturbations. In this talk we will consider banded Toeplitz matrices, whose symbol is given by a Laurent polynomial, perturbed by a random matrix. The goal is to describe the eigenvalues lying outside the support of the limiting measure of the perturbation as the dimension tends to infinity, the so-called outliers, which appear according to the winding number of the curve in the complex plane determined by the symbol. Work in progress, joint with Mireille Capitaine and Charles Bordenave.
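
    An illustrative numerical sketch (conventions and scalings are a choice, not those of the talk): a banded Toeplitz matrix with symbol $f(z) = z + a z^{-2}$, deformed by a small Gaussian matrix.

    import numpy as np

    # Banded Toeplitz matrix T[j, k] = a_{j-k} for the symbol
    # f(z) = z + a*z^{-2}, i.e. a_1 = 1 (subdiagonal), a_{-2} = a.
    n, a = 400, 0.5
    T = np.diag(np.ones(n - 1), k=-1) + a * np.diag(np.ones(n - 2), k=2)

    rng = np.random.default_rng(0)
    E = rng.normal(size=(n, n)) / n       # small random deformation
    eig = np.linalg.eigvals(T + E)
    # the bulk of the spectrum of T + E organizes along the symbol curve
    # f(e^{it}); outliers may appear in regions of non-zero winding number
    print(eig[:5])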
  • 18 April 2024 at 11:00
    Conference room
    Pierre-Loïc Méliot, Université Paris-Saclay
    Second-order Ellis-Gärtner asymptotics for statistics of random tableaux and partitions
    If $(X_n)$ is a sequence of real random variables, the Ellis-Gärtner theorem relates the logarithms of the large-deviation probabilities $\log P[X_n > nx]$ to the asymptotics of the normalized log-Laplace transform $h \mapsto \frac{1}{n} \log E[e^{h X_n}]$. In this talk, we will explain under which conditions one can remove the logarithms and obtain an equivalent of the probabilities $P[X_n > nx]$. More precisely, if

    $\log E[e^{z X_n}] = n\,\Lambda(z) + \Psi(z) + o(1)$

    locally uniformly on the complex plane, then a simple condition on the real part of the function $\Lambda(z)$ yields an equivalent of the large-deviation probabilities (without logarithms). These techniques apply in particular to models involving random partitions or random standard Young tableaux; we will detail the results obtained in this case and some general proof techniques.
  • 25 April 2024 at 11:00
    Conference room
    Maud Biquard, ISAE-SUPAERO and CNES
    Variational Bayes image restoration with (compressive) autoencoders
    Regularization of inverse problems is of paramount importance in computational imaging. The ability of neural networks to learn efficient image representations has recently been exploited to design powerful data-driven regularizers. While state-of-the-art plug-and-play methods rely on an implicit regularization provided by neural denoisers, alternative Bayesian approaches consider Maximum A Posteriori (MAP) estimation in the latent space of a generative model, hence with an explicit regularization. However, state-of-the-art deep generative models require a huge amount of training data compared to denoisers, and their complexity hampers the optimization involved in latent MAP derivation. In this work, we first propose to use compressive autoencoders instead. These networks, which can be seen as variational autoencoders with a flexible latent prior, are smaller and easier to train than state-of-the-art generative models. As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm, which performs latent estimation within the framework of variational inference. Thanks to a simple yet efficient parameterization of the variational posterior, VBLE allows for fast and easy (approximate) posterior sampling. Experimental results on the image datasets BSD and FFHQ demonstrate that VBLE reaches performance similar to state-of-the-art plug-and-play methods, while quantifying uncertainties faster than other existing posterior sampling techniques.
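
    In symbols (generic form, not the talk's exact parameterization): given an observation $y = A x + \varepsilon$ and a decoder $D$ with latent prior $p(z)$, latent MAP estimation solves

    $\hat{z} \in \arg\min_{z} \; \frac{1}{2\sigma^2} \| A D(z) - y \|_2^2 - \log p(z), \qquad \hat{x} = D(\hat{z}),$

    and VBLE replaces this point estimate by variational inference over $z$, which is what enables approximate posterior sampling.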
  • 2 May 2024 at 11:00
    Conference room
    Jérôme Bolte, Université de Toulouse 1
    Nonsmooth differentiation of algorithms and solution maps
    The recent surge in algorithmic differentiation through the massive use of TensorFlow and PyTorch "autodiff" has democratized "computerized differentiation" for a broad spectrum of applications and solvers. Motivated by the challenges of nonsmoothness (such as thresholding, constraints, and ReLU) and the need to adjust parameters in various contexts directly via these solvers, we have devised tools for nonsmooth differentiation compatible with autodiff. We have in particular developed a nonsmooth implicit function calculus, aiming to provide robust guarantees for prevalent differentiation practices. We will discuss applications of these findings through the differentiation of algorithms and equations.
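
    A minimal illustration of the phenomenon at stake (assuming PyTorch is installed): at a kink, autodiff silently returns one particular element of the subdifferential.

    import torch

    # ReLU is nonsmooth at 0; autodiff must pick a value for the "derivative"
    x = torch.tensor(0.0, requires_grad=True)
    torch.relu(x).backward()
    print(x.grad)  # tensor(0.): one choice among the subgradients in [0, 1]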
  • 16 May 2024 at 11:00
    Conference room
    Bruno Galerne, Université d'Orléans
    Scaling Painting Style Transfer

    Neural style transfer (NST) is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image. It is particularly impressive when it comes to transferring style from a painting to an image. NST was originally achieved by solving an optimization problem to match the global statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate NST and produce larger images. However, our investigation shows that these accelerated methods all compromise the quality of the produced images in the context of painting style transfer. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and the texture of the canvas. This work provides a solution for solving the original global optimization at ultra-high resolution (UHR), enabling multiscale NST at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons, as well as a user study, show that our method produces style transfer of unmatched quality for such high-resolution painting styles. By careful comparison, we show that state-of-the-art fast methods are still prone to artifacts, suggesting that fast painting style transfer remains an open problem.


    Joint work with Lara Raad, José Lezama and Jean-Michel Morel.
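
    For reference, the global statistics matched by the original NST optimization are Gram matrices of VGG features (Gatys et al.): with $F^l \in \mathbb{R}^{N_l \times M_l}$ the layer-$l$ feature maps,

    $G^l_{ij} = \sum_{k} F^l_{ik} F^l_{jk}, \qquad \mathcal{L}_{\mathrm{style}}(x, s) = \sum_{l} w_l \, \big\| G^l(x) - G^l(s) \big\|_F^2,$

    and the spatial localization mentioned above splits these forward and backward computations so that UHR images fit in GPU memory.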


  • 23 May 2024 at 11:00
    Conference room
    Stephane Dartois, CEA
    Injective norm of random tensors and geometric entanglement of random quantum states

    In this talk, I will present the results of a collaboration with Benjamin McKenna on the injective norm of large random Gaussian tensors and uniform random quantum states and, time allowing, describe some of the context underlying this work. The injective norm is a natural generalization to tensors of the operator norm of a matrix and appears in multiple fields. In quantum information, the injective norm is an important measure of genuine multipartite entanglement of quantum states, known as geometric entanglement. In our recent preprint, we provide high-probability upper bounds, in different regimes, on the injective norm of real and complex Gaussian random tensors, which correspond to lower bounds on the geometric entanglement of random quantum states, and to bounds on the ground-state energy of a particular multispecies spherical spin glass model. Our result represents a first step towards solving an important question in quantum information that has long been part of the folklore.
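
    For a real order-3 tensor $T$, for instance, the injective norm is

    $\| T \|_{\mathrm{inj}} = \sup_{\| u \| = \| v \| = \| w \| = 1} \big| \langle T, \, u \otimes v \otimes w \rangle \big|,$

    which reduces to the operator norm when $T$ is a matrix; for a quantum state, the overlap with the closest product state plays this role, whence the link with geometric entanglement.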


  • 6 June 2024 at 11:00
    Conference room
    Francisco Andrade, ENS Paris
    Sparsistency for Inverse Optimal Transport

    Optimal Transport is a useful metric to compare probability distributions and to compute a pairing given a ground cost. Its entropic regularization variant (eOT) is crucial to have fast algorithms and to reflect fuzzy/noisy matchings. This work focuses on Inverse Optimal Transport (iOT), the problem of inferring the ground cost from samples drawn from a coupling that solves an eOT problem. It is a relevant problem that can be used to infer unobserved/missing links, and to obtain meaningful information about the structure of the ground cost yielding the pairing. On the one hand, iOT benefits from convexity; on the other hand, being ill-posed, it requires regularization to handle the sampling noise. This work presents an in-depth theoretical study of the $\ell^1$ regularization, which models for instance Euclidean costs with sparse interactions between features. Specifically, we derive a sufficient condition for the robust recovery of the sparsity of the ground cost, which can be seen as a far-reaching generalization of the Lasso's celebrated Irrepresentability Condition. To provide additional insight into this condition, we work out the Gaussian case in detail. We show that as the entropic penalty varies, the iOT problem interpolates between a graphical Lasso and a classical Lasso, thereby establishing a connection between iOT and graph estimation, an important problem in ML.
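
    For reference, the forward eOT problem whose optimal coupling is sampled in iOT reads (standard formulation)

    $P_\varepsilon = \arg\min_{P \in U(a, b)} \; \langle C, P \rangle + \varepsilon \, \mathrm{KL}(P \,\|\, a \otimes b),$

    where $U(a, b)$ is the set of couplings with marginals $a$ and $b$; iOT infers the ground cost $C$ from samples of $P_\varepsilon$, here with an $\ell^1$ penalty on the parameters of $C$.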


  • 10 June 2024, 13:30 to 14 June 2024, 12:00
    Conference room
    Journées de Probabilités 2024

    Event page: https://indico.math.cnrs.fr/event/11353/overview


  • 10 July 2024, 14:00 to 11 July 2024, 12:00
    Room 2 / conference room
    4 talks
    Mini-workshop: a gathering of probabilists

    Wednesday 10/07


    14:00 Jurgen Angst (Univ. Rennes)

    Title: CLT in total variation for beta-ensembles

    Abstract: In this talk, we study the fluctuations of linear statistics associated with beta-ensembles, which are statistical-physics models generalizing random matrix spectra. In the random matrix context precisely (e.g. GOE, GUE), the "law of large numbers" is Wigner's theorem, which states that the empirical measure of the eigenvalues converges to the semicircle law, and the fluctuations around equilibrium can be shown to be Gaussian. We will describe how this result generalizes to beta-ensembles and how the speed of convergence to the normal distribution can be quantified. We obtain optimal rates of convergence for the total variation distance and the Wasserstein distances. To do so, we introduce a variant of Stein's method for a generator $L$ that is not necessarily invertible, which allows us to establish the asymptotic normality of observables that are not in the image of $L$. Time permitting, we will also look at the phenomenon of super-convergence, which ensures that convergence to the normal law holds in very strong metrics, typically $C^{\infty}$-convergence of densities. The talk is based on recent works with R. Herry, D. Malicet and G. Poly.
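
    For reference, the beta-ensemble with potential $V$ has joint density (up to normalization conventions, which vary)

    $p(x_1, \dots, x_n) \propto \prod_{i < j} |x_i - x_j|^{\beta} \, \exp\Big( -n \sum_{i=1}^{n} V(x_i) \Big),$

    recovering the GOE and GUE eigenvalue laws for $\beta = 1, 2$ with quadratic $V$; the linear statistics in question are $\sum_i f(x_i)$ for test functions $f$.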


    15:00 Nicolas Juillet (Univ. Haute-Alsace)

    Title: Exact interpolation of 1-marginals

    Abstract: I shall present a new type of martingale that exactly interpolates any given family of 1-dimensional marginals on $\mathbb{R}$ (satisfying the suitable necessary assumption). The construction makes use of ideas from (martingale) optimal transportation theory and relies on different stochastic orders. I shall discuss related constructions and open questions (joint work with Brückerhoff and Huesmann).


    16:00 Kolehe Coulibaly-Pasquier (Institut Élie Cartan)

    Title: On the separation cut-off phenomenon for Brownian motions on high-dimensional rotationally symmetric compact manifolds

    Abstract: Given a family of compact, rotationally symmetric manifolds indexed by the dimension, together with a weight function, we will study the cut-off phenomenon for Brownian motion on this family.

    Our proof is based on the construction of an intertwined process, a strong stationary time, an estimate of the moments of the covering time of the dual process, and on the concentration of measure phenomenon.

    We will see a phase transition concerning the existence or not of a cut-off, depending on the shape of the weight function.



    Thursday 11/07


    11:00 Masha Gordina (Univ. of Connecticut)

    Title: Dimension-independent functional inequalities on sub-Riemannian manifolds

    Abstract: The talk will review recent results on gradient estimates, log-Sobolev inequalities, and reverse Poincaré and reverse log-Sobolev inequalities on a class of sub-Riemannian manifolds. As curvature bounds are not available in many such settings, we use different techniques, including tensorization and taking quotients. Joint work with F. Baudoin, L. Luo and R. Sarkar.



  • 11 July 2024 at 14:00
    Conference room
    Magalie Bénéfice, IMB
    Thesis defense: Coupling of stochastic processes in sub-Riemannian manifolds

    In this thesis we study couplings of subelliptic Brownian motions in several sub-Riemannian manifolds: the free step-2 Carnot groups, including the Heisenberg group, as well as the matrix groups $SU(2)$ and $SL(2,\mathbb{R})$.

    Taking inspiration from previous works on the Heisenberg group, we obtain successful non-co-adapted couplings on $SU(2)$ and $SL(2,\mathbb{R})$ (under strong hypotheses), and also on the free step-2 Carnot groups of rank $n\geq 3$. In particular we obtain estimates of the coupling rate, leading to gradient inequalities for the heat semigroup and for harmonic functions. We also describe the explicit construction of a successful co-adapted coupling on $SU(2)$.

    Finally, we develop a new coupling model "in one sweep" for all free step-2 Carnot groups. In particular, this method allows us to obtain relations similar to the Bismut-Elworthy-Li formula for the gradient of the semigroup by studying a change of probability on the Gaussian space.


  • 12 September 2024 at 11:15
    Conference room
    Marc Arnaudon, IMB
    (Proba-Stat) Intertwining, perfect simulation and diffusions in Wasserstein space

    The main motivation of this talk is to find strong stationary times for Markov processes $(X_t)$, that is, stopping times $T$ such that $X_T$ is at equilibrium and $T$ and $X_T$ are independent. To find strong stationary times $T$, it is natural, and in some cases very easy, to use dual processes $(D_t)$ such that $T$ is the hitting time of a certain state by the dual process. We will study the intertwining between $(X_t)$ and $(D_t)$. We will give examples for Markov chains on finite state spaces, and then turn to Brownian motion with set-valued dual processes. Pitman's surprising "2M-X" theorem gives an example of intertwining for Brownian motion on the circle. We will generalize this theorem to compact Riemannian manifolds and construct strong stationary times. We will study the phase transition in high dimension. Finally, we will consider measure-valued duals.
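
    Pitman's theorem, in its classical form on the line: if $(B_t)$ is a standard Brownian motion and $M_t = \sup_{s \le t} B_s$, then

    $\big( 2M_t - B_t \big)_{t \ge 0}$

    is a Bessel(3) process; the circle and manifold versions discussed in the talk generalize this intertwining relation.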


  • 26 September 2024 at 11:15
    LaBRI, room 076
    Yvan Le Borgne, U-Bordeaux - LaBRI
    (proba-stats) Pearl's approach to causality

    This talk is an example-driven introduction to the theory of causality developed since the late 1990s by Judea Pearl, for which he received, in part, his 2011 ACM Turing Award, the computer-science equivalent of the Abel Prize. We will consider a classical model whose assumptions are encoded in a causal graph; it features, in particular, an unobservable common cause and a variable that cannot ethically be controlled. Adopting a computer-science vocabulary, we will work through, in detail, a query about the execution traces of a program that cannot be run, using statistics on the traces of another program that can. The elements encountered in this analysis will then be used to present the overall architecture of Pearl's approach. Time permitting, we will discuss how the probabilistic computations arising in this context can often be reformulated purely in terms of graph theory.
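
    A hypothetical numerical illustration of the kind of query treated in the talk (simulated data; backdoor adjustment): with a confounder $Z$, the interventional quantity $P(y \mid do(x)) = \sum_z P(y \mid x, z) P(z)$ differs from the observational $P(y \mid x)$.

    import numpy as np

    # Z is a common cause of X and Y, so conditioning on X alone is biased.
    rng = np.random.default_rng(0)
    n = 200000
    z = rng.binomial(1, 0.5, n)                      # confounder
    x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # Z pushes X up
    y = rng.binomial(1, 0.2 + 0.3 * x + 0.4 * z)     # Y depends on X and Z

    # naive observational estimate P(Y=1 | X=1)
    p_obs = y[x == 1].mean()

    # backdoor adjustment: sum over z of P(Y=1 | X=1, Z=z) * P(Z=z)
    p_do = sum(y[(x == 1) & (z == v)].mean() * (z == v).mean() for v in (0, 1))

    print(round(p_obs, 3), round(p_do, 3))  # approx 0.82 vs 0.70 here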


  • 10 October 2024 at 09:00
    Conference room
    IMB start-of-year day

    No IOP seminar (slot released for the event)


  • 17 October 2024 at 11:15
    Room 1
    Jean Peyhardi, U. Montpellier
    (Proba-Stats) Polya urn models for multivariate species abundance data: properties and applications

    Joint seminar with OptimAI.


    This talk focuses on models for multivariate count data, with emphasis on species abundance data. Two approaches emerge in this framework: the Poisson log-normal (PLN) and the Tree Dirichlet multinomial (TDM) models. The first uses a latent Gaussian vector to model dependencies between species, whereas the second models dependencies directly on the observed abundances. The TDM model assumes that the total abundance is fixed; it is therefore often used for microbiome data, since the sequencing depth (in RNA-seq) varies from one observation to another, making the total abundance not really interpretable. We propose to generalize the TDM model in two ways: by relaxing the fixed total abundance and by using the Polya distribution instead of the Dirichlet multinomial. This family of models corresponds to Polya urn models with a random number of draws and will be called Polya splitting distributions. In a first part I will present the probabilistic properties of such models, focusing on the marginals and the probabilistic graphical model. It will then be shown that these models arise as stationary distributions of multivariate birth-death processes under simple parametric assumptions on the birth-death rates; these assumptions are related to the neutral theory of biodiversity, which assumes no biological interaction between species. Finally, the statistical aspects of Polya splitting models will be presented: the regression framework, inference, the use of a partition tree structure and two applications to real data.
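
    A quick simulation (hypothetical parameters) of the Dirichlet-multinomial building block that Polya splitting distributions generalize: a Polya urn with $N$ draws; the talk additionally randomizes $N$.

    import numpy as np

    # Species abundances from a Dirichlet-multinomial (Polya urn, N draws).
    rng = np.random.default_rng(0)
    alpha = np.array([2.0, 1.0, 0.5])   # urn composition (3 species)
    N = 50                              # total abundance (fixed here)

    p = rng.dirichlet(alpha)            # latent species proportions
    counts = rng.multinomial(N, p)      # observed abundances
    print(counts, counts.sum())         # the counts sum to N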


  • 7 November 2024 at 11:15
    Conference room
    Julien Mairal, INRIA Grenoble
    (Maths-IA) Physical Models and Machine Learning for Scientific Imaging

    Deep learning has revolutionised image processing and is often considered to outperform classical approaches based on accurate modelling of the image-formation process. In this presentation, we will discuss the interplay between model-based and learning-based paradigms, and show that hybrid approaches hold great promise for scientific imaging, where interpretability and robustness to real-world degradations are important. We will present applications to super-resolution and high-dynamic-range imaging, and to exoplanet detection from direct imaging at high contrast.


    Don't forget to subscribe to the maths-ia mailing list!

    https://listes.math.u-bordeaux.fr/wws/subscribe/mathsia?previous_action=info


  • 14 November 2024 at 11:15
    Conference room
    Gersende Fort, Institut de Mathématiques de Toulouse, CNRS
    (Proba-Stat) Stochastic Approximation: Finite-time analyses and Variance Reduction

    In statistical learning, many analyses and methods rely on optimization, including its stochastic versions, introduced for example to overcome intractability of the objective function or to reduce the computational cost of a deterministic optimization step.

    In 1951, H. Robbins and S. Monro introduced a novel iterative algorithm, named "Stochastic Approximation", for computing the zeros of a function defined by an expectation with no closed-form expression. The algorithm produces a sequence of iterates by replacing, at each iteration, the unknown expectation with a Monte Carlo approximation based on a single sample. The method was later generalized: it is a stochastic algorithm designed to find the zeros of a vector field when only stochastic oracles of this vector field are available.

    Stochastic Gradient Descent algorithms are the most popular examples of Stochastic Approximation: the oracles come from a Monte Carlo approximation of a large sum. Possibly less popular are the examples "beyond the gradient case", for at least two reasons. First, they rely on oracles that are biased approximations of the vector field, as occurs when biased Monte Carlo sampling is used to define the oracles. Second, the vector field is not necessarily a gradient field. Many examples in statistics and, more generally, in statistical learning are "beyond the gradient case": among them, let us cite compressed stochastic gradient descent, stochastic majorize-minimization methods such as the Expectation-Maximization algorithm, and the Temporal Difference algorithm in reinforcement learning.

    In this talk, we will show that these "beyond the gradient case" Stochastic Approximation algorithms still converge, even when the oracles are biased, as soon as some parameters of the algorithm are tuned well enough. We will discuss what "tuned well enough" means when the quality criterion relies on epsilon-approximate stationarity, and we will comment on the efficiency of the algorithm in terms of sample complexity. These analyses are based on non-asymptotic convergence bounds in expectation: we will present a unified method to obtain such bounds for a large class of Stochastic Approximation methods, covering both the gradient case and the beyond-the-gradient case. Finally, a variance-reduction technique will be described and its efficiency illustrated.
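
    A minimal sketch of the original Robbins-Monro scheme on a toy problem (target and step sizes are illustrative choices): find $\theta^*$ solving $h(\theta) = E[H(\theta, X)] = 0$ from one sample per iteration.

    import numpy as np

    # Robbins-Monro: solve h(theta) = E[theta - X] = 0, i.e. theta* = E[X],
    # using one noisy oracle H(theta, X) = theta - X per iteration.
    rng = np.random.default_rng(0)
    theta = 0.0
    for n in range(1, 100001):
        x = rng.normal(loc=3.0, scale=1.0)   # one Monte Carlo sample
        gamma = 1.0 / n                      # decreasing step size
        theta -= gamma * (theta - x)         # stochastic oracle step
    print(theta)                             # approx 3.0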


  • 21 November 2024 at 11:15
    Conference room
    Thierry Emeric Gbaguidi, IMB, U-Bordeaux
    On the SAGA algorithm with decreasing step

    Stochastic optimization appears naturally in many application areas, including machine learning. Our goal is to go further in the analysis of the Stochastic Average Gradient Accelerated (SAGA) algorithm. To this end, we introduce a new $\lambda$-SAGA algorithm which interpolates between Stochastic Gradient Descent ($\lambda=0$) and the SAGA algorithm ($\lambda=1$). First, we investigate the almost sure convergence of this new algorithm with decreasing step size, which allows us to avoid the restrictive strong-convexity and Lipschitz-gradient hypotheses on the objective function. Second, we establish a central limit theorem for the $\lambda$-SAGA algorithm. Finally, we provide non-asymptotic $L^p$ rates of convergence.
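
    A hedged sketch of a SAGA-type update with an interpolation parameter in the spirit of the abstract; the exact $\lambda$-SAGA recursion is the speaker's, so this reading (with $\lambda$ scaling the memory terms) is an assumption, shown on a synthetic least-squares problem.

    import numpy as np

    # lam = 0 gives plain SGD, lam = 1 gives SAGA; intermediate values
    # interpolate. The talk's precise lambda-SAGA recursion may differ.
    rng = np.random.default_rng(0)
    m, d = 200, 5
    A = rng.normal(size=(m, d))
    b = A @ rng.normal(size=d)
    x = np.zeros(d)
    table = np.zeros((m, d))              # last stored gradient per sample
    lam = 1.0

    for n in range(1, 50001):
        i = rng.integers(m)
        g = (A[i] @ x - b[i]) * A[i]      # fresh gradient of sample i
        x -= (0.3 / n**0.6) * (g - lam * table[i] + lam * table.mean(axis=0))
        table[i] = g                      # update the gradient table
    print(np.linalg.norm(A @ x - b))      # residual decreases toward 0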


  • 28 November 2024 at 11:15
    Conference room
    Michel Bonnefont, IMB
    (proba-stats) Stochastic couplings and total variation distance on the Heisenberg group and Carnot groups

    In this talk, I will consider Brownian motion in simple sub-Riemannian settings: the Heisenberg group and step-2 Carnot groups. We propose a construction of a coupling of two Brownian motions at a fixed time. This construction is based on a Legendre decomposition of the standard Brownian motion and of its Lévy area. We then derive sharp estimates for the decay in total variation between the laws of the Brownian motions and, through a change-of-probability technique, a Bismut-type integration-by-parts formula as well as reverse-Poincaré regularization estimates for the associated semigroup. Joint work with Marc Arnaudon, Magalie Bénéfice and Delphine Féral.
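
    For reference (up to normalization conventions), Brownian motion on the Heisenberg group is obtained from a planar Brownian motion $(B^1_t, B^2_t)$ as

    $X_t = \Big( B^1_t, \; B^2_t, \; \tfrac{1}{2} \int_0^t \big( B^1_s \, dB^2_s - B^2_s \, dB^1_s \big) \Big),$

    the third coordinate being the Lévy area appearing in the coupling construction above.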


  • 5 December 2024 at 11:15
    Conference room
    Julien Hermant, IMB
    (Maths-IA) Gradient Correlation allows for faster optimization

    Many problems, especially in machine learning, can be formulated as optimization problems. Optimization algorithms such as stochastic gradient descent or ADAM have become a cornerstone for solving them. For many practical cases, however, theoretical proofs of their efficiency are lacking. In particular, it has been observed empirically that adding a momentum mechanism to stochastic gradient descent often solves these optimization problems more efficiently. In this talk, we introduce a condition, linked to a measure of gradient correlation, that makes it possible to characterize theoretically when this acceleration can be observed.
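
    A minimal sketch of the momentum mechanism in question (stochastic heavy ball on an illustrative quadratic; this is the mechanism, not the talk's analysis):

    import numpy as np

    # SGD with heavy-ball momentum on f(x) = |x|^2 / 2 with noisy gradients.
    rng = np.random.default_rng(0)
    d = 10
    x, v = np.ones(d), np.zeros(d)
    step, beta = 0.1, 0.9                 # learning rate and momentum

    for _ in range(500):
        g = x + 0.01 * rng.normal(size=d) # noisy gradient of f
        v = beta * v + g                  # momentum accumulates correlated gradients
        x -= step * v
    print(np.linalg.norm(x))              # near 0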


  • 12 December 2024 at 11:15
    Room 1
    Jordan Serres, INSA
    (proba-stats) Statistical estimation in a geometric context

    Statistical estimation in a geometric context has become an increasingly important topic in recent years, not least due to the development in ML of non-linear dimension-reduction techniques, which project high-dimensional data onto a much lower-dimensional submanifold. The search for statistical guarantees justifying both the use and the effectiveness of these algorithms is now a well-studied area. In this talk, we will take a geometric view of the issue and see how some standard curvature quantities translate into algorithmic guarantees. First, we will see that upper bounds on sectional curvature yield good properties for barycenter estimation; then we will see that a lower bound on Ricci curvature implies the existence of depth points, giving rise to robust statistical estimators. This is based on joint work with Victor-Emmanuel Brunel (ENSAE Paris) and Shin-ichi Ohta (Osaka University).

