
Séminaire Images Optimisation et Probabilités

Organizers: Luis Fredes and Camille Male

  • November 7, 2024 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Julien Mairal, INRIA Grenoble
    (Maths-IA) Physical Models and Machine Learning for Scientific Imaging

    Deep learning has revolutionised image processing and is often considered to outperform classical approaches based on accurate modelling of the image formation process. In this presentation, we will discuss the interplay between model-based and learning-based paradigms, and argue that hybrid approaches hold great promise for scientific imaging, where interpretability and robustness to real-world degradations are important. We will present two applications: super-resolution and high-dynamic-range imaging, and exoplanet detection from direct imaging at high contrast.
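
    To make this hybrid model-based/learning-based interplay concrete, here is a minimal plug-and-play sketch in Python: a gradient step on a physics-based data-fidelity term, followed by a denoiser standing in for a learned prior. The identity forward operator and the moving-average "denoiser" are toy stand-ins for illustration, not the models from the talk.

        import numpy as np

        def pnp_step(x, y, A, At, denoiser, step):
            """One plug-and-play iteration: a gradient step on the
            data-fidelity term 0.5*||A x - y||^2 (the physical model),
            then a denoiser acting as the image prior."""
            grad = At(A(x) - y)
            return denoiser(x - step * grad)

        # Toy 1D example: A = identity, "denoiser" = moving average.
        rng = np.random.default_rng(0)
        clean = np.zeros(64)
        clean[20:40] = 1.0
        y = clean + 0.3 * rng.standard_normal(64)    # noisy observation
        smooth = lambda v: np.convolve(v, np.ones(5) / 5, mode="same")
        x = y.copy()
        for _ in range(20):
            x = pnp_step(x, y, lambda v: v, lambda v: v, smooth, step=0.5)
        print("MSE vs ground truth:", np.mean((x - clean) ** 2))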


    Don't forget to subscribe to the maths-ia mailing list!

    https://listes.math.u-bordeaux.fr/wws/subscribe/mathsia?previous_action=info


  • November 14, 2024 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Gersende Fort, Institut de Mathématiques de Toulouse, CNRS
    (Proba-Stat) Stochastic Approximation: Finite-time analyses and Variance Reduction

    In statistical learning, many analyses and methods rely on optimization, including stochastic versions introduced, for example, to overcome the intractability of the objective function or to reduce the computational cost of a deterministic optimization step.

    In 1951, H. Robbins and S. Monro introduced a novel iterative algorithm, named "Stochastic Approximation", for computing the zeros of a function defined as an expectation with no closed-form expression. The algorithm produces a sequence of iterates by replacing, at each iteration, the unknown expectation with a Monte Carlo approximation based on a single sample. The method was later generalized: it is a stochastic algorithm designed to find the zeros of a vector field when only stochastic oracles of that vector field are available.
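
    For intuition, a minimal Robbins-Monro sketch (a toy example, not from the talk): to find the zero of h(theta) = E[theta - X] with X ~ N(mu, 1), i.e. theta* = mu, replace the expectation by a single sample at each iteration and use decreasing steps gamma_n = 1/n.

        import numpy as np

        # Robbins-Monro: theta_{n+1} = theta_n - gamma_n * H(theta_n, X_{n+1}),
        # where H(theta, x) = theta - x is an unbiased oracle of h(theta).
        rng = np.random.default_rng(0)
        mu, theta = 3.0, 0.0
        for n in range(1, 100_001):
            x = mu + rng.standard_normal()    # one Monte Carlo sample
            theta -= (1.0 / n) * (theta - x)  # decreasing step gamma_n = 1/n
        print(theta)                          # close to mu = 3.0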

    Stochastic Gradient Descent algorithms are the most popular examples of Stochastic Approximation: their oracles come from a Monte Carlo approximation of a large sum. Possibly less popular are examples "beyond the gradient case", for at least two reasons. First, they rely on oracles that are biased approximations of the vector field, as occurs when biased Monte Carlo sampling is used to define the oracles. Second, the vector field is not necessarily a gradient field. Many examples in statistics, and more generally in statistical learning, are "beyond the gradient case": among them, let us cite compressed stochastic gradient descent, stochastic Majorize-Minimization methods such as the Expectation-Maximization algorithm, and the Temporal Difference algorithm in reinforcement learning.

    In this talk, we will show that these "beyond the gradient case" Stochastic Approximation algorithms still converge, even when the oracles are biased, as soon as some parameters of the algorithm are tuned well enough. We will discuss what "tuned well enough" means when the quality criterion relies on epsilon-approximate stationarity. We will also comment on the efficiency of the algorithm through its sample complexity. These analyses are based on non-asymptotic convergence bounds in expectation: we will present a unified method for obtaining such bounds for a large class of Stochastic Approximation methods, covering both the gradient case and the beyond-the-gradient case. Finally, a Variance Reduction technique will be described and its efficiency illustrated.
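
    To make the "beyond the gradient case" concrete, here is a toy TD(0) sketch: the update has the Stochastic Approximation form, yet its mean field is not a gradient field. The Markov chain, rewards, and step sizes below are invented for illustration.

        import numpy as np

        # TD(0) with one-hot features on a toy Markov chain:
        # w[s] <- w[s] + gamma_n * delta_n, a non-gradient mean field.
        rng = np.random.default_rng(1)
        n_states, discount = 5, 0.9
        P = np.full((n_states, n_states), 1.0 / n_states)  # uniform transitions
        r = np.linspace(0.0, 1.0, n_states)                # per-state rewards
        w = np.zeros(n_states)                             # value estimates
        s = 0
        for n in range(1, 200_001):
            s_next = rng.choice(n_states, p=P[s])
            delta = r[s] + discount * w[s_next] - w[s]     # TD error
            w[s] += delta / n**0.7                         # decreasing step
            s = s_next
        V = np.linalg.solve(np.eye(n_states) - discount * P, r)  # exact values
        print(np.max(np.abs(w - V)))                             # small residual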


  • November 21, 2024 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Thierry Emeric Gbaguidi, IMB, U-Bordeaux
    On the SAGA algorithm with decreasing step

    Stochastic optimization problems naturally appear in many application areas, including machine learning. Our goal is to go further in the analysis of the Stochastic Average Gradient Accelerated (SAGA) algorithm. To achieve this, we introduce a new $\lambda$-SAGA algorithm which interpolates between Stochastic Gradient Descent ($\lambda=0$) and the SAGA algorithm ($\lambda=1$). First, we investigate the almost sure convergence of this new algorithm with decreasing steps, which allows us to avoid the restrictive strong-convexity and Lipschitz-gradient hypotheses on the objective function. Second, we establish a central limit theorem for the $\lambda$-SAGA algorithm. Finally, we provide non-asymptotic $L^p$ rates of convergence.
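
    Since the abstract pins down only the endpoints lambda = 0 and lambda = 1, the sketch below scales the SAGA variance-correction term by lambda as one plausible form of the interpolation; the speakers' exact definition may differ, and the least-squares problem is a toy example.

        import numpy as np

        # lambda-SAGA sketch on f(x) = (1/2n) sum_i (a_i . x - b_i)^2, using
        # direction = grad_i(x) - lam * (g_i - mean_j g_j) with a table of
        # stored past gradients g_i: lam = 0 is SGD, lam = 1 is SAGA.
        rng = np.random.default_rng(2)
        n, d, lam = 200, 5, 1.0
        A = rng.standard_normal((n, d))
        x_true = rng.standard_normal(d)
        b = A @ x_true
        x = np.zeros(d)
        table = np.zeros((n, d))                  # stored gradients g_i
        avg = table.mean(axis=0)                  # running mean of the table
        for k in range(1, 50_001):
            i = rng.integers(n)
            g_new = (A[i] @ x - b[i]) * A[i]      # gradient of f_i at x
            x -= (g_new - lam * (table[i] - avg)) / (k**0.6 + 10)  # decreasing step
            avg += (g_new - table[i]) / n         # keep the mean in sync
            table[i] = g_new
        print(np.linalg.norm(x - x_true))         # distance to the solution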


  • November 28, 2024 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Michel Bonnefont, IMB
    (Proba-Stat) To be announced

    To be announced


  • December 5, 2024 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Julien Hermant, IMB
    (Maths-IA) To be announced

    To be announced


  • December 12, 2024 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Jordan Serres, INSA
    (Proba-Stat) To be announced

    To be announced


  • January 9, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Franck Iutzeler, Institut de Mathématiques de Toulouse
    (Maths-IA) To be announced

    To be announced


  • January 23, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Aram-Alexandre Pooladian, NYU
    (Proba-Stat) To be announced

    To be announced


  • January 30, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    David Picard, ENPC
    To be announced

    To be announced


  • February 6, 2025 at 10:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Cécilia Lancien, Institut Fourier & CNRS
    (Proba-Stat) To be announced

    To be announced


  • February 6, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Rémi Gribonval, INRIA
    (Maths-IA) To be announced

    To be announced


  • March 13, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Sibylle Marcotte, ENS
    (Maths-IA) To be announced

    To be announced


  • April 3, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Adrien Taylor, INRIA
    (Maths-IA) To be announced

    To be announced


  • June 5, 2025 at 11:15
  • Séminaire Images Optimisation et Probabilités
    Conference room
    Nicolas Keriven, CNRS
    (Maths-IA) To be announced

    To be announced


    Past seminars