Organizers: Luis Fredes and Camille Male
Neural style transfer (NST) is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image. It is particularly impressive when it comes to transferring style from a painting to an image. NST was originally achieved by solving an optimization problem to match the global statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate NST and produce larger images. However, our investigation shows that these accelerated methods all compromise the quality of the produced images in the context of painting style transfer. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution (UHR) images, enabling multiscale NST at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons, as well as a user study, show that our method produces style transfer of unmatched quality for such high-resolution painting styles. By a careful comparison, we show that state-of-the-art fast methods are still prone to artifacts, suggesting that fast painting style transfer remains an open problem.
Joint work with Lara Raad, José Lezama and Jean-Michel Morel.
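As a point of reference for the global optimization mentioned above, the classical Gatys-style formulation minimizes a weighted sum of a content loss and a style loss built from Gram matrices of VGG features. The following is a minimal sketch of that baseline, assuming PyTorch/torchvision; the layer indices and loss weights are illustrative choices, not the paper's exact ones.

```python
# Minimal sketch of the classical (Gatys-style) NST optimization baseline.
# Layer indices and loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = (1, 6, 11, 20, 29)   # assumed conv layers for style statistics
CONTENT_LAYERS = (22,)              # assumed conv layer for content features

def extract(x, layers):
    """Return the feature maps of `x` at the requested layer indices."""
    feats, h = {}, x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layers:
            feats[i] = h
    return feats

def gram(f):
    """Gram matrix of a feature map: the 'global statistics' matched for style."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def nst_loss(x, content_img, style_img, alpha=1.0, beta=1e4):
    fx = extract(x, STYLE_LAYERS + CONTENT_LAYERS)
    fc = extract(content_img, CONTENT_LAYERS)
    fs = extract(style_img, STYLE_LAYERS)
    content = sum(F.mse_loss(fx[i], fc[i]) for i in CONTENT_LAYERS)
    style = sum(F.mse_loss(gram(fx[i]), gram(fs[i])) for i in STYLE_LAYERS)
    return alpha * content + beta * style

# The stylized image is obtained by initializing x at the content image and
# running L-BFGS (or Adam) on nst_loss with respect to the pixels of x.
```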
In this talk, I will present the results of a collaboration with Benjamin McKenna on the injective norm of large random Gaussian tensors and uniform random quantum states and, time allowing, describe some of the context underlying this work. The injective norm is a natural generalization to tensors of the operator norm of a matrix and appears in multiple fields. In quantum information, the injective norm is one important measure of genuine multipartite entanglement of quantum states, known as geometric entanglement. In our recent preprint, we provide high-probability upper bounds in different regimes on the injective norm of real and complex Gaussian random tensors, which correspond to lower bounds on the geometric entanglement of random quantum states, and to bounds on the ground-state energy of a particular multispecies spherical spin glass model. Our result represents a first step towards solving an important question in quantum information that has long been part of the folklore.
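For reference (order-3 case for concreteness, notation mine, and up to the choice of logarithm base), the injective norm and the associated geometric entanglement of a unit-norm state $\psi$ are

$$\|T\|_{\mathrm{inj}} \;=\; \sup_{\|x\|=\|y\|=\|z\|=1} \big|\langle T,\; x\otimes y\otimes z\rangle\big|, \qquad E_G(\psi) \;=\; -\log\, \sup_{\substack{\alpha,\beta,\gamma \\ \text{unit vectors}}} \big|\langle \psi,\; \alpha\otimes\beta\otimes\gamma\rangle\big|^2,$$

so a high-probability upper bound on the injective norm of a random tensor directly yields a lower bound on the geometric entanglement of the corresponding random state.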
Optimal Transport is a useful metric to compare probability distributions and to compute a pairing given a ground cost. Its entropic regularization variant (eOT) is crucial to have fast algorithms and to reflect fuzzy/noisy matchings. This work focuses on Inverse Optimal Transport (iOT), the problem of inferring the ground cost from samples drawn from a coupling that solves an eOT problem. It is a relevant problem that can be used to infer unobserved/missing links, and to obtain meaningful information about the structure of the ground cost yielding the pairing. On the one hand, iOT benefits from convexity; on the other hand, being ill-posed, it requires regularization to handle the sampling noise. This work presents an in-depth theoretical study of the l1 regularization to model, for instance, Euclidean costs with sparse interactions between features. Specifically, we derive a sufficient condition for the robust recovery of the sparsity of the ground cost that can be seen as a far-reaching generalization of the Lasso's celebrated Irrepresentability Condition. To provide additional insight into this condition, we work out in detail the Gaussian case. We show that as the entropic penalty varies, the iOT problem interpolates between a graphical Lasso and a classical Lasso, thereby establishing a connection between iOT and graph estimation, an important problem in ML.
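In a standard formulation (which may differ from the paper's exact conventions), the entropic OT coupling associated with a ground cost $c$ is

$$\pi_\varepsilon(c) \;=\; \operatorname*{argmin}_{\pi \in \Pi(\mu,\nu)} \;\langle c,\pi\rangle \;+\; \varepsilon\,\mathrm{KL}\big(\pi \,\|\, \mu\otimes\nu\big),$$

and iOT seeks to recover (a parametrization of) $c$ from i.i.d. samples of $\pi_\varepsilon(c)$, here with an $\ell^1$ penalty on the cost parameters to promote sparse interactions between features.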
Event page: https://indico.math.cnrs.fr/event/11353/overview
Wednesday 10/07
14h00 Jurgen Angst (Univ. Rennes)
Title : CLT in total variation for beta-ensembles
Abstract : In this talk, we study the fluctuations of linear statistics associated with beta-ensembles, which are statistical physics models generalizing random matrix spectra. In the specific context of random matrices (e.g. GOE, GUE), the "law of large numbers" is Wigner's theorem, which states that the empirical measure of the eigenvalues converges to the semicircle law, and the fluctuations around equilibrium can be shown to be Gaussian. We will describe how this result generalizes to beta-ensembles and how the speed of convergence to the normal distribution can be quantified. We obtain optimal rates of convergence for the total variation distance and the Wasserstein distances. To do this, we introduce a variant of Stein's method for a generator $L$ that is not necessarily invertible, which allows us to establish the asymptotic normality of observables that are not in the image of $L$. Time permitting, we will also look at the phenomenon of super-convergence, which ensures that convergence to the normal law takes place in very strong metrics, typically $C^{\infty}$-convergence of densities. The talk is based on recent works with R. Herry, D. Malicet and G. Poly.
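To fix notation (one common convention; normalizations of the potential vary), a beta-ensemble with potential $V$ is the law

$$d\mathbb{P}_n^{\beta,V}(\lambda_1,\dots,\lambda_n) \;\propto\; \prod_{1\le i<j\le n} |\lambda_i-\lambda_j|^{\beta}\; e^{-\frac{\beta n}{2}\sum_{i=1}^n V(\lambda_i)}\, d\lambda_1\cdots d\lambda_n,$$

and the linear statistics whose fluctuations are studied are of the form $X_n(f)=\sum_{i=1}^n f(\lambda_i) - n\int f\, d\mu_V$, where $\mu_V$ is the equilibrium measure (the semicircle law in the GOE/GUE case).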
15h00 Nicolas Juillet (Univ. Haute-Alsace)
Title : Exact interpolation of 1-marginals
Abstract : I shall present a new type of martingale that exactly interpolates any given family of 1-dimensional marginals on $\mathbb{R}$ (satisfying the suitable necessary assumption). The construction makes use of ideas from (martingale) optimal transportation theory and relies on different stochastic orders. I shall discuss related constructions and open questions (joint work with Brückerhoff and Huesmann).
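The 'suitable necessary assumption' is presumably the classical one from Strassen-Kellerer theory: the marginals $(\mu_t)$ must be non-decreasing in the convex order,

$$\mu_s \le_{\mathrm{c}} \mu_t \quad\text{for } s\le t, \qquad\text{i.e.}\quad \int \varphi\, d\mu_s \;\le\; \int \varphi\, d\mu_t \;\text{ for every convex } \varphi,$$

which by Kellerer's theorem is equivalent to the existence of some martingale with these 1-dimensional marginals.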
16h00 Kolehe Coulibaly-Pasquier (Inst. Élie Cartan)
Title : On the separation cut-off phenomenon for Brownian motions on high-dimensional rotationally symmetric compact manifolds.
Abstract : Given a family of compact, rotationally symmetric manifolds indexed by the dimension and a weight function, we will study the cut-off phenomenon for Brownian motion on this family.
Our proof is based on the construction of an intertwined process, a strong stationary time, an estimate of the moments of the covering time of the dual process, and the concentration of measure phenomenon.
We will see a phase transition for the existence of the cut-off phenomenon, which depends on the shape of the weight function.
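Recall (notation mine) that the separation distance at time $t$ from a starting point $x$ is

$$\mathrm{sep}_x(t) \;=\; \operatorname*{ess\,sup}_{y}\;\Big(1 - \frac{d\mu_t^x}{d\pi}(y)\Big),$$

where $\mu_t^x$ is the law of the Brownian motion at time $t$ started at $x$ and $\pi$ the normalized volume measure; a separation cut-off at times $(t_n)$ means that $\mathrm{sep}_x\big((1+\varepsilon)t_n\big)\to 0$ while $\mathrm{sep}_x\big((1-\varepsilon)t_n\big)\to 1$ for every $\varepsilon>0$ as the dimension grows.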
Thursday 11/07
11h00 Masha Gordina (Univ. of Connecticut)
Title : Dimension-independent functional inequalities on sub-Riemannian manifolds
Abstract : The talk will review recent results on gradient estimates, log-Sobolev inequalities, and reverse Poincaré and log-Sobolev inequalities on a class of sub-Riemannian manifolds. As curvature bounds are not available in many such settings, we use different techniques, including tensorization and taking quotients. Joint work with F. Baudoin, L. Luo and R. Sarkar.
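As a reference point (generic form; constants and conventions vary), a dimension-independent log-Sobolev inequality in this setting reads

$$\int_M f^2 \log\frac{f^2}{\int_M f^2\, d\mu}\; d\mu \;\le\; C \int_M |\nabla_{\mathcal H} f|^2\, d\mu,$$

where $\nabla_{\mathcal H}$ denotes the horizontal (sub-Riemannian) gradient; the point of such results is that $C$ can be taken independent of the dimension.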
In this thesis we study couplings of subelliptic Brownian motions in several sub-Riemannian manifolds: the free step-$2$ Carnot groups, including the Heisenberg group, as well as the matrix groups $SU(2)$ and $SL(2,\mathbb{R})$.
Taking inspiration from previous works on the Heisenberg group, we obtain successful non-co-adapted couplings on $SU(2)$ and $SL(2,\mathbb{R})$ (under strong hypotheses), as well as on the free step-$2$ Carnot groups of rank $n\geq 3$. In particular, we obtain estimates of the coupling rate, leading to gradient inequalities for the heat semigroup and for harmonic functions. We also describe the explicit construction of a successful co-adapted coupling on $SU(2)$.
Finally, we develop a new "one-sweep" coupling model for all free step-$2$ Carnot groups. In particular, this method allows us to obtain relations similar to the Bismut-Elworthy-Li formula for the gradient of the semigroup by studying a change of probability on the Gaussian space.
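For comparison, the classical Bismut-Elworthy-Li formula (stated here, with my notation, for an elliptic SDE with identity diffusion coefficient) expresses the gradient of the semigroup without differentiating $f$:

$$\nabla P_t f(x)\cdot v \;=\; \frac{1}{t}\,\mathbb{E}\Big[f(X_t^x)\int_0^t \langle J_s v,\; dB_s\rangle\Big],$$

where $(X_t^x)$ solves the SDE driven by the Brownian motion $(B_t)$ and $J_s$ is its derivative flow; the relations mentioned above are analogues of this identity in the step-$2$ Carnot setting.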
The main motivation of this talk is to find strong stationary times for Markov processes (X_t), that is, stopping times T such that X_T is at equilibrium and T and X_T are independent. To find strong stationary times T, it is natural, and in some cases very easy, to use dual processes (D_t) such that T is the hitting time of a certain state by the dual process. We will study the intertwining between (X_t) and (D_t). We will give examples for Markov chains on finite state spaces, and then turn to Brownian motion with set-valued dual processes. Pitman's surprising "2M-X" theorem gives an example of an intertwining of Brownian motion on the circle. We will generalize this theorem to compact Riemannian manifolds and construct strong stationary times. We will study the phase transition in high dimension. Finally, we will consider measure-valued duals.
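In the Diaconis-Fill framework alluded to here (one common convention), a strong stationary time $T$ for $(X_t)$ with stationary law $\pi$ satisfies

$$\mathbb{P}(T\le t,\; X_T\in A) \;=\; \mathbb{P}(T\le t)\,\pi(A) \quad\text{for all } t \text{ and } A,$$

and the dual process $(D_t)$ is tied to $(X_t)$ by an intertwining relation $\Lambda P_t = Q_t \Lambda$ between the two semigroups, where $\Lambda$ is a Markov kernel (the link) from the dual state space to that of $X$; $T$ is then realized as the hitting time of an absorbing state by the dual.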
This talk is meant as an introduction, by example, to the theory of causality developed since the late 1990s by Judea Pearl, which earned him part of his 2011 ACM Turing Award, the computer-science equivalent of the Abel Prize. We will consider a classical model whose assumptions are encoded in a causal graph; it contains, in particular, an unobservable common cause and a variable that cannot be controlled for ethical reasons. Adopting a computer-science vocabulary, we will work out in detail a query about the execution traces of a program that cannot be run, using statistics on the traces of another program that can. The elements encountered in this analysis will then be used to present the overall architecture of Pearl's approach. Time permitting, we will discuss a few aspects of the probabilistic computations arising in this context, which can often be reformulated purely in terms of graph theory.
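To give a flavor of the kind of computation involved (a standard identity of Pearl's calculus, not necessarily the exact example of the talk): when the common cause $U$ of $X$ and $Y$ is unobserved but a mediator $Z$ between $X$ and $Y$ is observed, the front-door adjustment expresses the interventional query purely in terms of observational statistics:

$$P\big(y \mid \mathrm{do}(x)\big) \;=\; \sum_{z} P(z\mid x)\, \sum_{x'} P\big(y \mid x',z\big)\, P(x').$$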
IOP seminar slot (shared session)
Joint seminar with OptimAI.
This talk focuses on models for multivariate count data, with emphasis on species abundance data. Two approaches emerge in this framework: the Poisson log-normal (PLN) and the Tree Dirichlet multinomial (TDM) models. The first uses a latent Gaussian vector to model dependencies between species, whereas the second models dependencies directly on the observed abundances. The TDM model assumes that the total abundance is fixed, and is therefore often used for microbiome data, since the sequencing depth (in RNA-seq) varies from one observation to another, making the total abundance hard to interpret. We propose to generalize the TDM model in two ways: by relaxing the fixed total abundance and by using the Polya distribution instead of the Dirichlet multinomial. This family of models corresponds to Polya urn models with a random number of draws and will be called Polya splitting distributions. In the first part, I will present the probabilistic properties of such models, with a focus on marginals and the associated probabilistic graphical model. Then it will be shown that these models emerge as stationary distributions of a multivariate birth-death process under simple parametric assumptions on the birth-death rates. These assumptions are related to the neutral theory of biodiversity, which assumes no biological interaction between species. Finally, the statistical aspects of Polya splitting models will be presented: the regression framework, the inference, the incorporation of a partition tree structure, and two applications to real data.
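For reference, the Dirichlet-multinomial (Polya) distribution underlying the TDM model, for a fixed total abundance $n=\sum_{i} y_i$ and parameter $\alpha\in(0,\infty)^d$, has probability mass function

$$P(Y=y) \;=\; \frac{n!}{\prod_{i=1}^d y_i!}\;\frac{\Gamma\big(\sum_i\alpha_i\big)}{\Gamma\big(n+\sum_i\alpha_i\big)}\;\prod_{i=1}^d \frac{\Gamma(y_i+\alpha_i)}{\Gamma(\alpha_i)};$$

the Polya splitting models discussed here relax the constraint that $n$ is fixed by letting the number of draws itself be random.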
Deep learning has revolutionised image processing and is often considered to outperform classical approaches based on accurate modelling of the image formation process. In this presentation, we will discuss the interplay between model-based and learning-based paradigms, and show that hybrid approaches hold great promise for scientific imaging, where interpretability and robustness to real-world degradations are important. We will present two applications: super-resolution and high-dynamic-range imaging, and exoplanet detection from high-contrast direct imaging.
Don't forget to subscribe to the maths-ia mailing list!
https://listes.math.u-bordeaux.fr/wws/subscribe/mathsia?previous_action=info
In statistical learning, many analyses and methods rely on optimization, including stochastic versions introduced, for example, to overcome the intractability of the objective function or to reduce the computational cost of a deterministic optimization step.
In 1951, H. Robbins and S. Monro introduced a novel iterative algorithm, named "Stochastic Approximation", for computing the zeros of a function defined by an expectation with no closed-form expression. This algorithm produces a sequence of iterates by replacing, at each iteration, the unknown expectation with a Monte Carlo approximation based on a single sample. The method was later generalized: it is a stochastic algorithm designed to find the zeros of a vector field when only stochastic oracles of this vector field are available.
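A minimal sketch of the Robbins-Monro iteration just described, for finding a zero of $h(\theta)=\mathbb{E}[H(\theta,X)]$ from single-sample oracles (the step-size schedule is an illustrative choice):

```python
import numpy as np

def robbins_monro(oracle, theta0, n_iter=10_000, c=1.0):
    """Stochastic Approximation: theta_{n+1} = theta_n - gamma_{n+1} * H(theta_n, X_{n+1}).

    `oracle(theta)` returns a random estimate of the vector field at theta
    (unbiased in the classical case, possibly biased 'beyond the gradient case').
    """
    theta = np.asarray(theta0, dtype=float).copy()
    for n in range(1, n_iter + 1):
        gamma = c / n            # decreasing steps: sum = inf, sum of squares < inf
        theta -= gamma * oracle(theta)
    return theta

# Toy usage: the mean of X is the zero of h(theta) = theta - E[X].
rng = np.random.default_rng(0)
theta_hat = robbins_monro(lambda t: t - rng.normal(loc=3.0), theta0=0.0)
```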
Stochastic Gradient Descent algorithms are the most popular examples of Stochastic Approximation: the oracles come from a Monte Carlo approximation of a large sum. Possibly less popular are the examples "beyond the gradient case", for at least two reasons. First, they rely on oracles that are biased approximations of the vector field, as occurs when biased Monte Carlo sampling is used to define the oracles. Second, the vector field is not necessarily a gradient vector field. Many examples in Statistics, and more generally in statistical learning, are "beyond the gradient case": among them, let us cite compressed stochastic gradient descent, stochastic Majorize-Minimization methods such as the Expectation-Maximization algorithm, and the Temporal Difference algorithm in reinforcement learning.
In this talk, we will show that these "beyond the gradient case" Stochastic Approximation algorithms still converge, even when the oracles are biased, as soon as some parameters of the algorithm are suitably tuned. We will discuss what 'suitably tuned' means when the quality criterion relies on epsilon-approximate stationarity. We will also comment on the efficiency of the algorithm through its sample complexity. These analyses are based on non-asymptotic convergence bounds in expectation: we will present a unified method to obtain such bounds for a large class of Stochastic Approximation methods, covering both the gradient case and the beyond-the-gradient case. Finally, a variance-reduction technique will be described and its efficiency illustrated.
Stochastic optimization problems naturally appear in many application areas, including machine learning. Our goal is to go further in the analysis of the Stochastic Average Gradient Accelerated (SAGA) algorithm. To achieve this, we introduce a new $\lambda$-SAGA algorithm which interpolates between Stochastic Gradient Descent ($\lambda=0$) and the SAGA algorithm ($\lambda=1$). Firstly, we investigate the almost sure convergence of this new algorithm with decreasing steps, which allows us to avoid the restrictive strong convexity and Lipschitz-gradient hypotheses on the objective function. Secondly, we establish a central limit theorem for the $\lambda$-SAGA algorithm. Finally, we provide non-asymptotic $L^p$ rates of convergence.
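One natural way to realize such an interpolation is sketched below (my own illustrative parametrization; the talk's exact definition of $\lambda$-SAGA may differ): the stored-gradient correction of SAGA is scaled by $\lambda$, so that $\lambda=0$ gives plain SGD and $\lambda=1$ gives SAGA.

```python
import numpy as np

def lambda_saga(grad_i, theta0, n, lam=1.0, gamma=1e-2, n_iter=10_000, seed=0):
    """grad_i(i, theta): gradient of the i-th term of a finite-sum objective with n terms."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    table = np.stack([grad_i(i, theta) for i in range(n)])  # stored per-sample gradients
    avg = table.mean(axis=0)
    for _ in range(n_iter):
        i = rng.integers(n)
        g = grad_i(i, theta)
        # lam = 0: plain SGD direction; lam = 1: the SAGA variance-reduced direction.
        theta -= gamma * (g - lam * table[i] + lam * avg)
        avg += (g - table[i]) / n
        table[i] = g
    return theta
```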
In this talk, I will focus on Brownian motion in simple sub-Riemannian settings: the Heisenberg group and the Carnot groups of rank 2. We propose a construction of a coupling of two Brownian motions at a fixed time. This construction is based on a Legendre decomposition of the standard Brownian motion and of its Lévy area. We then deduce precise estimates on the decay in total variation between the laws of the Brownian motions and, via a change-of-probability technique, a Bismut-type integration-by-parts formula as well as reverse Poincaré-type regularization estimates for the associated semigroup. Joint work with Marc Arnaudon, Magalie Bénéfice and Delphine Féral.
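The total-variation estimates mentioned above rest on the elementary coupling inequality: for any coupling $(X,Y)$ of two laws $\mu$ and $\nu$,

$$\|\mu-\nu\|_{TV} \;\le\; \mathbb{P}(X\neq Y)$$

(up to a factor $2$ depending on the convention for $\|\cdot\|_{TV}$), so a coupling of the two Brownian motions at a fixed time $t$ with a high probability of coincidence yields a quantitative bound on the total-variation distance between their laws at that time.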
Many problems, especially in machine learning, can be formulated as optimization problems. Using optimization algorithms such as stochastic gradient descent or ADAM has become a cornerstone for solving them. However, for many practical cases, theoretical proofs of their efficiency are lacking. In particular, it has been empirically observed that adding a momentum mechanism to stochastic gradient descent often allows these optimization problems to be solved more efficiently. In this talk, we introduce a condition, linked to a measure of gradient correlation, that allows us to theoretically characterize when this acceleration occurs.
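For concreteness, the momentum mechanism in question is the usual heavy-ball modification of SGD, sketched below with illustrative hyperparameters; the condition discussed in the talk concerns the correlation between the successive stochastic gradients accumulated in the momentum buffer.

```python
import numpy as np

def sgd_momentum(grad_i, theta0, n, gamma=1e-2, beta=0.9, n_iter=10_000, seed=0):
    """SGD with heavy-ball momentum on a finite-sum objective with n terms."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)
    for _ in range(n_iter):
        i = rng.integers(n)
        m = beta * m + grad_i(i, theta)   # exponential accumulation of past gradients
        theta -= gamma * m                # acceleration when successive gradients correlate
    return theta
```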
Statistical estimation in a geometric context has become an increasingly important issue in recent years, not least due to the development in ML of non-linear dimension reduction techniques, which involve projecting high-dimensional data onto a much lower-dimensional submanifold. The search for statistical guarantees justifying both the use and the effectiveness of these algorithms is now a much-studied area. In this talk, we will take a geometric view of the issue and see how some usual curvature quantities translate into algorithmic guarantees. First, we will see that upper bounds on sectional curvature give good properties for barycenter estimation; then we will see that a lower bound on Ricci curvature implies the existence of depth points, giving rise to robust statistical estimators. This is based on joint works with Victor-Emmanuel Brunel (ENSAE Paris) and Shin-ichi Ohta (Osaka University).
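For the first part, the object being estimated is the (Fréchet) barycenter of a measure $\mu$ on the manifold $M$,

$$b(\mu) \;\in\; \operatorname*{argmin}_{y\in M}\; \int_M d(y,x)^2\, d\mu(x),$$

estimated from samples by its empirical counterpart; upper bounds on the sectional curvature are what provide the convexity-type properties of $y\mapsto d(y,x)^2$ exploited in this estimation.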