Cardiac arrhythmia is one of the world's leading causes of death. The electrical rhythm of the heart is disturbed, and one treatment is cardiac ablation, which aims to electrically isolate certain parts of the heart. The bidomain model is a very classical mathematical model for cardiac electrophysiology. However, it turns out to be unsuitable for describing the application of short and intense electric pulses as used in pulsed electric field ablation (PFA), a therapeutic innovation in the context of cardiac ablation. We propose a macroscopic model designed to account for PFA and be compatible with cardiac electrophysiology. After deriving it from the cell-scale equations of electrophysiology using two-scale convergence, we present some numerical simulations, followed by an overview of the perspectives from the mathematical point of view (proof of the convergence, analysis of the PDE system), from the modeling point of view (ionic term, fiber orientation) and from the simulation point of view (sensitivity analysis, data assimilation). Particular emphasis will be placed on the mathematical homogenization process (two-scale convergence).
In many applications in signal/image processing and statistics, we wish to recover information from a limited number of linear measurements, i.e., solve an underdetermined system. Surprisingly, in the mid-2000s E. Candès, J. Romberg and T. Tao showed that, under an assumption of underlying sparsity, one can recover a signal from a small number of measurements, largely surpassing the previous limits based on the Shannon-Nyquist sampling theorem. This can be viewed as the birth of compressive sensing, a rich topic that draws on a wide array of branches of mathematics, e.g., linear algebra, random matrices, convex analysis, optimization. In this talk, we shall discuss some of the main topics regarding compressive sensing and provide an overview of the conditions under which sparse vectors and low-rank matrices may be recovered from measurements. Finally, we shall see how this may relate to image decomposition.
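As an illustration of this recovery phenomenon (my own minimal sketch, not material from the talk: the dimensions, the Gaussian measurement ensemble and the use of a generic LP solver are all choices made for the example), basis pursuit recovers a sparse vector from far fewer measurements than unknowns:

```python
# Basis pursuit: min ||x||_1 subject to Ax = b, recast as a linear program
# by splitting x = xp - xn with xp, xn >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 200, 60, 5                        # unknowns, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
b = A @ x_true                              # m << n linear measurements

c = np.ones(2 * n)                          # sum(xp) + sum(xn) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))  # near machine precision
```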
This talk is devoted to the study of Schrödinger equations in the presence of resonant interactions that can lead to energy transfer. When the domain is a Diophantine torus we prove that, over very long time scales, the majority of small solutions in high-regularity Sobolev spaces do not exchange energy from low to high frequencies. We first provide context on Birkhoff normal form approaches to the study of the long-time dynamics of solutions to Hamiltonian partial differential equations. Then, we introduce the induction-on-scales normal form, central to our proof. Throughout the iteration, we ensure appropriate non-resonance properties while modulating the frequencies (of the linearized system) with the amplitude of the Fourier coefficients of the initial data. Our main challenge is then to address very small divisor problems and to describe the set of admissible initial data. The results are based on a joint work with Joackim Bernier, and an ongoing joint work with Gigliola Staffilani.
Chebyshev's bias is the phenomenon that the number of primes $p \leq x$ congruent to a non-square $a \bmod q$, denoted $\pi(x;q,a)$, has a strong tendency to be larger than the number of primes congruent to a square $b \bmod q$, denoted $\pi(x;q,b)$. This bias was quantitatively proven by Rubinstein and Sarnak in 1994, under some hypotheses, including the Generalized Riemann Hypothesis. In our talk, we will explore their results and extend the discussion to the analogous concept of Chebyshev's bias in number fields.
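To see the bias concretely (an illustrative computation, not part of the Rubinstein-Sarnak results), one can count primes in the two residue classes mod 4; the non-square class 3 stays ahead for most $x$, the first lead change occurring at $x = 26861$:

```python
# Count primes p <= x with p = 3 mod 4 (non-square class) versus
# p = 1 mod 4 (square class) using a simple Eratosthenes sieve.
def primes_up_to(n):
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return (p for p in range(2, n + 1) if is_prime[p])

x = 100_000
counts = {1: 0, 3: 0}
for p in primes_up_to(x):
    if p % 4 in counts:
        counts[p % 4] += 1
print(counts[3], counts[1])   # pi(x;4,3) > pi(x;4,1): Chebyshev's bias
```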
Decomposing an image into meaningful components is a challenging inverse problem in image processing and has been widely applied to cartooning, texture removal, denoising, soft shadow/spotlight removal, detail enhancement, etc. In this talk, I will review the different approaches and models proposed over the years to tackle this problem, focusing on the crucial role played by parameter selection. Then, I will present a two-stage variational model for the additive decomposition of images into piecewise constant, smooth, textured and white noise components, and show numerical results on the decomposition of textured images corrupted by several kinds of additive white noise.
The agenda will be as follows:
1. Approval of the minutes of the scientific council meeting of 20 February (vote)
2. Scientific talk by Raphaël Loubère (CSM): "Chaire PROVE et équipe projet MONADE"
3. Review of ADT/HDR requests
4. Information from the management
5. Any other business.
In this talk we are going to explore the concept of fractals! What are they? How can they possibly have a non-integer dimension? Is this useful? (of course not) The tools we will use come from undergraduate-level measure theory and a bit of topology. Stay until the end and you will be able to enjoy some nice pictures o:
Cf. https://plmbox.math.cnrs.fr/f/136ed3186ea241e8b980/
In invariant theory, one is sometimes led to study the polynomiality of the algebra of invariants ${\mathbb C}[V]^G$ of polynomial functions on a finite-dimensional complex vector space $V$, under the action of a linear algebraic group $G$. For example, if $G$ is connected and semisimple, acting by the adjoint (or coadjoint) action on its Lie algebra $V=\mathfrak{g}$ (isomorphic to its dual), a celebrated theorem of Chevalley shows that the algebra of invariants ${\mathbb C}[V]^G$ is a polynomial algebra. On the other hand, a theorem of Kostant establishes an algebra isomorphism between ${\mathbb C}[\mathfrak{g}]^G$ and the algebra of polynomial functions on a "Kostant slice", by restriction of functions to this slice: this yields what may also be called a "Weierstrass section" for ${\mathbb C}[\mathfrak{g}]^G$. I will first review some examples and counter-examples of polynomiality of certain algebras of invariants obtained by letting $G$ act on the dual of its Lie algebra by the coadjoint action, and give some examples of Weierstrass sections obtained in the case of certain parabolic subalgebras. I will then define the Inönü-Wigner contraction of a parabolic subalgebra $\mathfrak{p}$ of a simple Lie algebra, which can be viewed as a certain degeneration of $\mathfrak{p}$. Building on techniques used for parabolic subalgebras, I will try to explain how one can obtain (semi-)invariants in the case where $V$ is the dual of the Inönü-Wigner contraction of a parabolic subalgebra, acted on by the adjoint group of the contraction. In particular, for the Inönü-Wigner contractions of certain maximal parabolic subalgebras (notably in type B), I will give Weierstrass sections for the corresponding algebras of semi-invariants, which proves in particular the polynomiality of these algebras of semi-invariants. This is work in progress, part of which can be found on arXiv:
https://arxiv.org/abs/2310.06761
In this talk I am interested in formulas describing the low-lying eigenvalues of the Witten Laplacian $\Delta_V = -h^2\Delta + | V^{\prime} |^2 - h V^{\prime \prime}$. The case where $V$ is a Morse function has been extensively studied, and here I try to obtain similar results when $V$ has some degeneracy. At the end of the presentation I will also give an example of new behaviors that were not observed in the Morse case.
Generally, polynomial systems that arise in algebraic cryptanalysis have extra structure compared to generic systems, which comes from the algebraic modelling of the cryptographic scheme. Sometimes, part of this extra structure can be captured with polynomial rings with non-standard grading. For example, in the Kipnis-Shamir modelling of MinRank one considers the system over a bi-graded polynomial ring instead. This allows for better approximations of the solving degree of such systems when using Gröbner basis algorithms.
In this talk, I will present ongoing work in which this idea is extended to multi-graded polynomial rings. Furthermore, I will show how we can use this grading to tailor existing algorithms to use this structure and speed up computation.
Del Pezzo surfaces and their automorphism groups play an important role in the study of the algebraic subgroups of the Cremona group of the projective plane.
Over an algebraically closed field, it is classical that a del Pezzo surface is isomorphic either to $\mathbb{P}^{1} \times \mathbb{P}^{1}$ or to the blow-up of $\mathbb{P}^{2}$ at up to $8$ points in general position, and in this case the automorphisms of del Pezzo surfaces (of any degree) have been described. In particular, there is a unique isomorphism class of del Pezzo surfaces of degree $5$ over an algebraically closed field. In this talk, we will focus on del Pezzo surfaces of degree $5$ defined over a perfect field. In this case, there are many additional surfaces (as can already be seen when the base field is the field of real numbers), and the classification as well as the description of the automorphism groups of these surfaces over a perfect field $\mathbf{k}$ reduce to understanding the actions of the Galois group $\operatorname{Gal}(\overline{\mathbf{k}}/\mathbf{k})$ on the graph of $(-1)$-curves.
Consider a control system $\partial_t f + Af = Bu$. Assume that $\Pi$ is a projection and that you can control both the systems
$$\partial_t f + \Pi Af = \Pi Bu, \qquad \partial_t f + (1-\Pi)Af = (1-\Pi)Bu.$$
Can you conclude that the full system itself is controllable? We cannot expect it in general. But in a joint work with Andreas Hartmann, we managed to do it for the half-heat equation. It turns out that the property we need in our case is the following: if $\Omega$ satisfies some cone condition, the set $\{f+g : f, g \in L^2(\Omega),\ f \text{ holomorphic},\ g \text{ anti-holomorphic}\}$ is closed in $L^2(\Omega)$.
The first proof, by Friedrichs, consists of long computations and is very much "complex analysis". But a later proof by Shapiro uses quite general coercivity estimates proved by Smith, whose proof uses some tools from algebra: Hilbert's Nullstellensatz and/or primary ideal decomposition.
In this first talk, we will introduce the algebraic tools needed and present Smith's coercivity inequalities. In a second talk, we will explain how useful these inequalities are to study the control properties of the half-heat equation.
We consider the standard Ginzburg-Landau system for $N$-dimensional maps defined in the unit ball, for a parameter $\varepsilon>0$. For a boundary datum corresponding to a vortex of topological degree one, the aim is to prove the (radial) symmetry of the ground state of the system. We show this conjecture in every dimension $N\geq 7$ and for every $\varepsilon>0$, and we also prove it in dimensions $N=4,5,6$ provided that the admissible maps are curl-free. This is part of joint works with L. Nguyen, M. Rus, V. Slastikov and A. Zarnescu.
We will make a tour of related concepts whose motivation lies in quantum information theory. We consider the detection of entanglement in unitarily-invariant states, a class of positive (but not completely positive) multilinear maps, and the construction of tensor polynomial identities. The results are established through the use of commutative and noncommutative Positivstellensätze and the representation theory of the symmetric group.
We discuss a new swarm-based gradient descent (SBGD) method for non-convex optimization. The swarm consists of agents, each identified with a position $x$ and a mass $m$. There are three key aspects to the SBGD dynamics: (i) a persistent transfer of mass from agents at higher ground to agents at lower ground; (ii) a random marching direction, aligned with the steepest gradient descent; and (iii) a time-stepping protocol which decreases with $m$.
The interplay between positions and masses leads to a dynamic distinction between 'heavier leaders' near local minima and 'lighter explorers', which search for improved positions with large(r) time steps. Convergence analysis and numerical simulations demonstrate the effectiveness of the SBGD method as a global optimizer.
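The following is a minimal sketch of this kind of dynamics (my simplified reading for illustration, not the authors' exact protocol: the test function, the mass-transfer rate and the step-size rule are all assumptions, and the randomized choice of marching direction is omitted):

```python
import numpy as np

def f(x):      # Rastrigin function: a standard non-convex test landscape
    return float(np.sum(x**2 + 10.0 * (1.0 - np.cos(2.0 * np.pi * x))))

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

rng = np.random.default_rng(1)
n_agents, dim = 20, 2
X = rng.uniform(-5.0, 5.0, size=(n_agents, dim))   # agent positions
m = np.full(n_agents, 1.0 / n_agents)              # agent masses

for _ in range(2000):
    values = np.array([f(x) for x in X])
    leader = int(np.argmin(values))
    # (i) mass flows from agents at high ground toward the current leader
    shed = 0.05 * m
    shed[leader] = 0.0
    m = m - shed
    m[leader] += shed.sum()
    # (ii)-(iii) descend the gradient, with step size decreasing in mass:
    # heavy "leaders" refine locally, light "explorers" take larger steps
    for i in range(n_agents):
        h = 0.002 + 0.05 * (1.0 - m[i] / m.max())
        X[i] = X[i] - h * grad_f(X[i])

print("best value found:", min(f(x) for x in X))   # global minimum is 0 at x = 0
```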
We give a light talk on optimality of shapes in geometry and physics. First, we recall the classical geometric results that the disk has the largest area (respectively, the smallest perimeter) among all domains of a given perimeter (respectively, area). Second, we recall that the circular drum has the lowest fundamental tone among all drums of a given area or perimeter, and reinterpret the result in a quantum-mechanical language of nanostructures. In parallel, we discuss the analogous optimality of the square among all rectangles in geometry and physics. As the main body of the talk, we present our recent attempts to prove the same spectral-geometric properties in relativistic quantum mechanics, where the mathematical model is a matrix-differential (Dirac) operator with complex (infinite-mass) boundary conditions. It is frustrating that such a deceptively simple and expected result remains unproved and apparently out of the reach of current mathematical tools.
Neural style transfer (NST) is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image. It is particularly impressive when it comes to transferring style from a painting to an image. NST was originally achieved by solving an optimization problem to match the global statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate NST and produce larger images. However, our investigation shows that these accelerated methods all compromise the quality of the produced images in the context of painting style transfer. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution (UHR) images, enabling multiscale NST at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons, as well as a user study, show that our method produces style transfer of unmatched quality for such high-resolution painting styles. By a careful comparison, we show that state-of-the-art fast methods are still prone to artifacts, thus suggesting that fast painting style transfer remains an open problem.
Joint work with Lara Raad, José Lezama and Jean-Michel Morel.
The Beurling--Selberg extremal approximation problems aim to find optimal one-sided bandlimited approximations of a target function of bounded variation. We present an extension of the Beurling--Selberg problems, which we call "of higher order," where the approximation residuals are constrained to faster decay rates in the asymptotic regime, ensuring the smoothness of their Fourier transforms. Furthermore, we harness the solution's properties to bound the extremal singular values of confluent Vandermonde matrices with nodes on the unit circle. As an application to sparse super-resolution, this enables the derivation of a simple minimal resolvable distance, which depends only on the properties of the point-spread function, above which stability of super-resolution can be guaranteed.
The coupling of coastal wave models, such as Boussinesq-type (BT) and Saint-Venant (SV) equations, has been explored since the 1990s. Despite numerous models and coupling examples, the literature exhibits significant disagreement regarding induced artifacts and methods for their analysis. This work aims to elucidate these issues, proposing explanations and a method for evaluating and comparing coupling techniques. We ground our explanation in the mathematical properties of each model's Cauchy and half-line problems, highlighting the sensitivity of these models to numerical artifacts. Additionally, we demonstrate how one-way models provide insights into expected physical effects, unexpected artifacts, and errors relative to 3D models. We demonstrate this analysis with linearized models, where we establish the well-posedness of a popular coupling, characterize analytically the "coupling error" in terms of wave reflections, and prove its asymptotic behavior in shallow water. We will discuss how these insights can be applied to other linear/nonlinear models, providing a foundation for the evaluation and comparison of new coupled coastal wave models.
Proof assistants are software tools for writing mathematical statements and their proofs, the compilation of the whole guaranteeing (up to minute details) the correctness of the ensemble. After having been promoted mainly by the computer science community, they are the object of growing enthusiasm among mathematicians.
A few months ago, I formalized in Lean/mathlib a proof of a classical, elementary theorem of group theory: the simplicity of the alternating group on at least 5 letters, via an Iwasawa criterion generally used to prove the simplicity of geometric groups.
I will present this work, its context, and some perspectives. (No familiarity with proof assistants is required.)
We will consider the interaction between a diatomic molecule and a laser pulse, and we will see how to compute semiclassically the probability that the molecule changes rotational state. We will focus in particular on the computation of the phase index, which is crucial for an accurate account of quantum interferences.
A singularity of dimension $d$ is quasi-ordinary with respect to a finite projection $X \to {\mathbb C}^d$ if the discriminant of the projection is a normal crossings divisor. Quasi-ordinary singularities are at the heart of Jung's approach to resolution of singularities in characteristic zero. In positive characteristics, they are not very useful from the point of view of resolution of singularities, the problem of resolving them being almost as complicated as the problem of resolution of singularities in general. Using a weighted version of Hironaka's characteristic polyhedron (or simply the geometry of the equations) and successive embeddings into affine spaces of "large" dimensions, we introduce the notion of Teissier singularities, which coincide with quasi-ordinary singularities in characteristic zero, but differ from them in positive characteristics. We prove that a Teissier singularity defined over a field of positive characteristic is the special fiber of an equisingular family over a curve of mixed characteristic whose generic fiber (hence of characteristic zero) has quasi-ordinary singularities. Here, the equisingularity of the family corresponds to the existence of a simultaneous embedded resolution.
Joint work with Bernd Schober.
The regular model of a curve is a key object in the study of the arithmetic of the curve, as information about the special fiber of a regular model provides information about its generic fiber (such as rational points through the Chabauty-Coleman method, the index, the Tamagawa number of the Jacobian, etc.). Every curve has a somewhat canonical regular model obtained from the quotient of a regular semistable model by resolving only singularities of a special type called quotient singularities. We will discuss in this talk what is known about the resolution graphs of $\mathbb{Z}/p\mathbb{Z}$-quotient singularities in the wild case, when $p$ is also the residue characteristic. The possible singularities that can arise in this process are not yet completely understood, even in the case of elliptic curves in residue characteristic 2.
In this talk we will discuss resonances for a quantum graph whose compact part is attached at a vertex to an infinite edge. The transmission conditions at this vertex depend on a small parameter, and we prove, under certain assumptions on the geometry of the graph, the existence of a family of resonances whose imaginary parts tend to infinity.
This work is motivated by a question from experimental physics, where such families of resonances have been observed. I will show how, with elementary mathematical tools, one can prove the existence and localization of these resonances.
This is an interdisciplinary work in collaboration with Maxime Ingremeau, Ulrich Kuhl, Olivier Legrand, Junjie Lu (Univ. Nice).
Monsters populate mathematics: the topologist's sine curve, the Vitali set, the Weierstrass function… These counter-examples to naive intuitions often have in common that they are defined either in a convoluted way or with an oscillating function like the sine. The o-minimal paradigm allows us to forget these oddities and to make our first intuitions true, by considering only objects that have, in some sense, a "reasonable" definition. What is an o-minimal structure? What examples do we know of? What happens there? How do o-minimal structures act in complex geometry, in number theory, in optimization?
In this talk, I will present the results of a collaboration with Benjamin McKenna on the injective norm of large random Gaussian tensors and uniform random quantum states and, time permitting, describe some of the context underlying this work. The injective norm is a natural generalization to tensors of the operator norm of a matrix and appears in multiple fields. In quantum information, the injective norm is one important measure of genuine multipartite entanglement of quantum states, known as geometric entanglement. In our recent preprint, we provide high-probability upper bounds, in different regimes, on the injective norm of real and complex Gaussian random tensors, which correspond to lower bounds on the geometric entanglement of random quantum states, and to bounds on the ground-state energy of a particular multispecies spherical spin glass model. Our result represents a first step towards solving an important question in quantum information that has been part of folklore.
In this talk, we will introduce a new exact algorithm to solve two-stage stochastic linear programs. Based on the multicut Benders reformulation of such problems, with one subproblem per scenario, this method relies on a partition of the subproblems into batches. The key idea is to solve, at most iterations, only a small proportion of the subproblems, by detecting as soon as possible that a first-stage candidate solution cannot be proven optimal. We also propose a general framework to stabilize our algorithm, and show its finite convergence and exact behavior. We report an extensive computational study on large-scale instances from the stochastic optimization literature that shows the efficiency of the proposed algorithm compared to nine alternative algorithms from the literature. We also obtain significant additional computational time savings using the primal stabilization schemes.
Accurately simulating the evolution of interfaces separating different media is a crucial issue in many applications (multi-fluid flows, fluid-structure interaction, etc.). The MOF (moment-of-fluid) method, an extension of the VOF (volume-of-fluid) method, uses a cellwise affine reconstruction of the interfaces based on the volume fractions and the centroids of each phase. This interface reconstruction is the solution of a minimization problem under a volume constraint. In the literature, this problem is solved by geometric computations on polyhedra, which have a significant cost in 3D. In this talk we propose a new approach in which the objective function and its derivatives are computed fully analytically in the case of rectangular hexahedral and tetrahedral cells in 3D. Numerical results show a significant gain in computation time.
The existence of canonical Kähler metrics (Kähler-Einstein, constant scalar curvature, etc.) in a given cohomology class of a compact Kähler manifold admits a variational formulation as the Euler-Lagrange equation of certain functionals. Thanks to the deep work of Darvas-Rubinstein and Chen-Cheng, it is moreover known that these functionals admit critical points (hence canonical metrics) if and only if they satisfy a linear growth condition. After reviewing these fundamental objects, I will explain how this characterization makes it possible to generalize works of Arezzo-Pacard and Seyyedali-Szekelyhidi on the stability of such metrics under blow-up of the manifold. This is joint work with Mattias Jonsson and Antonio Trusiani.
Algebraic curves over a finite field $\mathbb{F}_q$ have been a source of great fascination, ever since the seminal work of Hasse and Weil in the 1930s and 1940s. Many fruitful ideas have arisen out of this area, where number theory and algebraic geometry meet, and many applications of the theory of algebraic curves have been discovered during the last decades.
A very important example of such an application was provided in 1977-1982 by Goppa, who found a way to use algebraic curves in coding theory. The key point of Goppa's construction is that the code parameters are essentially expressed in terms of the features of the curve, such as the number $N_q$ of $\mathbb{F}_q$-rational points and the genus $g$. In this light, Goppa codes with good parameters are constructed from curves with large $N_q$ with respect to their genus $g$.
Given a smooth projective algebraic curve of genus $g$ over $\mathbb{F}_q$, an upper bound for $N_q$ is a corollary of the celebrated Hasse-Weil theorem,
$$N_q \leq q+ 1 + 2g\sqrt{q}.$$
Curves attaining this bound are called $\mathbb{F}_q$-maximal. The Hermitian curve is a key example of an $\mathbb{F}_q$-maximal curve, as it is the unique curve, up to isomorphism, attaining the maximum possible genus of an $\mathbb{F}_q$-maximal curve.
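As a concrete check (standard facts, with the base field written as $\mathbb{F}_{q^2}$ so that the square root in the bound is an integer): the Hermitian curve $y^q + y = x^{q+1}$ has genus $g = q(q-1)/2$ and exactly $q^3+1$ points rational over $\mathbb{F}_{q^2}$, so it attains the Hasse-Weil bound,
$$N_{q^2} = q^2 + 1 + 2\cdot \frac{q(q-1)}{2}\cdot q = q^3 + 1.$$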
It is a result commonly attributed to Serre that any curve which is $\mathbb{F}_q$-covered by an $\mathbb{F}_q$-maximal curve is still $\mathbb{F}_q$-maximal. In particular, quotient curves of $\mathbb{F}_q$-maximal curves are $\mathbb{F}_q$-maximal. Many examples of $\mathbb{F}_q$-maximal curves have been constructed as quotient curves of the Hermitian curve by choosing a subgroup of its very large automorphism group.
It is a challenging problem to construct maximal curves that cannot be obtained in this way, as well as to construct maximal curves with many automorphisms (in order to use the machinery described above). A natural question also arises: given two maximal curves over the same finite field, how can one decide whether they are isomorphic or not? One way to try to answer this question is to look at the birational invariants of the two curves, that is, their properties that are invariant under isomorphism.
In this talk, we will describe our main contributions to the theory of maximal curves over finite fields and their applications to coding theory. In relation with the question described above, the behaviour of birational invariants of maximal curves will also be discussed during the talk.
In this talk, we present results on the eigenvalue distribution for perturbed magnetic Dirac operators in two dimensions. We derive third-order asymptotic formulas that incorporate a geometric property of the perturbation's support. Notably, our approach allows us to consider some perturbations that do not necessarily have a fixed sign, which is one of the main novelties of our work.
This is part of joint work with Vincent Bruneau.
Gröbner bases lie at the forefront of the algorithmic treatment of polynomial systems and ideals in symbolic computation. They are defined as special generating sets of polynomial ideals which allow one to decide the ideal membership problem via a multivariate version of polynomial long division. Given a Gröbner basis for a polynomial ideal, a lot of geometric and algebraic information about the polynomial ideal at hand can be extracted, such as the degree, dimension or Hilbert function.
Notably, Gröbner bases depend on two parameters: the polynomial ideal which they generate and a monomial order, i.e. a certain kind of total order on the set of monomials of the underlying polynomial ring. The geometric and ideal-theoretic information that can be extracted from a Gröbner basis then depends on the chosen monomial order; in particular, the lexicographic one allows us to solve a polynomial system.
Such a lexicographic Gröbner basis is usually computed through a change-of-order algorithm, for instance the seminal FGLM algorithm. In this talk, I will present progress made on change-of-order algorithms: faster variants in the generic case, complexity estimates for systems of critical values, and the computation of colon ideals or of generic fibers.
This is based on different joint works with A. Bostan, Ch. Eder, A. Ferguson, R. Mohr, V. Neiger and M. Safey El Din.
Adjustable robust optimization problems, as a subclass of multi-stage optimization under uncertainty, constitute a class of problems that are very difficult to solve in practice. Although the exact solution of these problems may be possible in certain special cases, no exact solution algorithms are known for the general case. Instead, approximate solution methods have been developed, often restricting the functional form of the recourse actions; these restrictions are generally referred to as "decision rules". In this talk, we will present a review of existing decision rule approximations, including affine and extended affine decision rules, uncertainty set partitioning schemes and finite adaptability. We will discuss the reformulations and solution algorithms that result from these approximations. We will point out existing challenges in the practical use of these decision rules, and identify current and future research directions. When possible, we will emphasize the connections to the multi-stage stochastic programming literature.
We study the growth of the resolvent of a Toeplitz operator $T_b$, defined on the Hardy space, in terms of the distance to its spectrum $\sigma(T_b)$. We are primarily interested in the case when the symbol $b$ is a Laurent polynomial (i.e., the matrix $T_b$ is banded). We show that for an arbitrary such symbol the growth of the resolvent is quadratic, and under a certain additional assumption it is linear. We also prove the quadratic growth of the resolvent for a certain class of non-rational symbols.
This is a joint work with S. Kupin and A. Vishnyakova.
To an algebraic surface $S$ one associates its group of birational transformations $\mathrm{Bir}(S)$. These groups, together with their algebraic and dynamical structures, have been studied in depth over the last decades. In this talk we will see a positive answer to a question of Charles Favre concerning subgroups all of whose elements are of a certain type, called algebraic. I will explain why this technical result is interesting, and I will use it to describe dynamical properties of finitely generated subgroups of $\mathrm{Bir}(S)$. This is joint work with Anne Lonjou and Piotr Przytycki.
The Birch and Swinnerton-Dyer conjecture predicts a link between the rational points of an abelian variety and the special values of its $L$-function. This conjecture is notoriously difficult, so we will first see how to attack it through an intermediate conjecture in which one focuses on a prime number $p$. Then, we will see how, in the case of abelian surfaces, one can obtain a proof of this intermediate conjecture by varying $p$-adically a Galois cohomology class obtained from the cohomology of the Shimura variety of GSp(4).
In this talk, I will present some introductory facts on Hardy-Toeplitz and Bergman-Toeplitz operators. I will also discuss the presence (or absence) of discrete spectrum for a Bergman-Toeplitz operator; this part of the talk will be based on works of Zhao-Zheng et al., 2010-2020.
Witsenhausen's problem asks for the maximum fraction α_n of the n-dimensional unit sphere that can be covered by a measurable set containing no pair of orthogonal points. We extended well-known optimization hierarchies based on the Lovász theta number, like the Lasserre hierarchy, to Witsenhausen's problem and similar problems. We then showed that these hierarchies converge to α_n, and used them to compute the best upper bounds known for α_n in low dimensions.
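As background (the finite-graph prototype of these relaxations, standard material and not specific to this work), the Lovász theta number of a graph $G=(V,E)$ is the semidefinite program
$$\vartheta(G) = \max\{\langle J, X\rangle : \operatorname{tr}(X) = 1,\ X_{ij} = 0 \text{ for } \{i,j\}\in E,\ X \succeq 0\},$$
where $J$ is the all-ones matrix; the hierarchies in the talk extend this type of relaxation to the continuous, sphere-valued setting.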
Agenda:
1) Miscellaneous information
2) Mailing lists
3) Feedback on the "missions" survey and proposals
4) Any other business.
Optimal transport (OT) is a useful metric to compare probability distributions and to compute a pairing given a ground cost. Its entropic regularization variant (eOT) is crucial to obtain fast algorithms and to reflect fuzzy/noisy matchings. This work focuses on inverse optimal transport (iOT), the problem of inferring the ground cost from samples drawn from a coupling that solves an eOT problem. It is a relevant problem that can be used to infer unobserved/missing links, and to obtain meaningful information about the structure of the ground cost yielding the pairing. On the one hand, iOT benefits from convexity; on the other hand, being ill-posed, it requires regularization to handle the sampling noise. This work presents an in-depth theoretical study of the l1 regularization to model, for instance, Euclidean costs with sparse interactions between features. Specifically, we derive a sufficient condition for the robust recovery of the sparsity of the ground cost, which can be seen as a far-reaching generalization of the Lasso's celebrated Irrepresentability Condition. To provide additional insight into this condition, we work out in detail the Gaussian case. We show that as the entropic penalty varies, the iOT problem interpolates between a graphical Lasso and a classical Lasso, thereby establishing a connection between iOT and graph estimation, an important problem in ML.
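For background on the forward problem eOT (standard material, not specific to this work): with discrete marginals, the entropic OT coupling is computed by Sinkhorn's fixed-point iterations, and it is from samples of such a coupling that iOT seeks to infer the cost. A minimal sketch, with a squared-Euclidean ground cost assumed for the example:

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=500):
    """Entropic OT coupling between histograms a, b for cost C and
    regularization eps, via Sinkhorn's matrix-scaling iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))                             # source samples
y = rng.normal(size=(60, 3))                             # target samples
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)  # assumed ground cost
a = np.full(50, 1 / 50)
b = np.full(60, 1 / 60)
P = sinkhorn(a, b, C, eps=0.5)
print(P.sum(), np.abs(P.sum(axis=1) - a).max())          # coupling with marginal a
```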
Two-stage stochastic programs (TSSPs) are a classic model where a decision must be made before the realization of a random event, allowing recourse actions to be performed after observing the random values. Many classic optimization problems, like network flow or facility location problems, become TSSPs if we consider, for example, a random demand.
Benders decomposition is one of the most widely applied methods to solve TSSPs with a large number of scenarios. The main idea behind Benders decomposition is to solve a large problem by replacing the values of the second-stage subproblems with individual variables, and progressively forcing those variables to reach the optimal values of the subproblems by dynamically inserting additional valid constraints, known as Benders cuts. Most traditional implementations add a cut for each scenario (multi-cut) or a single cut that includes all scenarios.
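In generic notation (a standard statement of the multi-cut master problem, not the paper's specific formulation): with first-stage cost $c$, scenario probabilities $p_s$ and subproblem data $(h_s, T_s)$, each variable $\theta_s$ underestimates the recourse value through optimality cuts generated from dual solutions $\pi$ of the scenario subproblems,
$$\min_{x \in X,\, \theta}\; c^\top x + \sum_s p_s\,\theta_s \quad \text{s.t.} \quad \theta_s \ge \pi^\top (h_s - T_s x) \quad \text{for each generated dual solution } \pi \text{ of subproblem } s.$$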
In this paper we present a novel Benders adaptive-cuts method, in which the Benders cuts are aggregated according to a partition of the scenarios, dynamically refined using the LP dual information of the subproblems. This scenario aggregation/disaggregation is based on the Generalized Adaptive Partitioning Method (GAPM). We formalize this hybridization of Benders decomposition and the GAPM by providing sufficient conditions under which an optimal solution of the deterministic equivalent can be obtained in a finite number of iterations. Our new method can be interpreted as a compromise between the Benders single-cut and multi-cut methods, drawing on the advantages of both, by rendering the initial iterations faster (as for single-cut Benders) and ensuring overall faster convergence (as for multi-cut Benders).
Computational experiments on three TSSPs validate these statements, showing that the new method outperforms the other implementations of the Benders method, as well as other standard methods for solving TSSPs, in particular when the number of scenarios is very large.
Given an inner function $\Theta \in H^\infty(\mathbb D)$ and $[g]$ in the quotient algebra $H^\infty/\Theta H^\infty$, its quotient norm is $\|[g]\| := \inf \left\{ \|g+\Theta h\|_\infty : h \in H^\infty \right\}$. We show that when $g$ is normalized so that $\|[g]\|=1$, the quotient norm of its inverse can be made arbitrarily close to $1$ by imposing $|g(z)|\ge 1-\delta$ when $\Theta(z)=0$, with $\delta>0$ small enough (call this property SIP), if and only if the function $\Theta$ satisfies the following growth property:
$$
\lim_{t\to 1} \inf\left\{ |\Theta(z)| : z \in \mathbb D,\ \rho(z, \Theta^{-1} \{0\} ) \ge t \right\} = 1,
$$
where $\rho$ is the usual pseudohyperbolic distance in the disc, $\rho(z,w):= \left| \frac{z-w}{1-z\bar w}\right|$.
We prove that an inner function is SIP if and only if, for any $\varepsilon>0$, the set $\{ z : 0< |\Theta (z) | < 1-\varepsilon\}$ cannot contain hyperbolic disks of arbitrarily large radius. Thin Blaschke products provide examples of such functions. Some SIP Blaschke products fail to be interpolating (and thus are not thin), while there exist Blaschke products which are interpolating and fail to be SIP. We also study the functions which can be divisors of SIP inner functions.
On a (possibly singular) Riemannian manifold, for each homology class the stable norm measures the length of the shortest possible representative of this class. It is a natural refinement of the concept of systole, and one expects the stable norm to contain a lot of geometric information; on the other hand, the stable norm is generally very hard to compute, so that very few explicit examples are known.
In this talk I will be interested in the stable norm of flat surfaces. More precisely, I will show that it is possible to compute the stable norm of slit flat tori using the Farey sequence. Then, by gluing slit tori, I will show that one obtains half-translation surfaces on which the stable norm is known. Finally, I will show that on these surfaces the number of homology classes minimized by simple curves of length at most $x$ grows subquadratically in $x$.
The main tool of soliton theory (aka completely integrable systems) is the inverse scattering transform (IST), which relies on solving the Faddeev-Marchenko integral equation. The latter amounts to inverting an identity-plus-Hankel operator, which historically was done by classical techniques of integral operators; the theory of Hankel operators was not used. In the last decade, however, interest in the soliton community has started shifting from classical initial conditions of integrable PDEs to more general ones (aka non-classical initial data), for which the classical IST no longer works. In this talk, on the prototypical example of the Cauchy problem for the Korteweg-de Vries (KdV) equation, we show how the classical IST can be extended to serve a broad range of physically interesting initial data. Our approach is essentially based on the theory of Hankel operators.
Potentially Barsotti-Tate deformation rings are an essential tool for obtaining deep arithmetic results, such as the Shimura-Taniyama-Weil conjecture or the Breuil-Mézard conjecture. Nevertheless, their geometry is not yet well understood and exhibits varied behavior, with the appearance of irregular or non-normal points (as shown by examples and conjectures of Caruso-David-Mézard). In this talk we will discuss how Breuil-Kisin moduli stacks can be used to describe the geometry of the stacks of potentially tamely Barsotti-Tate representations (in rank 2, for unramified extensions of $\mathbf{Q}_p$), using the theory of local models for loop groups in mixed characteristic. The main technical tool is an analysis of the $p$-torsion of a tangent complex, used to lift affine charts along scheme-theoretic images between Breuil-Kisin stacks and stacks of Galois representations. Through this procedure, we obtain an algorithm computing explicit presentations of the potentially tamely Barsotti-Tate deformation rings of 2-dimensional Galois representations of unramified extensions of $\mathbf{Q}_p$. This is joint work with B. Le Hung and A. Mézard.
Event page: https://indico.math.cnrs.fr/event/11353/overview
We present several examples of differentiable functions with pathological properties. In particular, we will prove the following result, obtained in collaboration with A. Daniilidis and S. Tapia: for every $N\geq 1$, there exists a function $f$ from $\mathbb{R}^N$ to $\mathbb{R}$, locally Lipschitz and differentiable at every point, such that for every compact connected set $K$ with nonempty interior, there exists $x$ in $\mathbb{R}^N$ such that $K=\{ \lim Df(x_n) : (x_n) \text{ converges to } x\}$.
We are interested in standing waves for the nonlinear Schrödinger equation with double power nonlinearities, whose typical example is the cubic-quintic nonlinearity in $\mathbb{R}^3$. The cubic term is focusing and the quintic term can be chosen to be either focusing or defocusing. I will introduce my recent results on the existence, uniqueness and non-degeneracy of ground state solutions, based on the variational method and the shooting method.
We will discuss computable descriptions of isomorphism classes in a fixed isogeny class of both polarised abelian varieties over finite fields (joint work with Bergström-Marseglia) and Drinfeld modules over finite fields (joint work with Katen-Papikian).
More precisely, in the first part of the talk we will describe all polarisations of all abelian varieties over a finite field in a fixed isogeny class corresponding to a squarefree Weil polynomial, when one variety in the isogeny class admits a canonical lifting to characteristic zero. The computability of the description relies on applying categorical equivalences, due to Deligne and Centeleghe-Stix, between abelian varieties over finite fields and fractional ideals in étale algebras.
In the second part, we will use an action of fractional ideals, inspired by work of Hayes, to compute isomorphism classes of Drinfeld modules. As a first step and a problem of independent interest, we prove that an isogeny class contains a Drinfeld module whose endomorphism ring is minimal if and only if the class is either ordinary or defined over the prime field. We obtain full descriptions in these cases, that can be compared to the Drinfeld analogues of those of Deligne and Centeleghe-Stix, respectively.
The agenda will be as follows:
1) Adoption of the minutes of the Laboratory Council meeting of 2 April 2024 (vote)
2) General information
3) First discussions on the 2025 management plan for faculty positions (Plan de Gestion des Emplois des enseignants-chercheurs)
4) Any other business
We present a new model for heat transfer in compressible fluid flows. The model is derived from Hamilton’s principle of stationary action in Eulerian coordinates, in a setting where the entropy conservation is recovered as an Euler–Lagrange equation. A sufficient criterion for the hyperbolicity of the model is formulated. The governing equations are asymptotically consistent with the Euler equations for compressible heat conducting fluids, provided the addition of suitable relaxation terms. A study of the Rankine–Hugoniot conditions and Clausius–Duhem inequality is performed for a specific choice of the equation of state. In particular, this reveals that contact discontinuities cannot exist while expansion waves and compression fans are possible solutions to the governing equations. Evidence of these properties is provided on a set of numerical test cases.
Translation surfaces arise naturally in many different contexts, for example when unfolding billiard trajectories or when equipping a Riemann surface with an abelian differential. Most visually, they can be described by (finitely or infinitely many) polygons that are glued along edges which are parallel and have the same length.
In this talk, we will be interested in the Veech groups of translation surfaces, that is, the stabilizer of a given translation surface under the natural GL(2,R) action on the moduli space. Although Veech groups have been studied for several decades, they are themselves not yet fully understood. In particular, it is not known in general whether a given abstract group can be realized as the Veech group of a translation surface.
After introducing the realization problem for Veech groups, I will speak about some recent progress in this direction for infinite translation surfaces. This is joint work with Mauro Artigiani, Chandrika Sadanand, Ferrán Valdez, and Gabriela Weitze-Schmithuesen.
In this talk, I will discuss the self-adjointness of the two-dimensional Dirac operator coupled with a singular combination of electrostatic and Lorentz scalar $\delta$-interaction, supported on a closed Lipschitz curve. The main new ingredients are an explicit use of the Cauchy transform on non-smooth curves and a direct link with the Fredholmness of a singular boundary integral operator. This results in a proof of self-adjointness for a new range of coupling constants, which includes and extends all previous results for this class of problems. The study is particularly precise for the case of curvilinear polygons, as the angles can be taken into account in an explicit way. In particular, if the curve is a curvilinear polygon with obtuse angles, then there is a unique self-adjoint realization with domain contained in $H^{1/2}$ for the full range of non-critical coefficients in the transmission condition. The results are based on a joint work with Badreddine Benhellal and Konstantin Pankrashkin.
Certain sets of germs at $+ \infty$ of monotone bijections between neighborhoods of $+ \infty$ form groups under composition. This is the case for germs of functions definable in an o-minimal structure, for certain germs lying in Hardy fields, as well as for more abstract functions defined on fields of formal series, such as transseries.
In this talk I will describe properties of the resulting ordered groups, and show that they can be studied using valuation-theoretic tools adapted to this non-commutative context.
A partition of an integer $n$ is a non-increasing sequence of positive integers summing to $n$. This definition is closely related to the symmetric group and its representation theory. In particular, to study representations over a field of characteristic $p$ one can use the $p$-regularization procedure, introduced by James, which associates to a partition a $p$-regular partition, that is, a partition in which no part is repeated $p$ times or more. A classical probability measure on the set of partitions of $n$ is the Plancherel measure. A spectacular result of Kerov-Vershik and Logan-Shepp (1977) gives an asymptotic limit shape for large partitions drawn according to the Plancherel measure. In this talk, we will show what becomes of this result for the $p$-regularization of large partitions. In particular, a limit shape still exists, and it is given by the "shaking" of the Kerov-Vershik-Logan-Shepp curve.
I will present recent joint work with Magnus Carlson, where we provide formulas for 3-fold Massey products in the étale cohomology of the ring of integers of a number field. Using these formulas, we identify the first known examples of imaginary quadratic fields with a class group of p-rank two possessing an infinite p-class field tower, where p is an odd prime. Furthermore, we establish a necessary and sufficient condition, in terms of class groups of p-extensions, for the vanishing of 3-fold Massey products. As a consequence, we offer an elementary and sufficient condition for the infinitude of class field towers of imaginary quadratic fields. Additionally, we disprove McLeman’s (3,3)-conjecture.
Reduced order models (ROMs) are parametric mathematical models derived from PDEs using previously computed solutions. In many applications, the solution space turns out to be low-dimensional, so that one can trade a minimal loss of accuracy for speed and scalability of the numerical model. ROMs counteract the curse of dimensionality by significantly reducing the computational complexity. Overall, reduced order models have reached a certain level of maturity in the last decade, allowing their implementation in large-scale industrial codes, mainly in structural mechanics. Nevertheless, some hard points remain. Parametric problems governed by advection fields or solutions with a substantial compact support, such as shock waves, suffer from a limited possibility of dimensional reduction and, at the same time, from an insufficient generalization of the model (out-of-sample solutions). The main reason is that the solution space is usually approximated by an affine or linear representation. In this thesis, we aim to contribute to the use of non-intrusive model reduction methods by working on three axes: (i) application to unsteady computations with non-intrusive interpolation methods; (ii) use of hybrid models linking reduced models and numerical simulation models with a domain-decomposition-type approach; (iii) application to complex industrial problems. The flutter problem on a fin will be used as a first complex application case. Indeed, this fluid-structure problem presents very different behaviors according to the flow regimes and is very expensive to simulate without simplifying assumptions. Thus, a hybrid model could accelerate the computation time while remaining accurate in the complex areas. This CIFRE thesis, financed by Ingeliance, is part of the chaire PROVE financed by ONERA and the Nouvelle Aquitaine region.
Holomorphic dynamics studies the evolution of complex manifolds under the iteration of holomorphic maps.
While significant progress has been made in understanding the theory of one-dimensional holomorphic dynamics, the transition to higher dimensions still presents difficult challenges since the situation is vastly different from the one-dimensional case.
Even the study of the dynamics of automorphisms (i.e. bijective holomorphic maps) in two dimensions already poses deep difficulties, and the construction of significant examples is an active area of research.
In this talk, we provide an overview of the dynamics in several complex variables, focusing particularly on the stable dynamics of automorphisms of C^2. We introduce concepts such as Fatou sets, polynomial and transcendental Hénon maps, and limit functions. Finally, we address two recently resolved questions related to the current state of my research (joint work with A. M. Benini and A. Saracco):
Can limit sets for (non-recurrent) Fatou components be hyperbolic?
Can limit sets be distinct?
This paper explores strategic optimization in updating essential medical kits crucial for humanitarian emergencies. Recognizing the perishable nature of medical components, the study emphasizes the need for regular updates involving complex recovery, substitution and disposal processes with the associated costs. The goal is to minimize costs over an unpredictable time horizon. The introduction of the kit-update problem considers both deterministic and adversarial scenarios. Key performance indicators (KPIs), including updating time and destruction costs, are integrated into a comprehensive economic measure, emphasizing a strategic and cost-effective approach.
The paper proposes an innovative online algorithm utilizing the information available at each time period, and demonstrates that it is 2-competitive. Comparative analyses include a stochastic multi-stage approach and four other algorithms representing former and current MSF policies, a greedy improvement of the MSF policy, and the perfect-information approach.
Computational results on various instances show that the online algorithm is competitive in terms of cost with the stochastic formulation, with differences primarily in computation time. This research contributes to a nuanced understanding of the kit-update problem, providing a practical and efficient online algorithmic solution within the realm of humanitarian logistics.
This talk presents a family of algebraically constrained finite element schemes for hyperbolic conservation laws. The validity of generalized discrete maximum principles is enforced using monolithic convex limiting (MCL), a new flux correction procedure based on a representation of spatial semi-discretizations in terms of admissible intermediate states. Semi-discrete entropy stability is enforced using a limiter-based fix. Time integration is performed using explicit or implicit Runge-Kutta methods, which can also be equipped with property-preserving flux limiters. In MCL schemes for nonlinear systems, problem-dependent inequality constraints are imposed on scalar functions of the conserved variables to ensure the physical and numerical admissibility of approximate solutions. After explaining the design philosophy behind our flux-corrected finite element approximations and showing some numerical examples, we turn to the analysis of consistency and convergence. For the Euler equations of gas dynamics, we prove weak convergence to a dissipative weak solution. The convergence analysis to be presented in this talk is joint work with Maria Lukáčová-Medvid’ová and Philipp Öffner.
We consider the moduli space of Abelian differentials on compact Riemann surfaces. It is stratified by the degree of the zeros of the differential and each stratum has a linear structure coming from period coordinates. Each stratum admits an action by GL(2,R) and this action is relevant in the study of billiard dynamics. I aim to discuss works in collaboration with Julian Rüth and Kai Fu in which we design computer programs to guess and certify GL(2,R)-orbit closures.
For threefolds over the complex numbers, much is understood about the rationality problem, i.e. the property of being birational to projective space. However, much less is known over fields that are not algebraically closed. For example, a threefold defined over the real numbers could become rational after base changing to C, but in general, the complex rationality construction may not descend to R. In this talk, we study this question for real threefolds with a conic bundle structure. This talk is based on joint work with S. Frei, S. Sankar, B. Viray, and I. Vogt, and joint work with M. Ji.
Part of the IT team will take part in the PLM team's working week at CIRM from 24 to 28 June 2024, which may lead to longer processing times for requests than usual.
Please plan ahead for equipment pick-ups and reservations, for example.
We are interested here in questions related to the maximal regularity of solutions of elliptic problems $\mathrm{div}\,(A\nabla u) = f$ in $\Omega$ with Dirichlet boundary condition. For the last 40 years, many works have been concerned with such questions when $A$ is a matrix or a function and when $\Omega$ is a Lipschitz domain. Some of them contain incorrect results that are corrected in the present work.
We give here new proofs and some complements for the case of the Laplacian, the Bilaplacian and the operator $\mathrm{div}\,(A\nabla)$, when $A$ is a matrix or a function. We also extend this study to obtain other regularity results for domains having an adequate regularity, and we give new results for the Dirichlet-to-Neumann operator for the Laplacian and the Bilaplacian.
Using the duality method, we can then revisit the work of Lions-Magenes concerning the so-called very weak solutions, when the data are less regular.
Thanks to interpolation theory, we can extend the classes of solutions and obtain new regularity results.
SQIsign is an isogeny-based signature scheme in Round 1 of NIST's recent alternate call for signature schemes. In this talk, we will take a closer look at SQIsign verification and demonstrate that it can be performed entirely on Kummer surfaces. In this way, one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves. Curiously, the isogeny is then defined over $\mathbb{F}_p$ rather than $\mathbb{F}_{p^2}$. Furthermore, we will introduce new techniques that enable verification for compressed signatures using Kummer surfaces, in turn creating a toolbox for isogeny-based cryptography in dimension 2. This is based on joint work with Krijn Reijnders.
In this presentation, we describe a bi-level, multi-leader single-follower pricing problem on the electricity market. The leaders correspond to the energy-producing companies, which must submit a bid to a centralizing agent (the ISO). The ISO selects the bids and distributes the demand over the network. The generators operate several production technologies, with different costs and amounts of pollution produced. We will discuss the specific features of this problem as well as the various algorithms for finding one or several Nash equilibria.
In the first part of the talk, we introduce the concept of controllability of differential equations. Then we give some examples in finite-dimensional (ODE) and infinite-dimensional (PDE) contexts. We recall the controllability results for the transport and heat equations.
In the second part of the talk, we consider the compressible Navier-Stokes equations in one dimension, linearized around a positive constant steady state. This is a coupled system of transport (for the density) and heat-type (for the velocity) equations. We study the boundary null-controllability of this linearized system in an interval when a Dirichlet control acts either only on the density or only on the velocity component at one end of the interval. In this setup, we state some new control results which we have obtained. These controllability results are optimal/sharp with respect to the regularity of the initial states (in the velocity case) and the time (in the density case). The proof is based on a spectral analysis, on solving a mixed parabolic-hyperbolic moments problem, and on a joint parabolic-hyperbolic Ingham-type inequality. This is joint work with Kuntal Bhandari, Rajib Dutta and Jiten Kumbhakar. Finally, the talk ends with some ongoing and future directions of research.
A subset $A$ of $\mathbf{N}$ is called dense if it has positive upper asymptotic density, and sparse if it has zero density. A classical theorem of Furstenberg and Sarközy states that if $A$ is dense, then there exist distinct elements $a, a'$ in $A$ such that $a-a' = n^2$ for some integer $n$. A set $H$ of positive integers is called intersective if the set of squares can be replaced by $H$ in the Furstenberg-Sarközy theorem, in other words if $(A-A) \cap H$ is nonempty. The study of intersective sets lies at the intersection of several areas of mathematics, including number theory, combinatorics and ergodic theory.
In this talk, I will discuss the extent to which this phenomenon still holds when $A$ is a dense subset of the set of prime numbers, or more generally of an arbitrary sparse set $E$ (in place of $\mathbf{N}$). This is joint work with J. T. Griesmer, P.-Y. Bienvenu and A. Le.
We discuss the computational problem of finding pairs of consecutive smooth integers, also known as smooth twins. Such twins have had some relevance in isogeny-based cryptography, and reducing the smoothness bound of these twins aids the performance of these cryptosystems. However, searching for such twins with a small smoothness bound is the most challenging aspect of this problem, especially since the set of smooth twins with a fixed smoothness bound is finite. This talk presents new large smooth twins which have a smaller smoothness bound compared to twins found with prior approaches.
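As a brute-force illustration of the objects in question (not the talk's search method, which targets much larger twins), one can enumerate small smooth twins directly; a minimal sketch assuming SymPy is available:

```python
from sympy import factorint

def is_smooth(n, B):
    """True if every prime factor of n is at most B."""
    return max(factorint(n)) <= B

# Enumerate the B-smooth twins (n, n+1) below a small search limit.
B = 7
twins = [n for n in range(2, 10000)
         if is_smooth(n, B) and is_smooth(n + 1, B)]
print(twins)  # ends with 4374: (4374, 4375) is the largest 7-smooth twin
```

By a classical theorem of Størmer, the set of B-smooth twins is finite for every fixed B, which is why such an enumeration terminates in principle as well as in practice.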
The agenda will be as follows:
1) Approval of the minutes of the council meeting of 11 June (vote)
2) General announcements
3) 2025 staffing plan (Plan de Gestion des Emplois) for teacher-researchers
4) Any other business
In this talk we show that the spectral shift function can be expressed via a (regularised) determinant of the Birman-Schwinger operator in a setting suitable for higher-order differential operators. We then use this expression to show that the spectral shift function for the massless Dirac operator is continuous everywhere except possibly at zero. The behaviour of the spectral shift function at zero is influenced by the presence of a zero eigenvalue and/or resonance of the perturbed Dirac operator.
We will show how to solve F. John's floating-body problem in the case of a fixed object. The goal is to understand how linear water waves behave in the presence of a partially immersed object. The main difficulty comes from the fact that the fluid domain has singularities at the contact points between the object and the water surface.
Wednesday 10/07
14h00 Jurgen Angst (Univ. Rennes)
Title: CLT in total variation for beta-ensembles
Abstract: In this talk, we study the fluctuations of linear statistics associated with beta-ensembles, which are statistical physics models generalizing random matrix spectra. In the specific context of random matrices (e.g. GOE, GUE), the "law of large numbers" is Wigner's theorem, which states that the empirical measure of the eigenvalues converges to the semicircle law, and the fluctuations around equilibrium can be shown to be Gaussian. We will describe how this result generalizes to beta-ensembles and how the speed of convergence to the normal distribution can be quantified. We obtain optimal rates of convergence in the total variation distance and in Wasserstein distances. To do this, we introduce a variant of Stein's method for a generator $L$ that is not necessarily invertible, which allows us to establish the asymptotic normality of observables that are not in the image of $L$. Time permitting, we will also look at the phenomenon of super-convergence, which ensures that convergence to the normal law takes place in very strong metrics, typically $C^{\infty}$-convergence of densities. The talk is based on recent works with R. Herry, D. Malicet and G. Poly.
15h00 Nicolas Juillet (Univ. Haute-Alsace)
Title: Exact interpolation of 1-marginals
Abstract: I shall present a new type of martingale that exactly interpolates any given family of 1-dimensional marginals on $\mathbb{R}$ (satisfying the suitable necessary assumption). The construction makes use of ideas from (martingale) optimal transportation theory and relies on different stochastic orders. I shall discuss related constructions and open questions (joint work with Brückerhoff and Huesmann).
16h00 Kolehe Coulibaly-Pasquier (Inst. Élie Cartan)
Title: On the separation cut-off phenomenon for Brownian motions on high-dimensional rotationally symmetric compact manifolds.
Abstract: Given a family of compact, rotationally symmetric manifolds indexed by the dimension and a weight function, we will study the cut-off phenomenon for the Brownian motion on this family.
Our proof is based on the construction of an intertwined process, a strong stationary time, an estimate of the moments of the covering time of the dual process, and on the concentration-of-measure phenomenon.
We will see a phase transition concerning the existence of a cut-off phenomenon, depending on the shape of the weight function.
Thursday 11/07
11h00 Masha Gordina (Univ. of Connecticut)
Title: Dimension-independent functional inequalities on sub-Riemannian manifolds
Abstract: The talk will review recent results on gradient estimates, log-Sobolev inequalities, and reverse Poincaré and log-Sobolev inequalities on a class of sub-Riemannian manifolds. As curvature bounds are not available in many such settings, we use different techniques, including tensorization and taking quotients. Joint work with F. Baudoin, L. Luo and R. Sarkar.
In this thesis we study couplings of subelliptic Brownian motions in several sub-Riemannian manifolds: the free step-2 Carnot groups, including the Heisenberg group, as well as the matrix groups $SU(2)$ and $SL(2,\mathbb{R})$.
Taking inspiration from previous works on the Heisenberg group, we obtain successful non-co-adapted couplings on $SU(2)$ and $SL(2,\mathbb{R})$ (under strong hypotheses), and also on the free step-2 Carnot groups of rank $n\geq 3$. In particular we obtain estimates of the coupling rate, leading to gradient inequalities for the heat semigroup and for harmonic functions. We also describe the explicit construction of a co-adapted successful coupling on $SU(2)$.
Finally, we develop a new "one-sweep" coupling model for all free step-2 Carnot groups. In particular, this method allows us to obtain relations similar to the Bismut-Elworthy-Li formula for the gradient of the semigroup by studying a change of probability measure on the Gaussian space.
The agenda will be as follows:
1) Approval of the minutes of the council meeting of 2 July (vote)
2) General announcements
3) Approval of the single occupational risk assessment document (Document Unique d'Évaluation des Risques, DUER) (vote)
Follow-up on the survey concerning racist remarks at the IMB
4) Approval of the IMB's DIALOG request (vote)
5) Approval of the text on the IMB's recommendations concerning air travel for work missions (vote)
6) Scientific talk by Yann Traonmilin: AI at the IMB
The directorate proposes to appoint Y. Traonmilin as head of the AI theme at the IMB (vote)
7) Any other business
8) For the Scientific Council only: examination of HDR applications
The main motivation of this talk is to find strong stationary times for Markov processes $(X_t)$, that is, stopping times $T$ such that $X_T$ is at equilibrium and $T$ and $X_T$ are independent. To find strong stationary times $T$, it is natural, and in some cases very easy, to use dual processes $(D_t)$ such that $T$ is the hitting time of a certain state by the dual process. We will study the intertwining between $(X_t)$ and $(D_t)$. We will give examples for Markov chains on finite state spaces, then turn to Brownian motion with set-valued dual processes. Pitman's surprising "2M-X" theorem gives an example of an intertwining of Brownian motion on the circle. We will generalize this theorem to compact Riemannian manifolds and construct strong stationary times. We will study the phase transition in high dimension. Finally, we will consider measure-valued duals.
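For context, here is the classical statement on the real line (a standard fact, of which the circle version mentioned above is a variant): if $B$ is a standard Brownian motion and $M_t = \sup_{s\le t} B_s$, then
$$\bigl(2M_t - B_t\bigr)_{t\ge 0} \overset{(\mathrm{law})}{=} \bigl(\|W_t\|\bigr)_{t\ge 0},$$
where $W$ is a standard Brownian motion in $\mathbb{R}^3$; that is, $2M - B$ is a Bessel(3) process.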
In 1987, Coleman proposed a conjecture for curves of genus greater than one over complete discrete valuation fields of mixed characteristic. Roughly speaking, this conjecture asserts that the residue fields of the torsion points of the Jacobian lying on the curve are unramified over the base field. As an application, (the already proven part of) this conjecture gives another proof of the Manin-Mumford conjecture (Raynaud's theorem) on the finiteness of torsion points on curves. In this talk, after overviewing some known results on the Coleman conjecture by Coleman, Tamagawa, Hoshi, et al., I explain my recent approach to the conjecture using Raynaud's classification of vector space schemes and discuss "quasi-supersingular group schemes", which I introduced in another possible approach to the conjecture.
Our work aims to quantify the benefit of storage flexibilities, such as a battery, on several short-term electricity markets. We focus in particular on two markets: the intraday market (ID) and the activation market of the automatic Frequency Restoration Reserve (aFRR), also known as the secondary reserve. We propose algorithms to optimize the management of a small battery (≤ 5 MWh) on these markets. In this talk, we first present the modeling of the problem, then we show some theoretical results and numerical simulations. We conclude by listing some limitations of the method.
The existence of Kaehler-Einstein metrics on Fano 3-folds can be determined by studying lower bounds of stability thresholds. An effective way to verify such bounds is to construct flags of point-curve-surface inside the Fano 3-folds. This approach was initiated by Abban-Zhuang, and allows us to restrict the computation of bounds for stability thresholds only on flags. We employ this machinery to prove K-stability of terminal quasi-smooth Fano 3-fold hypersurfaces. This is deeply intertwined with the geometry of the hypersurfaces: in fact, birational rigidity and superrigidity play a crucial role. The superrigid case had been attacked by Kim-Okada-Won. In this talk, I will discuss the K-stability of strictly rigid Fano hypersurfaces via Abban-Zhuang Theory. This is a joint work with Takuzo Okada.
The Benjamin-Ono (BO) equation is a nonlocal asymptotic model for the unidirectional propagation of weakly nonlinear, long internal waves in a two-layer fluid. The equation was introduced formally by Benjamin in the '60s and has been a source of active research since: for instance, the study of the long-time behavior of solutions, the stability of traveling waves, and the low-regularity well-posedness of the initial value problem. However, despite the rich theory for the BO equation, it is still an open question whether its solutions are close to those of the original physical system.
In this talk, I will explain the main steps involved in the rigorous derivation of the BO equation.
In this talk, we present a new construction of quantum codes that enables the integration of a broader class of classical codes into the mathematical framework of quantum stabilizer codes. Next, we discuss new connections between twisted codes and linear cyclic codes and provide novel bounds for the minimum distance of twisted codes. We demonstrate that classical tools, such as the Hartmann-Tzeng minimum distance bound, are applicable to twisted codes. This has led to the discovery of five new infinite families and many other examples of record-breaking, and sometimes optimal, binary quantum codes. Additionally, we explore the role of the $\gamma$ value on the parameters of twisted codes and present new findings regarding the construction of twisted codes with different $\gamma$ values but identical parameters.
In this talk we present an information discovery framework in optimization under uncertainty. In this framework, uncertain parameters are assumed to be “discoverable” by the decision-maker under a given discovery (query) budget. The decision-maker therefore decides which parameters to discover (query) in a first stage, then solves an optimization problem with the discovered uncertain parameters at hand in a second stage. We model this problem as a two-stage stochastic program and develop decomposition algorithms for its solution. Notably, one of the algorithms we propose relies on a Dantzig-Wolfe reformulation of the recourse problem. This allows us to treat the cases where the recourse problem involves binary decisions without relying on integer L-shaped cuts. In its implementation, this approach requires coupling Benders’ decomposition with the Dantzig-Wolfe reformulation, the subproblems of the Benders’ algorithm being solved using the column generation algorithm. We present some preliminary results on the kidney exchange problem under uncertainty of the compatibility information.
This talk is an introduction, by way of example, to the theory of causality developed since the late 1990s by Judea Pearl. It earned him, in part, his 2011 ACM Turing Award, the computer-science equivalent of the Abel Prize. We will consider a classical model whose hypotheses are formulated by a causal graph. It notably contains an unobservable common cause and an ethically uncontrollable variable. Adopting computer-science vocabulary, we will treat in detail a query on the execution traces of a non-executable program, using statistics on the traces of another, executable, program. The elements encountered in this analysis will then be used in a presentation of the overall architecture of Pearl's approach. Time permitting, we will discuss some aspects of the probabilistic computations in this context, which can often be reformulated purely in terms of graph theory.
Networks of hyperbolic PDEs arise in different applications, e.g. the modeling of water or gas networks and of road traffic. In the first part of this talk we discuss modeling aspects of coupling conditions for hyperbolic PDEs.
Starting from a kinetic description, we derive coupling conditions for the associated macroscopic equations. For this process a detailed description of the boundary layer is important. In the second part, appropriate numerical methods are considered.
Different high-order approaches are compared and applications to district heating and water networks are discussed.
It is known that the partition function $p(n)$ obeys Benford's law in any integer base $b\ge 2$. In a recent paper, Douglass and Ono asked for an explicit version of this result. In my talk, I will show that for any string of digits of length $f$ in base $b$, there is $n\le N(b,f)$, where
$$N(b,f):=\exp\left(10^{32} (f+11)^2(\log b)^3\right)$$
such that $p(n)$ starts with the given string of digits in base $b$. The proof uses a lower bound for a nonzero linear form in logarithms of algebraic numbers with algebraic coefficients due to Philippon and Waldschmidt. A similar result holds for the plane partition function.
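As a quick empirical companion to this statement (illustration only, no part of the proof), one can tabulate the leading digits of $p(n)$ and compare them with the Benford frequencies $\log_{10}(1+1/d)$; a minimal sketch assuming SymPy is available:

```python
from collections import Counter
from math import log10
from sympy import npartitions  # exact partition numbers p(n)

# Count the leading (base-10) digit of p(n) for n = 1..5000.
counts = Counter(int(str(npartitions(n))[0]) for n in range(1, 5001))
total = sum(counts.values())
for d in range(1, 10):
    # Empirical frequency vs. the Benford prediction log10(1 + 1/d).
    print(d, round(counts[d] / total, 4), round(log10(1 + 1/d), 4))
```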
Several algorithmic problems on supersingular elliptic curves are currently under close scrutiny. When analysing algorithms or reductions in this context, one often runs into the following type of question: given a supersingular elliptic curve E and an object x attached to E, if we consider a random large degree isogeny f : E -> E' and carry the object x along f, how is the resulting f(x) distributed among the possible objects attached to E'? We propose a general framework to formulate this type of question precisely, and prove a general equidistribution theorem under a condition that is easy to check in practice. The proof goes from elliptic curves to quaternionic automorphic forms via an augmented Deuring correspondence, and then to classical modular forms via the Jacquet-Langlands correspondence. This is joint work with Benjamin Wesolowski.
After a quick overview of the general principles of Life Cycle Assessment (LCA), we will investigate how such a tool can help compare the environmental impact of different architectures of computer systems used for teaching purposes in higher education. In particular, we will see how to perform the life cycle inventory of the systems under study from a practical standpoint. We will then review the main results of the life cycle impact assessment and discuss them, as well as the limitations of this study.
Multidimensional simulations of magnetohydrodynamic phenomena occurring in stellar interiors are essential for understanding how stars evolve and die. The highly subsonic flow regimes found in the regions deep inside stars pose severe challenges to conventional methods of computational MHD, such as the popular "high-resolution shock-capturing" schemes. After giving a brief overview of work on astrophysical simulations (including also supernova explosions and common-envelope evolution) in our group at Heidelberg, we summarize the challenges and present suitable numerical solvers optimized for magnetized, low-Mach-number stellar flows, implemented in our Seven-League Hydro code. We show how the choice of the numerical method can drastically affect both the performance of the code and its accuracy in real astrophysical simulations.
Presentation of the team members
A Penrose tiling is made of two polygonal tiles whose frequency ratio equals the golden ratio. Similarly, in the tilings by the aperiodic monotile discovered in 2023 by David Smith, the ratio of the frequencies of the two orientations of the monotile equals the fourth power of the golden ratio. The structure of the Jeandel-Rao tilings is also explained by the golden ratio. Aperiodic tilings not related to the golden ratio are known; however, the characterization of the numbers that can occur as such ratios is a question, raised as early as 1992 by Ammann, Grünbaum and Shephard, that remains open today.
For each positive integer $n$, we introduce a set $\mathcal{T}_n$ of $(n+3)^2$ Wang tiles (unit squares with labeled edges). We represent a tiling by translates of these tiles as a function $\mathbb{Z}^2\to\mathcal{T}_n$ called a configuration. A configuration is valid if the common edge of adjacent tiles has the same label. For each integer $n\geq1$, we consider the Wang shift $\Omega_n$ defined as the set of valid configurations for the tiles $\mathcal{T}_n$.
The family $\{\Omega_n\}_{n\geq1}$ extends the relation between quadratic integers and aperiodic tilings beyond the ubiquitous golden ratio, since the dynamics of $\Omega_n$ involves the positive root $\beta$ of the polynomial $x^2-nx-1$. This root is sometimes called the $n$-th metallic number (https://fr.wikipedia.org/wiki/Nombre_métallique); in particular, it is the golden ratio when $n=1$ and the silver ratio when $n=2$.
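For convenience, this root has the elementary closed form
$$\beta = \frac{n + \sqrt{n^2+4}}{2},$$
which indeed gives the golden ratio $(1+\sqrt{5})/2$ for $n=1$ and the silver ratio $1+\sqrt{2}$ for $n=2$.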
The set $\Omega_n$ is self-similar, aperiodic, and minimal for the shift action. Moreover, there is a polygonal partition of $\mathbb{T}^2$ which is a Markov partition for a $\mathbb{Z}^2$-action on the torus. The partition and the sets of Wang tiles are symmetric, which makes them, like the Penrose tilings, worthy of interest.
Details can be found in the preprints available at
https://arxiv.org/abs/2312.03652 (part I) and
https://arxiv.org/abs/2403.03197 (part II).
The talk will present an overview of the main results.
Freiman's $3k-4$ Theorem states that if a subset $A$ of $k$ integers has a Minkowski sum $A+A$ of size at most $3k-4$, then it must be contained in a short arithmetic progression. We prove a function field analogue that is also a generalisation: it states that if $K$ is a perfect field and if $S\supset K$ is a vector space of dimension $k$ inside an extension $F/K$ in which $K$ is algebraically closed, and if the $K$-vector space generated by all products of pairs of elements of $S$ has dimension at most $3k-4$, then $K(S)$ is a function field of small genus, and $S$ is of small codimension inside a Riemann-Roch space of $K(S)$. Joint work with Alain Couvreur.
Discussion about life after the PhD and academic careers (competitive recruitment, applications), aimed primarily at PhD students, postdocs and ATER at the IMB.
I will discuss recent work with Yann Chaubet and Daniel Han-Kwan (Nantes). We are interested in the long-time dynamics of the nonlinear Vlasov equation on a negatively curved manifold when the interaction kernel is smooth. I will explain that, for small smooth initial data supported away from the zero section, the solutions of this equation converge at an exponential rate to an equilibrium state of the linear problem. To obtain such a result, we use microlocal tools initially developed in the context of the study of chaotic dynamical systems (Baladi, Dyatlov, Faure, Sjöstrand, Tsujii, Zworski).
The classical modular polynomials $\Phi_N$ parametrize pairs of elliptic curves connected by an isogeny of degree $N$. They play an important role in algorithmic number theory and are used in many applications, for example in the SEA point-counting algorithm.
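For scale, the smallest nontrivial example (a classical formula, not specific to this work) is the level-2 modular polynomial
$$\Phi_2(X,Y) = X^3 + Y^3 - X^2Y^2 + 1488\,XY(X+Y) - 162000\,(X^2+Y^2) + 40773375\,XY + 8748000000\,(X+Y) - 157464000000000,$$
whose coefficients are already large; the rapid growth of these coefficients with $N$ is what makes the complexity of computing $\Phi_N$ a delicate question.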
This talk is about a new method for computing modular polynomials. It has the same asymptotic time complexity as the currently best known algorithms, but does not rely on any heuristics. The main ideas of our algorithm are: the embedding of N-isogenies in smooth-degree isogenies in higher dimension, and the computation of deformations of isogenies.
The talk is based on a joint work with Damien Robert.
In recent months, the proliferation of conversational agents such as ChatGPT has had a major impact on Artificial Intelligence research, but also on the way AI is perceived by the public. Because of some rather impressive results, some people wonder whether this agent is as intelligent as we are, whether it can replace us, or even whether it has a conscience. But also, because of the rather crude mistakes it makes, people wonder whether its use should not be prohibited under certain conditions. To answer such questions, it is useful to know more about how ChatGPT works. After that, we will be able to discuss the potential of such tools and the uses to which they can be put.
We study non-conservative hyperbolic systems of balance laws and are interested in the development of well-balanced (WB) numerical methods for such systems. One of the ways to enforce the balance between the flux terms and the source and non-conservative product terms is to rewrite the studied system in a quasi-conservative form by incorporating the latter terms into a modified global flux. The resulting system can be quite easily solved by Riemann-problem-solver-free central-upwind (CU) schemes. This approach, however, does not allow one to accurately treat non-conservative products. We therefore apply a path-conservative (PC) integration technique and develop very robust and accurate path-conservative central-upwind (PCCU) schemes based on flux globalization. I will demonstrate the performance of the WB PCCU schemes on a wide variety of examples.
IOP seminar: no regular session (the slot is freed for the conference below)
https://www.math.u-bordeaux.fr/~skupin/conf-pthomas-2024.html
A manifold is called PSC if it admits a complete Riemannian metric of positive scalar curvature. Towards the end of the 1970s, results of Schoen and Yau based on minimal surface theory and, in parallel, index-theoretic methods developed by Gromov and Lawson made it possible to classify closed PSC 3-manifolds: they are exactly those that decompose as connected sums of spherical manifolds and of products $S^2 \times S^1$. In this talk, we will present a decomposition result for non-compact PSC 3-manifolds: if the scalar curvature decays slowly enough, then the manifold decomposes as a (possibly infinite) connected sum of spherical manifolds and of copies of $S^2 \times S^1$. This result follows recent work of Gromov and of Wang.
This is joint work with F. Balacheff and S. Sabourau.
We establish a logarithmic analogue of the classical theorem of Artin-Furtwängler on the capitulation of ideal classes in the Hilbert class field, obtained by transposing the classical algebraic proof of the principal ideal theorem to the logarithmic class groups of number fields and applying the group-theoretic description of the transfer map in the logarithmic context.
Exceptionally, the Cellule Informatique help desk will be located in office 225, owing to the participation of part of the IT team in the Mathrice national training event (Action Nationale de Formation) at CIRM in Marseille.
We investigate the connection between the propagation of smallness in two dimensions and one-dimensional spectral estimates. The phenomenon of smallness propagation in the plane, originally obtained by Yuzhe Zhu, reveals how the value of solutions in a small region extends to a larger domain. By revisiting Zhu’s proof, we obtain a quantitative version that includes an explicit dependence on key parameters. This refinement enables us to establish spectral inequalities for one-dimensional Schrödinger operators.
In the wake of a wind turbine or a helicopter, helix-shaped vortex filaments naturally form. The motion of vortex filaments is the subject of an important conjecture: as the diameter of the filament tends to 0 (with its intensity held fixed), its motion should follow, to first approximation, the binormal curvature flow. This conjecture has been proved only for straight filaments and for vortex rings. We show, in the context of the 3D incompressible Euler equations with helical symmetry, that sufficiently concentrated helical filaments also follow the binormal curvature flow.
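For reference, the binormal curvature flow of an arclength-parametrized curve $\gamma(s,t)$ reads, in its standard form,
$$\partial_t \gamma = \partial_s \gamma \times \partial_s^2 \gamma = \kappa\, b,$$
where $\kappa$ is the curvature and $b$ the binormal vector of the curve.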
The slides are in English but the talk will be in French.
In a category enriched in a closed symmetric monoidal category, the power object construction, if it is representable, gives a contravariant monoidal action. We first survey the construction, due to Serre, of the power object by (projective) Hermitian modules on abelian varieties. The resulting action, when applied to a primitively oriented elliptic curve, gives a contravariant equivalence of categories (Jordan, Keeton, Poonen, Rains, Shepherd-Barron and Tate).
We then give several applications of this module action:
1) We first explain how it allows one to describe purely algebraically the ideal class group action on an elliptic curve, or the Shimura class group action on a CM abelian variety over a finite field, without lifting to characteristic 0.
2) We then extend the usual algorithms for the ideal action to the case of modules, and use them to explore isogeny graphs of powers of an elliptic curve in dimension up to 4. This allows us to find new examples of curves with many points. (This is joint work with Kirschmer, Narbonne and Ritzenthaler.)
3) Finally, we give new applications to isogeny-based cryptography. We explain how, via the Weil restriction, the supersingular isogeny path problem can be recast as a rank-2 module action inversion problem. We also propose ⊗-MIKE, a novel NIKE (non-interactive isogeny key exchange) that only needs to send j-invariants of supersingular curves and computes a dimension-4 abelian variety as the shared secret.
Quantum algorithms are a major avenue for speeding up certain computations. In this talk, we will present the main problems likely to benefit from them. We will also develop some of the key principles underlying these algorithms.
Many problems, notably in machine learning, can be formulated as optimization problems. Gradient algorithms (such as gradient descent) are very popular methods for solving them. In particular, modifying gradient descent by incorporating an inertial mechanism speeds up its convergence. However, the emergence of large databases makes computing the gradient very costly, so in practice one often prefers sampling techniques that give a cheaper approximation of the gradient. In this presentation, we are interested in whether the acceleration properties obtained by adding inertia to gradient descent can be preserved when such gradient approximations are used.
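As a minimal sketch of the kind of method at stake, here is an inertial (heavy-ball) update with a mini-batch gradient; the quadratic objective, batch oracle and step sizes below are illustrative assumptions, not the speaker's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))  # synthetic least-squares data
b = rng.normal(size=1000)

def stochastic_grad(x, batch=32):
    """Mini-batch estimate of the gradient of 0.5*||Ax - b||^2 / n."""
    idx = rng.integers(0, A.shape[0], size=batch)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch

x = np.zeros(20)
x_prev = x.copy()
step, momentum = 1e-3, 0.9
for _ in range(5000):
    g = stochastic_grad(x)
    # Heavy-ball: gradient step plus an inertial term momentum * (x_k - x_{k-1}).
    x, x_prev = x + momentum * (x - x_prev) - step * g, x
print(0.5 * np.linalg.norm(A @ x - b) ** 2 / A.shape[0])
```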
Joint seminar with OptimAI.
This talk focuses on models for multivariate count data, with emphasis on species abundance data. Two approaches emerge in this framework: the Poisson log-normal (PLN) and the Tree Dirichlet multinomial (TDM) models. The first uses a latent Gaussian vector to model dependencies between species, whereas the second models dependencies directly on observed abundances. The TDM model assumes that the total abundance is fixed, and is then often used for microbiome data, since the sequencing depth (in RNA-seq) varies from one observation to another, leading to a total abundance that is not really interpretable. We propose to generalize the TDM model in two ways: by relaxing the fixed total abundance and by using the Polya distribution instead of the Dirichlet multinomial. This family of models corresponds to Polya urn models with a random number of draws and will be named Polya splitting distributions. In a first part I will present the probabilistic properties of such models, with focus on the marginals and the probabilistic graphical model. Then it will be shown that these models emerge as stationary distributions of multivariate birth-death processes under simple parametric assumptions on the birth-death rates. These assumptions are related to the neutral theory of biodiversity, which assumes no biological interaction between species. Finally, the statistical aspects of Polya splitting models will be presented: the regression framework, the inference, the consideration of a partition tree structure, and two applications on real data.
Constant-curvature Lorentzian metrics with finitely many conical singularities provide new natural examples of geometric structures on the torus. Work of Troyanov on their Riemannian analogue showed that the conformal structure together with the cone angles at the singularities completely classifies Riemannian metrics with conical singularities. In this talk we will be interested in singular de Sitter tori; we will construct examples and present a rigidity phenomenon reminiscent of Troyanov's: de Sitter tori with one singularity of fixed angle are determined by the topological equivalence class of their lightlike bi-foliation. We will see that this geometric question is intimately related to a dynamical problem about piecewise diffeomorphisms of the circle.
In this talk, we investigate intersecting codes. In the Hamming metric, these are codes where two nonzero codewords always share a coordinate in which they are both nonzero. Based on a new geometric interpretation of intersecting codes, we are able to provide some new lower and upper bounds on the minimum length $i(k, q)$ of intersecting codes of dimension k over $\mathbb{F}_q$, together with some explicit constructions of asymptotically good intersecting codes. We relate the theory of intersecting codes over $\mathbb{F}_q$ with the theory of $2$-wise weighted Davenport constants of certain groups, and to nonunique factorization theory. Finally, we will present intersecting codes in the rank metric.
The Gross-Pitaevskii equation describes the motion of superfluids and possesses, among other solutions, stationary vortex solutions. If two vortices are present, they travel together at a constant speed.
In this talk, we will show an orbital stability result for this pair in a metric space. We will explain how to adapt the usual stability proof scheme to such a space, and why the result cannot be proved in a simpler space. This is joint work with Philippe Gravejat and Frédéric Valet.
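For reference, the equation in question, in its standard nondimensional form with the nonvanishing condition $|\Psi| \to 1$ at infinity:
$$i\,\partial_t \Psi + \Delta \Psi + \Psi\bigl(1 - |\Psi|^2\bigr) = 0.$$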
In the context of climate change, numerous foresight studies, generally covering all areas of society, imagine possible futures in order to bring out new narratives and/or guide decision-making. In this presentation, we analyze the digital technologies envisioned in a world that has mitigated or adapted to climate change. To this end, we describe the analysis variables used in our study. They were applied to 14 foresight studies and the corresponding 35 future scenarios. We find that all the scenarios consider digital technology to be present in the future, and that few of them question our relationship to digital technology and its materiality. Our analysis shows the absence of a systemic vision of information and communication technologies in foresight scenarios. We conclude the presentation by discussing alternative ways of imagining scenarios for digital technology.
In the Kidney Exchange Problem (KEP), we consider a pool of altruistic donors and incompatible patient-donor pairs.
Kidney exchanges can be modelled in a directed weighted graph as circuits starting and ending in an incompatible pair or as paths starting at an altruistic donor.
The weights on the arcs represent the medical benefit which measures the quality of the associated transplantation.
For medical reasons, circuits and paths are of limited length, and each is associated with the medical benefit of performing its transplants.
The aim of the KEP is to determine a set of disjoint kidney exchanges of maximal medical benefit or maximal cardinality (all weights equal to one).
In this work, we consider two types of uncertainty in the KEP which stem from the estimation of the medical benefit (weights of the arcs) and from the failure of a transplantation (existence of the arcs).
Both uncertainties are modelled via uncertainty sets with a constant budget.
The robust approach entails finding the best KEP solution in the worst-case scenario within the uncertainty set.
We model the robust counterpart by means of a max-min formulation defined over exponentially many variables associated with the circuits and paths.
We propose different exact approaches to solve it, based either on the result of Bertsimas and Sim or on a reformulation as a single-level problem.
In both cases, the core algorithm is a Branch-Price-and-Cut approach in which the exponentially many variables are generated dynamically.
The computational experiments demonstrate the efficiency of our approach.
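As background, "uncertainty sets with constant budget" are typically of the Bertsimas-Sim form, sketched here in standard notation (the talk's exact sets may differ): for nominal arc weights $\bar w_a$, maximal deviations $\hat w_a$ and a budget $\Gamma$,
$$\mathcal{U}_\Gamma = \Bigl\{ w : w_a = \bar w_a - \delta_a \hat w_a,\ 0 \le \delta_a \le 1,\ \sum_a \delta_a \le \Gamma \Bigr\},$$
so that at most $\Gamma$ arcs (in total deviation) can simultaneously take degraded values.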
This talk explores two advanced numerical methods for solving compressible two-phase flows modelled using the conservative Symmetric Hyperbolic Thermodynamically Compatible (SHTC) model proposed by Romenski et al. I first address the weak hyperbolicity of the original model in multidimensional cases by restoring strong hyperbolicity through two distinct approaches: the explicit symmetrization of the system and the hyperbolic Generalized Lagrangian Multiplier (GLM) curl-cleaning approach. Then, I will present two numerical methods to solve the proposed problem: a high-order ADER Discontinuous Galerkin (ADER-DG) scheme with an a posteriori sub-cell finite volume limiter and an exactly curl-free finite volume scheme to handle the curl involution in the relative velocity field. The latter method uses a staggered grid discretization and defines a proper compatible gradient and a curl operator to achieve a curl-free discrete solution. Extensive numerical test cases in one and multiple dimensions validate both methods' accuracy and stability.
Many phenomena in the life sciences, ranging from the microscopic to macroscopic level, exhibit surprisingly similar structures. Behaviour at the microscopic level, including ion channel transport, chemotaxis, and angiogenesis, and behaviour at the macroscopic level, including herding of animal populations, motion of human crowds, and bacteria orientation, are both largely driven by long-range attractive forces, due to electrical, chemical or social interactions, and short-range repulsion, due to dissipation or finite size effects. Various modelling approaches at the agent-based level, from cellular automata to Brownian particles, have been used to describe these phenomena. An alternative way to pass from microscopic models to continuum descriptions requires the analysis of the mean-field limit, as the number of agents becomes large. All these approaches lead to a continuum kinematic equation for the evolution of the density of individuals known as the aggregation-diffusion equation. This equation models the evolution of the density of individuals of a population, that move driven by the balances of forces: on one hand, the diffusive term models diffusion of the population, where individuals escape high concentration of individuals, and on the other hand, the aggregation forces due to the drifts modelling attraction/repulsion at a distance. The aggregation-diffusion equation can also be understood as the steepest-descent curve (gradient flow) of free energies coming from statistical physics. Significant effort has been devoted to the subtle mechanism of balance between aggregation and diffusion. In some extreme cases, the minimisation of the free energy leads to partial concentration of the mass. Aggregation-diffusion equations are present in a wealth of applications across science and engineering. Of particular relevance is mathematical biology, with an emphasis on cell population models. The aggregation terms, either in scalar or in system form, is often used to model the motion of cells as they concentrate or separate from a target or interact through chemical cues. The diffusion effects described above are consistent with population pressure effects, whereby groups of cells naturally spread away from areas of high concentration. This talk will give an overview of the state of the art in the understanding of aggregation-diffusion equations, and their applications in mathematical biology.
The dynamical Manin-Mumford problem is a problem in algebraic dynamics inspired by classical results from arithmetic geometry.
Given an algebraic dynamical system $(X,f)$, where $X$ is a projective variety and $f$ is a polarized endomorphism of $X$, one wants to determine under which conditions a subvariety $Y$ that contains a Zariski-dense set of points with finite orbit must itself have a finite orbit.
In joint work with Romain Dujardin and Charles Favre, we show that this property holds when $f$ is a regular endomorphism of the projective plane coming from a polynomial endomorphism of ${\mathbf C}^2$ (of degree $d \ge 2$), under the additional condition that the action of $f$ at infinity has no periodic critical points.
The proof is based on techniques from arithmetic geometry and from analytic dynamics, both over ${\mathbf C}$ and over non-archimedean fields.
Joint work with Bas Edixhoven.
We present a generalization of Chabauty's method that allows one to compute the rational points on curves over $\mathbf{Q}$ when the Mordell-Weil rank is strictly smaller than $g+s-1$, where $g$ is the genus of the curve and $s$ is the rank of the Néron-Severi group of the Jacobian.
The idea is to enlarge the Jacobian by taking a $\mathbf{G}_m$-torsor over it, and the algorithm ultimately consists in intersecting the integral points of the $\mathbf{G}_m$-torsor with (an image of) the $\mathbf{Z}_p$-points of the curve.
We can also view the method as a way of rephrasing the quadratic Chabauty method of Balakrishnan, Dogra, Müller, Tuitman and Vonk.
Due to the complexity of real-world planning processes addressed by major transportation companies, decisions are often made considering subsequent problems at the strategic, tactical, and operational planning phases. However, these problems still prove to be individually very challenging. This talk will present two examples of tactical transportation problems motivated by industrial applications: the Train Timetabling Problem (TTP) and the Service Network Scheduling Problem (SNSP). The TTP aims at scheduling a set of trains, months to years before actual operations, at every station of their path through a given railway network while respecting safety headways. The SNSP determines the number of vehicles and their departure times on each arc of a middle-mile network while minimizing the sum of vehicle and late commodity delivery costs. For these two problems, the consideration of capacity and uncertainty in travel times are discussed. We present models and solution approaches including MILP formulations, Tabu search, Constraint Programming techniques, and a Progressive Hedging metaheuristic.
Despite the supreme importance of fluid flow models, the well-posedness of three-dimensional viscous and inviscid flow equations remains unsolved. Promising efforts have recently evolved around the concept of statistical solutions. In this talk, we present stochastic lattice Boltzmann methods for efficiently approximating statistical solutions to the incompressible Navier–Stokes equations in three spatial dimensions. Space-time adaptive kinetic relaxation frequencies are used to find stable and consistent numerical solutions along the inviscid limit toward the Euler equations. With single level Monte Carlo and stochastic Galerkin methods, we approximate responses, e.g., from initial random perturbations of the flow field. The novel combinations of schemes are implemented in the parallel C++ data structure OpenLB and executed on heterogeneous high-performance computing machinery. Based on exploratory computations, we search for scaling of the energy spectra and structure functions in terms of Kolmogorov’s K41 theory. For the first time, we numerically approximate the limit of statistical solutions of the incompressible Navier–Stokes solutions toward weak-strong unique statistical solutions of the incompressible Euler equations in three dimensions. Applications to wall-bounded turbulence and the potential to provide training data for generative artificial intelligence algorithms are discussed.
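For background, the lattice Boltzmann update underlying such solvers, in its standard single-relaxation-time (BGK) form, where the relaxation time $\tau$ is what the space-time adaptive relaxation frequencies mentioned above generalize:
$$f_i(\boldsymbol{x} + \boldsymbol{c}_i \Delta t,\ t + \Delta t) = f_i(\boldsymbol{x}, t) - \frac{\Delta t}{\tau}\Bigl(f_i(\boldsymbol{x}, t) - f_i^{\mathrm{eq}}(\boldsymbol{x}, t)\Bigr),$$
where $f_i$ are the particle distribution functions along the lattice velocities $\boldsymbol{c}_i$ and $f_i^{\mathrm{eq}}$ is the local equilibrium distribution.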
Isogeny-based cryptography is founded on the assumption that the Isogeny problem—finding an isogeny between two given elliptic curves—is a hard problem, even for quantum computers.
In the security analysis of isogeny-based schemes, various related problems naturally arise, such as computing the endomorphism ring of an elliptic curve or determining a maximal quaternion order isomorphic to it.
These problems have been shown to be equivalent to the Isogeny problem, first under some heuristics and subsequently under the Generalized Riemann Hypothesis.
In this talk, we present ongoing joint work with Benjamin Wesolowski, where we unconditionally prove these equivalences, notably using the new tools provided by isogenies in higher dimensions.
Additionally, we show that these problems are also equivalent to finding the lattice of all isogenies between two elliptic curves.
Finally, we demonstrate that if there exist hard instances of the Isogeny problem then all the previously mentioned problems are hard on average.
The agenda will be as follows:
1) Approval of the minutes of the council meeting of 10 September (vote)
2) General announcements
3) Election of a new member of the scientific council (vote)
4) Presentation of the project for the IMB's new website; discussion of the presentation and colours to adopt.
5) Discussion of selection committees, based on the proposals appearing in the open letter
6) Any other business
Remember to give your proxy.
The European project SimCardioTest aims to show that it is possible and useful to carry out in-silico clinical trials for cardiac drugs or medical devices. To this end, a web platform has been created, through which numerical simulations of models representing three possible use cases in cardiology can be run. Guaranteeing the credibility of the simulations is then a key point for industrial use of this platform. It relies on standardized verification and validation procedures for each use case. At the University of Bordeaux, within the IHU LIRYC, we have built and verified, and are now working on validating, a model for studying the energy efficiency of a pacemaker. I will explain this work and the difficulties we encountered.
This research examines a distributionally robust optimization approach applied to the lot-sizing problem with production delays and yield uncertainty under event-wise ambiguity sets. Moment-based, Wasserstein, and K-means-clustering ambiguity sets are used to represent the yield distribution. Static and static-dynamic decision strategies are also considered for computing a solution. In this presentation, the performance of the different ambiguity sets will be shown, with the goal of determining a production plan that is satisfactory and robust to changes in the environment. It will be shown, through numerical experiments, that the model remains tractable for all the ambiguity sets considered and that the production plans obtained remain effective for different decision-making strategies and contexts.
Deep learning has revolutionised image processing and is often considered to outperform classical approaches based on accurate modelling of the image formation process. In this presentation, we will discuss the interplay between model-based and learning-based paradigms, and show that hybrid approaches show great promise for scientific imaging, where interpretation and robustness to real-world degradation are important. We will present two applications, on super-resolution and high-dynamic-range imaging, and on exoplanet detection from direct imaging at high contrast.
Don't forget to subscribe to the maths-ia mailing list!
https://listes.math.u-bordeaux.fr/wws/subscribe/mathsia?previous_action=info
A Coulter counter is an impedance measurement system widely used in blood analyzers to count and size red blood cells, thus providing information about the most numerous cells of the body. In Coulter counters, cells flow through a detection zone where an electric field is imposed, which is disturbed when a cell passes through. The number of these impedance signals yields the red blood cell count, while the cell volume is supposed to be proportional to the amplitude of the signals. However, in real systems, the trajectories of the red blood cells do not allow one to verify the assumptions necessary for an accurate volume measurement. For a few years, IMAG has been developing the YALES2BIO solver for the prediction of red blood cell dynamics under flow. In this presentation, I will describe the fluid-structure problem and the numerical method used, then share how numerical simulation has been used to understand the signals in industrial Coulter counters and to improve the measurements of red blood cell volumes rendered by such systems. In addition, I will discuss how the mechanical properties of RBCs impact the measurements. This work was performed during the PhD theses of Pierre Taraconat and Pierre Pottier (Horiba Medical & IMAG).
Let $K$ be an algebraically closed field of arbitrary characteristic. Let $f \in K[[x,y]]$ be a reduced series and $r(f)$ the number of its irreducible factors. Let $\mathcal{O}=K[[x,y]]/(f)$ and let $\overline{\mathcal{O}}$ be its integral closure. We set $\delta(f)=\dim_K \overline{\mathcal{O}}/\mathcal{O}$ and $\mu(f)=\dim_K K[[x,y]]/(f'_x,f'_y)$, the Milnor number. Milnor showed in 1968 that if $K=\mathbb{C}$,
$$\mu(f)=2\delta(f)-r(f)+1.$$
In 1973, Deligne showed that in arbitrary characteristic,
$$\mu(f)\geq 2\delta(f)-r(f)+1.$$
The aim of this talk is to state a conjecture on the characteristic of $K$ under which equality holds.
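As a quick sanity check of Milnor's formula over $\mathbb{C}$ (an elementary example, not taken from the abstract): for the cusp $f = y^2 - x^3$ one has $r(f) = 1$, $\delta(f) = 1$ and $\mu(f) = \dim_K K[[x,y]]/(3x^2, 2y) = 2$, so that indeed
$$\mu(f) = 2 = 2\,\delta(f) - r(f) + 1.$$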
The standard conjecture of Hodge type concerns intersection numbers of subvarieties of a projective variety. It has numerous consequences in arithmetic; in this talk we will construct abelian varieties A satisfying this conjecture. The main tool for constructing the abelian varieties A is Honda-Tate theory, which relates them to objects of algebraic number theory. We will then be led to study the algebra of Tate classes of A, an invariant more tractable than the set of subvarieties of A.
We will focus on the formation of extreme waves in the open sea, adopting a probabilistic point of view. We will first identify the leading term of the asymptotic expansion of the probability of occurrence of such a wave as the wave height tends to infinity. If an extreme wave occurs, what is the most likely mechanism that produced it? We will answer this question using two toy models. In the case of an integrable system, we will show that a linear superposition mechanism is the most likely. In the case of a strongly resonant system, the main formation mechanism is a nonlinear focusing effect, which increases the probability of occurrence of large waves.
In this talk, we will explore recent developments in the space sector within the NewSpace movement, which is revolutionizing access to space through a more agile and commercial approach. We will also discuss the growing role of space in monitoring and combating climate change, with particular emphasis on the technologies used to collect crucial environmental data. Finally, we will illustrate these advances through the case of LEOBLUE, an innovative company developing direct communication solutions between low-Earth-orbit satellites and smartphones, enabling new large-scale applications.
In statistical learning, many analyses and methods rely on optimization, including its stochastic versions, introduced for example to overcome the intractability of the objective function or to reduce the computational cost of the deterministic optimization step.
In 1951, H. Robbins and S. Monro introduced a novel iterative algorithm, named "Stochastic Approximation", for computing the zeros of a function defined by an expectation with no closed-form expression. This algorithm produces a sequence of iterates by replacing, at each iteration, the unknown expectation with a Monte Carlo approximation based on one sample. The method was later generalized: it is a stochastic algorithm designed to find the zeros of a vector field when only stochastic oracles of this vector field are available.
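In its original form (classical background, not specific to this talk), the recursion for finding a zero of $h(\theta) := \mathbb{E}[H(\theta, X)]$ reads
$$\theta_{n+1} = \theta_n - \gamma_{n+1}\, H(\theta_n, X_{n+1}), \qquad \sum_n \gamma_n = \infty, \quad \sum_n \gamma_n^2 < \infty,$$
where the $X_n$ are i.i.d. samples and $(\gamma_n)$ is the deterministic step-size sequence.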
Stochastic Gradient Descent algorithms are the most popular examples of Stochastic Approximation: the oracles come from a Monte Carlo approximation of a large sum. Possibly less popular are the examples "beyond the gradient case", for at least two reasons. First, they rely on oracles that are biased approximations of the vector field, as occurs when biased Monte Carlo sampling is used to define the oracles. Second, the vector field is not necessarily a gradient vector field. Many examples in statistics, and more generally in statistical learning, are "beyond the gradient case": among them, let us cite compressed stochastic gradient descent, stochastic Majorize-Minimization methods such as the Expectation-Maximization algorithm, and the Temporal Difference algorithm in reinforcement learning.
In this talk, we will show that these "beyond the gradient case" Stochastic Approximation algorithms still converge, even when the oracles are biased, as soon as some parameters of the algorithm are sufficiently well tuned. We will discuss what "tuned enough" means when the quality criterion relies on epsilon-approximate stationarity. We will also comment on the efficiency of the algorithm through its sample complexity. These analyses are based on non-asymptotic convergence bounds in expectation: we will present a unified method to obtain such bounds for a large class of Stochastic Approximation methods, covering both the gradient case and the beyond-the-gradient case. Finally, a variance reduction technique will be described and its efficiency illustrated.
...
On s'intéresse au problème d'optimiser une fonction objectif g(W x) + c^T x pour x entier, où chaque coordonnée de x est contrainte dans un intervalle. On suppose que la matrice W est à coefficient entiers de valeur absolue bornée par Delta, et qu'elle projette x sur un espace de petite dimension m << n. Ce problème est une généralisation du résultat de Hunkenschröder et al. dans lequel g est séparable convexe, et x est dans un 0-1 hypercube.
We will present an algorithm of complexity $n^m (m\Delta)^{O(m^2)}$, under the assumption that the problem can be solved efficiently when $n = m$. This algorithm builds on the work of Eisenbrand and Weismantel on integer linear programming with few constraints.
The algorithm has theoretical applications to several problems, notably mixed-integer linear programming with few constraints, or the knapsack problem in which one must also buy the knapsack.
Stochastic optimization naturally appears in many application areas, including machine learning. Our goal is to go further in the analysis of the Stochastic Average Gradient Accelerated (SAGA) algorithm. To this end, we introduce a new $\lambda$-SAGA algorithm which interpolates between Stochastic Gradient Descent ($\lambda=0$) and the SAGA algorithm ($\lambda=1$). First, we investigate the almost sure convergence of this new algorithm with decreasing step sizes, which allows us to avoid the restrictive strong convexity and Lipschitz gradient hypotheses on the objective function. Second, we establish a central limit theorem for the $\lambda$-SAGA algorithm. Finally, we provide non-asymptotic $L^p$ rates of convergence.
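To fix ideas, here is a minimal Python sketch of one natural way such an interpolation can be written, scaling SAGA's stored-gradient correction by $\lambda$ so that $\lambda=0$ recovers SGD and $\lambda=1$ recovers SAGA; this is our illustrative guess, and the talk's exact definition of $\lambda$-SAGA may differ.

    import numpy as np

    def lambda_saga(grad_i, x0, n, lam, step, iters, rng):
        # One plausible lambda-SAGA iteration (illustrative, not the speaker's
        # definition): the SAGA control variate is scaled by lam.
        # grad_i(i, x) returns the gradient of the i-th component function at x.
        x = x0.copy()
        table = np.array([grad_i(i, x0) for i in range(n)])  # stored gradients
        avg = table.mean(axis=0)
        for _ in range(iters):
            i = rng.integers(n)
            g = grad_i(i, x)
            # lam = 0: plain SGD direction; lam = 1: the SAGA gradient estimator.
            direction = g - lam * (table[i] - avg)
            x -= step * direction
            avg += (g - table[i]) / n  # keep the running average consistent
            table[i] = g
        return x

    # Toy usage on least squares sum_i (a_i . x - b_i)^2 / 2 (synthetic data).
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(50, 3)), rng.normal(size=50)
    gi = lambda i, x: (A[i] @ x - b[i]) * A[i]
    print(lambda_saga(gi, np.zeros(3), 50, lam=0.5, step=0.05, iters=5000, rng=rng))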
...
Separable states are multipartite quantum states that can be written as a convex combination of product states, where product states are multipartite quantum states that can be written as a tensor product of states on each factor space. The quantum separability problem is NP-hard but fundamental to quantum information theory. We propose two relaxation techniques for this problem. From the viewpoint of commutative optimization, we treat the states as matrices of multilinear complex polynomials; this relaxation technique turns out to be similar to the one used for the complex bilinear polynomials arising in the Alternating Current Optimal Power Flow problem. From the viewpoint of non-commutative optimization, we treat the states as tensor products of bounded positive semidefinite variables, and we propose generalized McCormick relaxations using linear matrix inequalities. These two relaxations are the key components of an exact branch-and-cut algorithm.
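For background (our addition, not from the abstract): the classical McCormick envelope relaxes a scalar bilinear term $w = xy$ with $x \in [x_L, x_U]$ and $y \in [y_L, y_U]$ by four linear inequalities; the talk's generalization replaces the scalar variables with bounded PSD matrix variables and the linear inequalities with linear matrix inequalities.
$$w \ge x_L y + y_L x - x_L y_L, \qquad w \ge x_U y + y_U x - x_U y_U,$$
$$w \le x_U y + y_L x - x_U y_L, \qquad w \le x_L y + y_U x - x_L y_U.$$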
To be specified
...
To be defined
The systole of a hyperbolic surface is the length of the shortest closed geodesic on the surface. Determining the maximal possible systole of a hyperbolic surface of a given topology is a classical question in hyperbolic geometry. I will talk about joint work with Mingkun Liu on what random constructions can contribute to this optimization problem.
...
To be specified
We say that a class of finitely generated groups satisfies a Tits alternative if each group in the class is either "small" (the precise meaning may depend on the context) or contains a free group. The original Tits alternative concerns linear groups (in which case small means virtually solvable). Since then, it has been proved in many geometric contexts, often in negative curvature: groups acting on hyperbolic spaces, subgroups of mapping class groups of surfaces or of Out(F_N), groups acting on simplicial complexes with good curvature properties, etc.
I will present a new proof of the Tits alternative for groups acting on buildings of type Ã_2 (objects that I will introduce). The novelty of our approach is that it is based on random walks. Along the way, we also prove a "local-global" theorem: a group all of whose elements fix a point has a global fixed point. This is joint work with Corentin Le Bars and Jeroen Schillewaert.
In this talk we will study the size of the Tate-Shafarevich group of certain abelian surfaces over the function field $\mathbb{F}_q(t)$. Hindry and Pacheco showed that, for abelian varieties over function fields, the size of Sha (when finite) is bounded above by the exponential height. We will show that in dimension 2 their bound is optimal. To this end, we will construct a sequence of Jacobians satisfying the BSD conjecture, then compute their L-functions explicitly using character sums. Using analytic methods, we will estimate the size of the special value, and finally recover the desired bound on the cardinality of their Sha groups.