Among the approaches directed towards the analysis of quantitative properties of programs, one can certainly mention the metric approaches and the differential/resource-aware ones. In both, the notion of (non-)linearity in the sense of linear logic plays a central role: the former aim at treating program distances, and duplication leads to interpreting bounded programs as Lipschitz maps; the latter aim at directly handling duplication in the syntax, and duplication leads to interpreting programs as power series. A natural question is thus whether these two apparently unrelated ways of handling (non-)linearity can somehow be connected. At first glance, there seems to be a "logarithmic" gap between the two: in metric models n duplications result in a Lipschitz function nx, while in differential models they result in a polynomial x^n, which is not Lipschitz. The central insight of my talk is that a natural way to overcome this obstacle and bridge these two viewpoints is to consider differential semantics in the framework of tropical mathematics, a rich combinatorial counterpart of usual algebraic geometry, in tight relation with optimization and computational problems.
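To make the gap-closing concrete (an illustration of mine, not a claim from the abstract): in the tropical semiring, multiplication becomes addition, so a monomial collapses exactly onto the kind of Lipschitz map that metric models produce:

```latex
% Tropical semiring (\mathbb{R} \cup \{+\infty\}, \oplus, \otimes)
% with a \oplus b = \min(a,b) and a \otimes b = a + b.
% An n-fold duplication, i.e. the monomial x^n, tropicalizes to
x^{\otimes n} \;=\; \underbrace{x + x + \cdots + x}_{n\text{ times}} \;=\; n \cdot x,
% a Lipschitz map with constant n, closing the "logarithmic" gap.
```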
I will give an overview of some recent work on applying categorical methods in finite model theory and descriptive complexity. A key idea of a research program started by Abramsky, Dawar et al. in 2017 is that various forms of model comparison game, central to finite model theory, can be encoded as 'game comonads', i.e. resource-indexed comonads on the category of relational structures. For example, the following ingredients can be captured in a syntax-free way: Ehrenfeucht-Fraïssé games, fragments of first-order logic with bounded quantifier rank, and the combinatorial parameter of tree-depth. This approach also covers finite variable fragments, modal and guarded fragments, and more. The pattern of game comonads has been axiomatised at a general level in terms of 'arboreal adjunctions', i.e. adjunctions between an extensional category (typically, in the examples, a category of relational structures) and a resource-indexed family of 'arboreal' categories. I will illustrate an application of this axiomatic framework to the study of 'equi-resource' homomorphism preservation theorems in (finite) model theory, exemplified by Rossman's equirank homomorphism preservation theorem. If time allows, I will also discuss recent work on identifying the expressive power of arboreal adjunctions.
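As a small taste of the construction (a minimal sketch of mine; the relational structure on plays and the coalgebraic side are omitted), the Ehrenfeucht-Fraïssé comonad E_k sends a structure with universe A to a structure whose universe is the set of plays, i.e. non-empty sequences of at most k moves, with the counit returning the last move:

```python
from itertools import product

def ef_universe(A, k):
    """Universe of the Ehrenfeucht-Fraïssé comonad E_k(A):
    non-empty plays (sequences of elements of A) of length at most k."""
    return [s for n in range(1, k + 1) for s in product(A, repeat=n)]

def counit(play):
    """The comonad counit sends a play to its last move."""
    return play[-1]

# For instance |E_2({0,1})| = 2 + 4 = 6 plays.
print(len(ef_universe([0, 1], 2)))  # 6
```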
In this seminar we study classes of trees labelled according to various increasing-labelling models. Such trees are useful for modelling numerous processes. In our research we adopt complementary viewpoints (combinatorial and probabilistic) in order to enrich the known results on classical increasing trees and to propose new classes of trees that are less constrained than the models existing in the literature.
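One classical data point this line of work builds on (my illustration, using Knuth's hook length formula for trees): for a fixed rooted tree shape on n nodes, the number of labelings with 1..n that increase away from the root is n! divided by the product of all subtree sizes.

```python
from math import factorial

def subtree_sizes(tree):
    """tree: nested lists (children of the root); returns all subtree sizes."""
    sizes = []
    def size(t):
        s = 1 + sum(size(c) for c in t)
        sizes.append(s)
        return s
    size(tree)
    return sizes

def increasing_labelings(tree):
    """Hook length formula for trees: n! / (product of subtree sizes)."""
    sizes = subtree_sizes(tree)
    n = max(sizes)  # the root's subtree is the whole tree
    p = 1
    for s in sizes:
        p *= s
    return factorial(n) // p

# A 3-node path admits a single increasing labeling; a 'cherry' admits two.
print(increasing_labelings([[[]]]))   # path: 3!/(3*2*1) = 1
print(increasing_labelings([[], []])) # cherry: 3!/(3*1*1) = 2
```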
A tiling is a covering of the plane by tiles that do not overlap. A subshift is a set of tilings that is invariant under translation and closed (for the usual topology on tilings). Here we are interested in the case where there are finitely many tiles up to translation, the tiles are rhombi, and whenever two tiles are adjacent they share either a single vertex or an entire edge. On these tilings one can impose local rules (in the manner of Wang tiles) by adding colours to the edges of the tiles, with the rule that two tiles sharing an edge must carry the same colour on that edge. In that case, a single geometric tile may come in several copies with different edge colours. Given a subshift X of rhombus tilings, the domino problem on X takes as input a finite set of local rules F and returns True if there exists a tiling of X that respects the local rules F, and False otherwise. This problem is co-recursively-enumerable-complete, and we will present a many-one reduction from the classical domino problem on Wang tiles.
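To fix intuitions on the classical side of the reduction (my sketch, not material from the talk): Wang tiles are unit squares with coloured edges, and the domino problem asks whether they tile the whole plane. By compactness, the plane is tileable iff every n x n square is, so a square that cannot be tiled is a finite certificate of non-tileability, which is why the problem is co-recursively enumerable. A brute-force square tiler, assuming tiles are given as (north, east, south, west) colour tuples:

```python
def tiles_square(tiles, n):
    """Brute force: can an n x n square be tiled with matching edges?
    Failing for some n witnesses that the whole plane cannot be tiled."""
    def ok(grid, i, j, t):
        if i > 0 and grid[i - 1][j][2] != t[0]:  # north must match south above
            return False
        if j > 0 and grid[i][j - 1][1] != t[3]:  # west must match east on the left
            return False
        return True
    def place(grid, k):
        if k == n * n:
            return True
        i, j = divmod(k, n)
        for t in tiles:
            if ok(grid, i, j, t):
                grid[i][j] = t
                if place(grid, k + 1):
                    return True
        grid[i][j] = None  # backtrack
        return False
    return place([[None] * n for _ in range(n)], 0)

# A single tile matching itself tiles every square.
print(tiles_square([("r", "b", "r", "b")], 4))  # True
```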
This talk has three parts. First, I want to explain how monoid acts can be viewed as models of computation. Second, I will sketch my thesis work on categories of actions of topological monoids. Third, if there is time, I would like to explain how this work would need to be extended in order to capture the monoid acts from the first part, point out some obstacles that need to be overcome, and give some intuition on what building such a topos-theoretic context could provide. Note that while I will give complexity-theoretic motivation, I want to apply this theory to monoid and group acts of all kinds, so feel free to ask questions about your favourite ones.
The notion of hypergraph has been introduced as a generalization of graphs in which each hyperedge is a subset of the set of vertices, with no constraint on its cardinality. A widely investigated problem, related both to graphs and to hypergraphs, concerns their characterization and reconstruction from their degree sequences. For graphs, this problem was efficiently solved in 1960 by Erdős and Gallai, while no efficient solution is possible in the case of hypergraphs, even in the simple case of 3-uniform ones, as shown in 2018 by Deza et al. So, to reduce the NP-hard core of the hypergraph reconstruction problem, we consider a class of degree sequences defined by Deza et al. that shows interesting geometrical and algebraic properties. We propose some ideas on how to use those properties to attack the related reconstruction problem.
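For reference (my sketch of the classical criterion, not code from the talk), the Erdős-Gallai theorem yields an immediate polynomial-time test in the graph case:

```python
def is_graphical(seq):
    """Erdős-Gallai (1960): a non-increasing sequence d_1 >= ... >= d_n of
    non-negative integers is the degree sequence of a simple graph iff the
    total degree is even and, for every k,
        sum_{i<=k} d_i  <=  k(k-1) + sum_{i>k} min(d_i, k)."""
    d = sorted(seq, reverse=True)
    if sum(d) % 2 == 1:
        return False
    for k in range(1, len(d) + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphical([3, 3, 2, 2, 2]))  # True, e.g. the 'house' graph
print(is_graphical([4, 4, 4, 1, 1]))  # False: fails the inequality at k = 3
```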
En sémantique des langages, l'équivalence contextuelle de programmes est une notion clef, mais sa définition se prête mal à la manipulation, on cherche donc des modèles corrects et complets lui donnant une expression plus pratique. Les jeux et la sémantique interactive sont de tels modèles, applicables à beaucoup de langages et donnant une équivalence coïnductivement définie. Dans cette présentation je vais esquisser visuellement et de manière intuitive un formalisme de jeux général qui se prête bien à la formalisation dans un assistant de preuve. Je vais également décrire différents genres d'arbres inductifs et coïnductifs ainsi qu'une manière pratique de construire des jeux à partir de composants plus élémentaires.
In this seminar, I will focus on the characterisation of bijective digitized rotations and reflections. Although the characterisation of bijective digitized rotations in 2D is well known, its extension to 3D is still an open problem. A certification algorithm exists that allows one to verify that a digitized 3D rotation defined by a quaternion is bijective. In this seminar, we show how we use geometric algebra to represent a bijective digitized rotation as a pair of bijective digitized reflections. Visualizing bijective digitized reflections in 3D using geometric algebra leads to a conjectured characterization of 3D bijective digitized reflections and, thus, rotations. So far, every known quaternion that defines a bijective digitized rotation satisfies the conjecture. We also propose a method for approximating any 3D digitized reflection by a conjectured bijective one. Some experimental results will be shown with DGtal.
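As a toy experiment in the 2D case (my sketch, not the talk's certification algorithm), one can probe injectivity of a digitized rotation on a finite window by brute force; a genuine certificate requires the arithmetic characterisation, and bijectivity also requires surjectivity:

```python
import math

def digitized_rotation(theta):
    """Rotate a lattice point, then round each coordinate
    (round() stands in here for the usual floor(. + 1/2) digitization)."""
    c, s = math.cos(theta), math.sin(theta)
    return lambda p: (round(c * p[0] - s * p[1]), round(s * p[0] + c * p[1]))

def injective_on_window(theta, r=30):
    """Finite-window probe only: is the digitized rotation injective on [-r,r]^2?"""
    rot = digitized_rotation(theta)
    pts = [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1)]
    images = [rot(p) for p in pts]
    return len(images) == len(set(images))

# The angle of the Pythagorean triple (3,4,5), cos = 3/5 and sin = 4/5, is
# reportedly the smallest member of the bijective family in the known 2D
# characterisation; an irrational angle like pi/6 is not expected to be.
print(injective_on_window(math.atan2(4, 3)))
print(injective_on_window(math.pi / 6))
```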
We introduce open parity games, a compositional approach to parity games obtained by adding open ends to the usual notion. We introduce the category of open parity games, defined using standard definitions for graph games. We also define a graphical language for open parity games as a prop; props have recently been used as graphical languages in many applications. We introduce a suitable semantic category inspired by the work of Grellois and Melliès on the semantics of higher-order model checking. Computing the set of winning positions in open parity games yields a functor to the semantic category. Finally, by interpreting the graphical language in the semantic category, we show that this computation can be carried out compositionally. We also discuss current work on an efficient implementation of a compositional solver of graph games.
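For contrast with the compositional approach (my sketch of standard material, not from the talk), the basic building block of classical whole-game parity game solvers such as Zielonka's algorithm is the attractor computation:

```python
def attractor(vertices, edges, owner, target, player):
    """Set of vertices from which `player` can force the play into `target`.
    `owner[v]` names the player who chooses the successor at v."""
    pred = {v: set() for v in vertices}
    out = {v: 0 for v in vertices}
    for u, v in edges:
        pred[v].add(u)
        out[u] += 1
    attr, queue, count = set(target), list(target), dict(out)
    while queue:
        v = queue.pop()
        for u in pred[v]:
            if u in attr:
                continue
            if owner[u] == player:          # one good successor suffices
                attr.add(u); queue.append(u)
            else:
                count[u] -= 1               # opponent: all successors must lead in
                if count[u] == 0:
                    attr.add(u); queue.append(u)
    return attr

V = [0, 1, 2]
E = [(0, 1), (1, 2), (2, 0), (0, 2)]
owner = {0: 0, 1: 1, 2: 0}
print(attractor(V, E, owner, {2}, player=0))  # {0, 1, 2}
```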
Monadic interpreters have long been used as a means to embed arbitrary computations in purely functional contexts. At its core, the idea is elementary: the object language of interest is implemented as an executable interpreter in the host language, and monads are simply the abstraction used to embed features such as side effects, failure, or non-determinism. By building these interpreters on top of the free monad, the approach has offered a comfortable design point, notably enabling an extensible syntax, reusable modular components, structural compositional definitions, as well as algebraic reasoning. The approach has percolated beyond its programming roots: it is also used as a way to formalize the semantics of computational systems, programming languages notably, in proof assistants based on dependently typed theories. In such assistants, the host language is even more restricted: programs are all pure, but also provably terminating. Divergent programs can nonetheless be embedded using, for instance, the Capretta monad: intuitively, a lazy, infinite (coinductive) tree models the dynamics of the computation. Interaction trees are a specific implementation, in the Coq proof assistant, of a set of tools realizing this tradition. They provide a coinductive implementation of the iterative free monad, equipped with a set of combinators, allowing notably for general recursion. Each iterator comes with its equational theory, established with respect to a notion of weak bisimulation (termination sensitive, but ignoring the amount of fuel consumed), and practical support for equational reasoning. Further effects are implemented into richer monads via a general notion of interpretation, allowing one to introduce the missing algebras required for proper semantic reasoning. Beyond program equivalence, support for arbitrary heterogeneous relational reasoning is provided, typically allowing one to prove a compilation pass correct. Introduced in 2020, the project has spawned realistic applications (notably, interaction trees are used to model the semantics of LLVM IR), as well as extensions that reduce the necessary boilerplate or offer proper support for non-determinism. In this talk, I will attempt to paint an illustrative overview of the core ideas and contributions constitutive of this line of work.
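To convey the core idea in miniature (a Python analogue of mine, not the Coq ITree library; real interaction trees are coinductive and include a Tau constructor for silent steps and divergence), the object program is a tree of visible events, and handlers give the events their semantics:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Ret:                     # pure computation: just return a value
    value: Any

@dataclass
class Vis:                     # visible event plus its continuation
    event: Any
    k: Callable[[Any], "Itree"]

Itree = Ret | Vis

def interp(handler, t):
    """Fold a (finite) event tree into a value by handling each event."""
    while isinstance(t, Vis):
        t = t.k(handler(t.event))
    return t.value

# Object program: read two variables, return their sum.
prog = Vis(("get", "x"), lambda x:
       Vis(("get", "y"), lambda y:
       Ret(x + y)))

env = {"x": 2, "y": 40}
print(interp(lambda ev: env[ev[1]], prog))  # 42
```

Swapping the handler reinterprets the same syntax under different effects (a logging handler, a stateful one, and so on), which is the modularity the abstract refers to.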
This talk proposes full convexity as an alternative definition of digital convexity, valid in arbitrary dimension. It solves many problems related to the usual definitions, such as possible disconnectedness or failure of simple connectedness, while keeping their desirable features. Fully convex sets are digitally convex, but are also connected and simply connected. They have a morphological characterisation, which induces a simple convexity test algorithm. Arithmetic planes are fully convex too. Full convexity implies local full convexity, hence it enables local shape analysis, with an unambiguous definition of convex, concave and planar points. We also propose a natural definition of tangent subsets to a digital set. It gives rise to the tangential cover in 2D, and to consistent extensions in arbitrary dimension. We present two applications of tangency: the first is a simple algorithm for building a polygonal mesh from a set of digital points, with a reversibility property; the second is the definition and computation of shortest paths within digital sets. In the second part of the talk, we study the problem of building a fully convex hull. We propose an iterative operator for this purpose, which computes a fully convex envelope in finite time. We can even build a fully convex envelope within another fully convex set (a kind of relative convex hull). We show how it induces several natural digital polyhedral models, whose cells of different dimensions are all fully convex sets. As a perspective on this work, we study the problem of fully convex set intersection, which is the last step toward a full digital analogue of continuous convexity.
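For comparison (my sketch; this is the usual notion the talk improves upon, not full convexity itself), the standard 2D digital convexity test checks that S equals the lattice points of its convex hull; note that a digitally convex set in this sense may be disconnected:

```python
import numpy as np
from scipy.spatial import Delaunay

def digitally_convex(points):
    """Usual digital convexity: S is digitally convex iff S = conv(S) ∩ Z^2.
    Assumes the points are not all collinear; floating-point caveats apply
    for lattice points exactly on the hull boundary."""
    S = {tuple(p) for p in points}
    pts = np.array(sorted(S))
    tri = Delaunay(pts)  # point-in-hull test via find_simplex
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    for x in range(lo[0], hi[0] + 1):
        for y in range(lo[1], hi[1] + 1):
            if tri.find_simplex((x, y)) >= 0 and (x, y) not in S:
                return False
    return True

print(digitally_convex([(0, 0), (1, 0), (0, 1), (1, 1)]))  # True: unit square
print(digitally_convex([(0, 0), (1, 0), (0, 1), (2, 2)]))  # False: (1,1) missing
```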
Allostery is a phenomenon of fundamental importance in biology that enables the regulation of function and the dynamic adaptability of enzymes and proteins. Despite its discovery more than a century ago, allostery remains a biophysical enigma, sometimes called the "second secret of life". The difficulty is mainly associated with the complex nature of allosteric mechanisms, which manifest themselves as the alteration of the biological function of a protein/enzyme (i.e., the binding of a substrate/ligand to the active site) by the binding of "another object" ("allos stereos" in Greek) at a site distant (more than a nanometre) from the active site: the effector site. Thus, at the heart of allostery lies the propagation of a signal from the effector site to the active site through a dense protein matrix, where one of the main challenges is the elucidation of the physico-chemical interactions between amino-acid residues that allow communication between the two sites: the allosteric paths. Here we propose a multidisciplinary approach combining theoretical-chemistry methods, involving molecular-dynamics simulations of protein motions, (bio)physical analyses of allosteric systems, including multiple sequence alignments of known allosteric systems, and mathematical tools based on graph theory and machine learning, which can greatly help in understanding the complexity of the dynamic interactions involved in the various allosteric systems. The project aims to develop fast and robust tools to identify unknown allosteric paths. The characterisation and prediction of allosteric spots can elucidate and fully exploit allosteric modulation in enzymes and in DNA-protein complexes, with potentially far-reaching applications in enzyme engineering and drug discovery.
Reconstructing digital humans is a key problem in 3D vision, with many applications in autonomous driving, robotics, and Virtual and Augmented Reality, and it has attracted a lot of research for decades. In this talk I will discuss non-invasive hardware-based solutions for jointly capturing human body shape and motion. We will see that efficient modelling of human body deformation is key to enabling real-time tracking. I will also present recent work on AI-based solutions both for human shape reconstruction from a single color image and for full-body animation driven by a minimal signal such as a skeleton. We will see that deep learning opens new perspectives and possibilities to create realistic digital humans and animate them in digital spaces.
Constructive modal logics are obtained by adding to intuitionistic logic a minimal set of axioms for the box and diamond modalities. During this talk I will present two new semantics for proofs in these logics. The first semantics captures syntactically the proof equivalence enforced by non-duplicating rule permutations, and it is defined by means of the graphical syntax of combinatorial proofs. The second semantics captures a coarser notion of proof equivalence, and it is given by means of winning innocent strategies of a two-player game over graphs encoding formulas. This latter semantics comes with a notion of compositionality and indeed defines the first concrete denotational model for these logics.
Martin-Löf type theory is among the most accomplished instances of the proofs-as-programs correspondence: dependent types and inductive types make it possible to specify complex properties of programs, and the universe hierarchy provides enough logical strength to encode most mathematical constructions, which makes it a tool of choice for proof assistants! However, the inductive equality provided by the theory is not well suited to mathematical reasoning, since it encodes equality of programs ("intensionality") rather than equality of behaviours ("extensionality"). This has unpleasant consequences: it is impossible to prove that the functions sending n to n+2 and to 2+n respectively are equal, it is impossible to quotient a type by a relation, and so on. The idea of an observational type theory was developed precisely to remedy this: it provides these desirable extensionality principles while preserving the proofs-as-programs correspondence and the properties that make the theory so practical (normalization, canonicity, decidability of type checking...). In this talk, I will present TT^obs, a conceptually simple alteration of Martin-Löf type theory that turns it into a full observational theory; I will show a few examples of its use, and I will sketch its meta-theory if time permits.
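As a point of comparison (my illustration, not taken from the talk), here is how the n+2 versus 2+n example plays out in Lean 4, whose underlying theory derives function extensionality from quotient types, in contrast with vanilla intensional Martin-Löf type theory, where this equality is not provable:

```lean
-- Provable in Lean 4 thanks to funext; not provable in pure intensional MLTT.
example : (fun n : Nat => n + 2) = (fun n : Nat => 2 + n) :=
  funext fun n => Nat.add_comm n 2
```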
Every multivariate polynomial P(X_1,...,X_n) can be written as a sum of monomials, i.e., a sum of products of variables and field constants. The natural size of such an expression is the number of monomials. But what happens if we add a new level of complexity by considering expressions of the form: sum of products of sums (of variables and constants)? It now becomes much less clear how to show that a given polynomial has no small expression of this form. In this talk we solve exactly this problem. More precisely, we prove that certain explicit polynomials have no polynomial-size "sum of products of sums" (SPS) representation. We also obtain similar results for SPSP, SPSPS, etc., i.e., for all constant-depth expressions.
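A small illustration (mine, not the talk's) of why the extra level changes the game: already a single product of sums can be exponentially more succinct than the sum-of-monomials form, so lower bounds against SPS must argue against genuinely more powerful representations:

```latex
% A product of n sums, of size O(n), expands into 2^n monomials:
\prod_{i=1}^{n} (x_i + 1) \;=\; \sum_{S \subseteq \{1,\dots,n\}} \ \prod_{i \in S} x_i .
```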
We introduce a new formalisation of language computation, called keyboards. We consider a set of atomic operations (writing a letter, erasing a letter, moving right or left) and we define a keyboard as a set of finite sequences of such operations, called keys. The generated language is the set of words obtained by applying some non-empty sequence of those keys. Unlike in classical models of computation, every key can be applied at any time. We define various classes of languages based on different sets of atomic operations, and compare their expressive powers. We also compare them to rational, context-free and context-sensitive languages. We obtain a strict hierarchy of classes, whose expressiveness is orthogonal to that of the aforementioned classical models. We also study closure properties of those classes, as well as fundamental complexity problems on keyboards.
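A small simulator (my sketch, not the authors' code; in particular the behaviour of erase at the left end is my guess) makes the model concrete: a configuration is a word with a cursor, a key is a sequence of atomic operations, and the language collects the words reachable by non-empty sequences of key applications:

```python
def apply_op(word, pos, op):
    if op[0] == "write":                    # insert a letter at the cursor
        return word[:pos] + op[1] + word[pos:], pos + 1
    if op == ("erase",):                    # erase the letter left of the cursor
        return (word[:pos - 1] + word[pos:], pos - 1) if pos > 0 else (word, pos)
    if op == ("left",):
        return word, max(0, pos - 1)
    if op == ("right",):
        return word, min(len(word), pos + 1)
    raise ValueError(op)

def apply_key(config, key):
    word, pos = config
    for op in key:
        word, pos = apply_op(word, pos, op)
    return word, pos

def language_up_to(keys, depth):
    """Words reachable by 1..depth key applications from the empty word."""
    configs, seen = {("", 0)}, set()
    for _ in range(depth):
        configs = {apply_key(c, k) for c in configs for k in keys}
        seen |= {w for w, _ in configs}
    return seen

# A keyboard with the single key 'write a; write b' generates (ab)^n.
print(sorted(language_up_to([(("write", "a"), ("write", "b"))], 3)))
# ['ab', 'abab', 'ababab']
```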
We study the decomposition of multivariate polynomials as sums of powers of linear forms. In this talk, we focus on the following problem: given a homogeneous polynomial of degree 3 over a field, decide whether it can be written as a sum of cubes of linearly independent linear forms over an extension field. This task can be equivalently expressed as a decomposition problem for symmetric tensors of order 3. Even if the input polynomial has rational coefficients, the answer may depend on the choice of the extension field. We study the cases where the extension field is either the real or the complex numbers. Our main result is an algorithm that solves this problem in polynomial time when implemented in the bit model of computation. Furthermore, contrary to the previous algorithms for the same problem, our algorithm is algebraic and does not make any appeal to polynomial factorization. We also discuss how our algorithm can be extended to other tensor decomposition problems. This talk is based on a joint work with Pascal Koiran.
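A classical example (my illustration, not from the talk) of how the answer depends on the field: the binary cubic xy(x+y) is a sum of two cubes of linearly independent linear forms over the complex numbers, but not over the reals, since a real binary cubic with three distinct real roots has real rank 3, and three linear forms in two variables can never be linearly independent:

```latex
% Over \mathbb{C}, with \omega = e^{i\pi/3}:
x\,y\,(x+y) \;=\; -\frac{i}{3\sqrt{3}}
  \left[ (x+\omega y)^3 - (x+\overline{\omega}\, y)^3 \right].
```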
In this talk, we will see how to build trace models of programming languages in a systematic way, using labelled transition systems obtained via operational game semantics. The primary purpose of these models is to characterize contextual equivalence, the maximal observational equational theory, thanks to full-abstraction results. We will consider higher-order programming languages with features that include mutable store (local references), control operators (call/cc), cryptographic operators (dynamic sealing), and rich type systems (algebraic data types, parametric polymorphism). We will see how to apply this framework to prove a fully abstract compilation result from parametric polymorphism to the untyped cryptographic lambda-calculus.