Realizability and parametricity are two well-known approaches to the semantics of System F, the archetypal language for polymorphism. Many well-known realizability semantics can be recast in a simple topological form, as induced by closure operators over sets of lambda-terms. This allows us to generalize some completeness results known in the literature to a wide class of semantics (including Krivine's saturated sets and several variants of Girard's reducibility candidates), and to relate realizability with parametricity and dinaturality, an approach to parametricity arising from the functorial semantics of polymorphism. Our main result is that for a general class of realizability semantics (those which satisfy a particular topological property) one can prove a ``parametricity theorem'' stating that closed realizers are parametric, and a ``dinaturality theorem'' stating that closed realizers of positive types are dinatural. We compare our results with Wadler's approach, which sees realizability and parametricity as some sort of adjoint functors. Finally, we briefly discuss the case of Girard's original definition of reducibility candidates, whose ``not trivial and somehow mysterious'' [Riba 2007] structure does not yet fit within our approach.
Finding lower bounds in complexity theory has proven to be an extremely difficult task. We analyze three proofs of lower bounds that use heavy techniques from algebraic geometry through the lens of dynamical systems. Interpreting programs as graphings (generalizations of dynamical systems that model Girard's Geometry of Interaction), we show that the three proofs share the same structure and use algebraic geometry to bound the topological entropy of the system representing the program. This work, joint with Thomas Seiller, aims at proposing Geometry of Interaction-derived methods to study dynamical properties of models of computation beyond Curry-Howard.
In this talk, I present a framework for recursive proofs of coinductive predicates, which are defined via functor liftings to fibrations. This framework is based on the so-called later modality and Löb induction. Intuitively, the role of the later modality is to control the use of coinduction hypotheses. Since the framework works on certain fibrations, it can be instantiated in very diverse situations such as, for instance, set-based predicates and relations, quantitative predicates and relations, syntactic first-order logic, or dependent type theory. Apart from showing the underlying technical constructions of the framework, I will demonstrate how it can be used in these examples. Moreover, I will briefly talk about some recent progress, in collaboration with Katya Komendantskaya and Yue Li, towards automatic proof search for this framework.
I will begin with a basic introduction to the various tools used in my research area, namely category theory, homotopical algebra in the style of Quillen, and the interpretation of logic in the style of Lawvere. No prior knowledge is required, and I will rely on algebraic analogies accessible to any mathematician (monoids, groups, etc.) and on examples relevant to the themes of the LIMD. Once these notions are introduced, I will present the central result of recent joint work with Paul-André Melliès: given a bifibration E-->B whose base and fibres are equipped with model category structures, under which conditions do these glue together into a model category structure on the total category E? Finally, I will try to explain the motivations for this work, which originates both in categorical homotopy theory and in the semantics of dependent type theory.
Structural operational semantics is a family of syntactic formats for specifying the operational semantics of programming languages, in the form of a labelled transition system. Fiore and his collaborators have proposed an abstract framework for structural operational semantics based on bialgebras, in which they managed to prove that bisimilarity is a congruence. However, their framework does not scale well to languages with variable binding. We give an abstract account of structural operational semantics based on Weber's parametric right adjoint monads, which encompasses variable binding. On the example of pi-calculus, the key idea is that, while Fiore models the syntax through a monad on a certain presheaf category, we use a subtly different presheaf category inspired by our previous work on sheaf models for concurrent languages. The crucial consequence is that the relevant monad is a parametric right adjoint. This yields a very simple proof of congruence of bisimilarity.
The ring of multivariate polynomials F[x_1, x_2, ..., x_n] is a unique factorization domain. We consider the following problem: ``Is there an 'efficient' algorithm that outputs a non-trivial factor of a given input polynomial?'' This question has applications in algebraic complexity, for example in proving the connection between polynomial identity testing (PIT) and lower bounds. In this talk, we will consider the closure of various classes of polynomial families under factorization. [Kaltofen86-90] studied this problem for VP. A slew of work in recent years has brought it back into the limelight: [DSY09] studied circuits of small depth and factors of a special form, [Oliveira16] studied formulas of small depth, [DSS18] studied ABPs and formulas, and [CKS18] studied the polynomial class VNP. We will take a look at these algorithms and state some open problems in the area.
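As a toy illustration of the problem statement (and not of any of the cited algorithms), a general-purpose computer algebra system can already produce non-trivial factors of an explicit multivariate polynomial. A minimal sketch using sympy:

```python
# Toy illustration of the factoring problem, not of the cited
# circuit-class-preserving algorithms: sympy's general-purpose
# factorization over Q recovers the irreducible factors.
from sympy import symbols, expand, factor

x1, x2, x3 = symbols('x1 x2 x3')
p = expand((x1 + x2 + 1) * (x1 * x3 - x2 + 2) * (x2 + x3))
print(factor(p))  # prints the product of the three irreducible factors
```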
Higher-dimensional rewriting originates in Squier's work on the word problem for monoids. From a presentation of a monoid, Squier was able to compute low-dimensional homotopical invariants of that monoid. Since then, the theory has been adapted to other structures, in particular to PROs, where it makes it possible to prove coherence theorems such as Mac Lane's theorem for monoidal categories. Moreover, in the case of monoids, the rewriting constructions have been extended to higher dimensions. In this talk, I will show how these rewriting theories for various structures can be unified. In particular, this makes it possible to reinterpret the rewriting constructions in homotopical terms. This reinterpretation relies in particular on the notion of cubical omega-category and on the Gray product.
In this talk we will discuss a link between the geometry of continued fractions and global relations for singularities of complex projective toric surfaces. The results are based on recent developments in the theory of lattice trigonometric functions, which are invariant under the action of the group Aff(2,Z).
Markov numbers are the positive integers that appear in triples of solutions of the Diophantine equation x^2 + y^2 + z^2 = 3xyz, called the Markov equation. All solutions can be found from a single triple by a simple algorithm. Yet there is a famous open problem, formulated by Frobenius: is it true that, given a positive integer z, there is at most one pair (x, y) of positive integers with x < y < z such that (x, y, z) is a solution? These numbers arise in the context of continued fractions and of the Diophantine approximation of irrational real numbers by rationals. They also appear in many other areas of mathematics, such as binary quadratic forms, hyperbolic geometry, and combinatorics on words. The aim of this talk is to present part of the Markov theory built around the Markov equation and to state the uniqueness conjecture on Markov numbers. Finally, we will introduce an involution of the irrationals that may be relevant to the problem.
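As a minimal illustration of the simple algorithm alluded to above: from a solution (x, y, z), fixing two coordinates, the Markov equation is quadratic in the third, and its other root yields a new solution, for instance (x, y, 3xy - z). Starting from (1, 1, 1), these Vieta moves generate all triples. A short Python sketch (the bound max_z is just an enumeration cutoff):

```python
def markov_triples(max_z):
    """Enumerate Markov triples (x, y, z) with x <= y <= z <= max_z,
    starting from (1, 1, 1) and applying the Vieta moves that replace
    one coordinate by the other root of the quadratic, e.g.
    (x, y, z) -> (x, y, 3*x*y - z)."""
    seen = set()
    stack = [(1, 1, 1)]
    while stack:
        t = tuple(sorted(stack.pop()))
        if t in seen or t[2] > max_z:
            continue
        seen.add(t)
        x, y, z = t
        stack.append((3*y*z - x, y, z))
        stack.append((x, 3*x*z - y, z))
        stack.append((x, y, 3*x*y - z))
    return sorted(seen, key=lambda t: t[2])

for t in markov_triples(1000):
    # Sanity check: every enumerated triple solves the Markov equation.
    assert t[0]**2 + t[1]**2 + t[2]**2 == 3 * t[0] * t[1] * t[2]
    print(t)  # (1,1,1), (1,1,2), (1,2,5), (1,5,13), (2,5,29), ...
```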
Implicative algebras, developed by Alexandre Miquel, are very simple algebraic structures that generalize both complete Boolean algebras and Krivine realizability algebras, in such a way that they allow one to express in the same setting the theory of forcing (in the sense of Cohen) and the theory of classical realizability (in the sense of Krivine). Besides, they have the nice feature of providing a common framework for the interpretation of both types and programs. The main defect of these structures is that they are deeply oriented towards the λ-calculus, and that they only allow a faithful interpretation of languages in call-by-name. To remedy this situation, we introduce two variants of implicative algebras: disjunctive algebras, centered on the “par” (⅋) connective of linear logic (but in a non-linear framework) and naturally adapted to languages in call-by-name; and conjunctive algebras, centered on the “tensor” (⊗) connective of linear logic and adapted to languages in call-by-value. Among other properties, we will see that disjunctive algebras are particular cases of implicative algebras and that conjunctive algebras can be obtained from disjunctive algebras by reversing the underlying order.
A majority automaton consists of applying, over the vertices of an undirected graph (with states 0 and 1), an operator that assigns to each vertex the most represented state among its neighbors. This rule is applied in parallel over all the nodes of the graph. When the graph is a regular lattice (in one or more dimensions), it is called a majority cellular automaton. In this seminar we will study the computational complexity of the following prediction problem: PRED: given an initial configuration and a specific site initially in state a (0 or 1), is there a time step T ≥ 1 at which this site changes state? The complexity of PRED is characterized by the possibility of finding an algorithm that gives the answer faster than running the simulation of the automaton on a serial computer; more precisely, by whether there is an algorithm running on a parallel computer in polylogarithmic time (class NC). Otherwise, the problem may be P-complete (among the most difficult problems in the class P of polynomial-time problems) or worse. We will apply results of this kind to the discrete Schelling segregation model. We will also present Sakoda's discrete social model.
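To make the rule concrete, here is a minimal sketch of one synchronous step of the majority rule and of the PRED question on a toy graph. Since the abstract does not specify tie-breaking, the convention that a vertex keeps its state on a tie is an assumption of the sketch:

```python
def majority_step(adj, state):
    """One synchronous step of the majority rule on an undirected graph.
    adj: dict vertex -> list of neighbours; state: dict vertex -> 0 or 1.
    Each vertex adopts the most represented state among its neighbours;
    on a tie it keeps its current state (a modelling assumption)."""
    new_state = {}
    for v, neighbours in adj.items():
        ones = sum(state[u] for u in neighbours)
        zeros = len(neighbours) - ones
        new_state[v] = 1 if ones > zeros else 0 if zeros > ones else state[v]
    return new_state

def pred(adj, state, site, T):
    """PRED: does `site` ever change state within T steps?"""
    a = state[site]
    for _ in range(T):
        state = majority_step(adj, state)
        if state[site] != a:
            return True
    return False

# 4-cycle: vertex 0 starts at 0 but both its neighbours are at 1.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(pred(adj, {0: 0, 1: 1, 2: 0, 3: 1}, site=0, T=5))  # True
```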
TBA
Complexity theory helps us predict and control the resources, usually time and space, consumed by programs. Static analysis based on specific syntactic criteria allows us to categorize some programs. A common approach is to observe the behavior of the program's data. For instance, the detection of non-size-increasing programs is based on a simple principle: counting memory allocations and deallocations, particularly in loops. This way, we can detect programs which compute within a constant amount of space. This method can easily be expressed as a property of control flow graphs. Because analyses of data behavior are syntactic, they can be done at compile time. Because they are only static, these analyses are not always computable, or not easily computable, and approximations are needed. The ``Size-Change Principle'' of C. S. Lee, N. D. Jones and A. M. Ben-Amram presented a method to predict termination by observing the evolution of resources, and a lot of research grew out of this theory. Until now, these implicit complexity techniques were essentially applied to more or less toy languages. This thesis applies implicit computational complexity methods to ``real life'' programs by manipulating the intermediate representation languages of compilers. This gives an accurate idea of the actual expressivity of these analyses and shows that the implicit computational complexity and compiler communities can fuel each other fruitfully. As we show in this thesis, the methods developed are quite general and open the way to several new applications.
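As a minimal sketch of the counting principle described above, assuming a hypothetical toy encoding of a control flow graph whose edges carry a net allocation weight (an illustration only, not the compiler intermediate representations actually used in the thesis): a fragment is non-size-increasing when no cycle of its control flow graph has positive net weight, which can be checked by Bellman-Ford-style positive-cycle detection.

```python
def has_size_increasing_loop(nodes, edges):
    """Toy version of the counting argument: each edge (u, v, w) carries
    a net allocation weight w (allocations minus deallocations on that
    control-flow edge).  The fragment is non-size-increasing iff no
    cycle has positive net weight, detected as a negative cycle for the
    negated weights.  Starting every distance at 0 plays the role of a
    virtual source reaching all nodes, so every cycle is visible."""
    dist = {v: 0 for v in nodes}
    for _ in range(len(nodes)):
        updated = False
        for u, v, w in edges:
            if dist[u] - w < dist[v]:   # relax with negated weight -w
                dist[v] = dist[u] - w
                updated = True
        if not updated:
            return False                # converged: no positive cycle
    return True                         # still relaxing: positive cycle

# Hypothetical loop: one allocation and one deallocation per iteration,
# so the back edge has net weight 0 and the fragment is size-safe.
nodes = ['entry', 'loop', 'exit']
edges = [('entry', 'loop', 0), ('loop', 'loop', 1 - 1), ('loop', 'exit', 0)]
print(has_size_increasing_loop(nodes, edges))  # False
```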
Let c be a rational number and consider the polynomial φ(X) = X^2 - c. We are interested in the cycles of φ in Q, and more precisely in one of Poonen's conjectures, which states that every cycle of φ in Q has length at most 3. In this talk we will discuss this conjecture and recall the known results. We will then use elementary arithmetic, combinatorial, and analytic tools to study particular cases of this problem. The tools used in this talk are accessible to second-year master's students.
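Since the iterates of φ stay in Q, short cycles can be searched for by direct exact iteration. A minimal sketch with Python's Fraction type (the height cutoff used to declare that an orbit escapes is a heuristic assumption of the sketch); for instance c = 1 has the 2-cycle {0, -1} and c = 29/16 has a 3-cycle:

```python
from fractions import Fraction

def find_cycle(c, x0, max_steps=60, max_height=10**8):
    """Iterate phi(X) = X^2 - c from x0 with exact rational arithmetic.
    Return the cycle reached, or None if the orbit escapes (numerator or
    denominator exceeds max_height, a heuristic cutoff)."""
    c, x, orbit = Fraction(c), Fraction(x0), []
    for _ in range(max_steps):
        if x in orbit:
            return orbit[orbit.index(x):]   # the periodic part
        orbit.append(x)
        x = x * x - c
        if max(abs(x.numerator), x.denominator) > max_height:
            return None                     # orbit escapes to infinity
    return None

print(find_cycle(1, 0))                               # 2-cycle [0, -1]
print(find_cycle(Fraction(29, 16), Fraction(-1, 4)))  # 3-cycle [-1/4, -7/4, 5/4]
```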
Differential linear logic (DiLL) emerged from the study of vector-space models of linear logic, in which proofs are interpreted by more or less formal power series. These are thus discrete models, where differentiation extracts the linear part of a power series. We seek a continuous model of classical differential linear logic: we need both a cartesian closed category of smooth functions and a monoidal closed category of reflexive spaces. We will detail a partial solution to this problem, via nuclear spaces and spaces of distributions. We will see how this model suggests a syntax separated into classes of formulas, each class corresponding to the solutions of a linear PDE. We will show that each class attached to a PDE whose solutions can be constructed behaves like an intermediate exponential, satisfying the exponential rules of differential linear logic. Time permitting, we will discuss joint work with Y. Dabrowski, in which we obtain several smooth models of differential linear logic by making the discriminating choice of interpreting the multiplicative disjunction of LL by Schwartz's epsilon product.
In the first part of this talk, I'll recall the construction of the category of games and innocent deterministic strategies introduced by Harmer, Hyland and Melliès [1]. Compared with the original method of Hyland and Ong [2], this method has two specific advantages. First, it isolates the structural conditions on certain games and strategies by introducing separate entities (the schedules) that concentrate most of the required proof work. Second, the method lays out a fairly clear combinatorial ‘recipe’ that can be replicated in other settings. That will be the goal of the second part of the talk, which will develop a 2-categorical and sheaf-theoretic formulation of non-deterministic innocent strategies, based on this ‘recipe’. In the course of this construction, I'll outline specific properties that give us a better understanding of both deterministic and non-deterministic strategies.
[1] Categorical combinatorics for innocent strategies, Harmer, Hyland, Melliès, LICS 2007.
[2] On full abstraction for PCF I, II, and III, Hyland, Ong, Information and Computation, 2000.