We will show how to program on streams using the ultrafilter axiom, which in some sense corresponds to a limited form of concurrent programming for computing an infinite stream.
To every uniquely ergodic infinite word w, viewed as an element of a subshift, we associate a real number nu(w) between 0 and 1, equal to the measure of the set of words less than or equal to w. Lopez and Narbel showed (2016) that if the complexity of the word is low enough, the shift on words corresponds to an interval exchange on these associated numbers. We show that if the word w is a fixed point of a ``good'' morphism, and in particular of a binary morphism, then nu(w) and the values of nu for all shifts of w can be obtained via a morphism on the real numbers.
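A rough numerical sketch, not the morphism-based method of the talk: assuming unique ergodicity, nu at a shift of w can be approximated by the proportion of shifts of a long prefix that are lexicographically below it, since the unique invariant measure is given by factor frequencies. The Fibonacci word, prefix length and comparison window below are illustrative choices.

def fibonacci_word(n):
    """Prefix of length n of the Fibonacci word, fixed point of 0 -> 01, 1 -> 0."""
    w = "0"
    while len(w) < n:
        w = w.replace("0", "a").replace("1", "0").replace("a", "01")
    return w[:n]

def nu_approx(w, i, window=200):
    """Approximate nu(sigma^i(w)) as the fraction of shifts of w that are
    lexicographically <= the shift at position i, compared on a finite window."""
    n = len(w) - window
    target = w[i:i + window]
    return sum(1 for j in range(n) if w[j:j + window] <= target) / n

w = fibonacci_word(20000)
print([round(nu_approx(w, i), 3) for i in range(5)])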
A new, implicit definition of the gradient field of a discrete image is proposed. The differentiation operation, which maps an image to its gradient field defined on a twice-finer grid, is non-linear and can be seen as the inverse of the (linear) integration operation. The total variation of an image is then simply the l1 norm of the amplitude of its gradient field. This new definition of the total variation yields sharp contours and has better isotropy properties than the classical definition.
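For comparison only, a minimal sketch of the classical discrete total variation (forward differences, Euclidean amplitude), i.e. the baseline that the new gradient-field definition is measured against; the test image and sizes are illustrative.

import numpy as np

def classical_tv(img):
    """Sum over pixels of the Euclidean amplitude of the forward-difference gradient."""
    dx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal forward differences
    dy = np.diff(img, axis=0, append=img[-1:, :])  # vertical forward differences
    return np.sqrt(dx ** 2 + dy ** 2).sum()

# A sharp vertical edge of height 1 on an n x n image has total variation ~ n.
n = 64
edge = np.zeros((n, n))
edge[:, n // 2:] = 1.0
print(classical_tv(edge))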
TBA
I will present an algebraic approach to stochastic graph-rewriting. Rules are seen as particular elements of an algebra of “diagrams”: the diagram algebra D. Diagrams can be thought of as formal computational traces represented in partial time. They can be evaluated to normal diagrams (each corresponding to a rule) and generate an associative unital non-commutative algebra of rules: the rule algebra R. Evaluation becomes a morphism of unital associative algebras which maps general diagrams in D to normal ones in R. In this algebraic reformulation, the usual distinctions between graph observables (real-valued maps on the set of graphs defined by counting subgraphs) and rules disappear. Actual graph-rewriting is recovered as a canonical representation of the rule algebra as linear operators over the vector space generated by (isomorphism classes of) finite graphs. This shift of point of view, away from the canonical representation and towards the rule algebra itself, has unexpected consequences. We find that natural variants of the evaluation morphism give rise to concepts of graph transformation hitherto not considered. This is joint work with N. Behr and V. Danos.
Operational Techniques (Kripke Logical Relations and Environmental/Open/Parametric Bisimulations) have matured enough to become formidable tools for proving contextual equivalence of effectful higher-order programs. However, it is not yet possible to automate such proofs -- the problem being of course undecidable in general. In this talk, we will see how to take the best of these techniques to design an automatic procedure that can prove many non-trivial examples of equivalence, including most of the examples from the literature. The talk will describe the main ingredients of this method: - Symbolic reduction to evaluate programs, - Transition systems of heap invariants, as used with Kripke Logical Relations, - Temporal logic to abstract over the control flow between the program and its environment, - Circular proofs to automate the reasoning over guarded recursive predicates.
Unification is a central operation in the construction of a range of computational logic systems based on first-order and higher-order logics. First-order unification has a number of properties that dominate the way it is incorporated within such systems. In particular, first-order unification is decidable, unary, and can be performed on untyped term structures. None of these three properties hold for full higher-order unification: unification is undecidable, unifiers can be incomparable, and term-level typing can dominate the search for unifiers. The so-called pattern subset of higher-order unification was designed to be a small extension to first-order unification that respected the basic laws governing λ-binding (the equalities of α-, β-, and η-conversion) but which also satisfied those three properties. While the pattern fragment of higher-order unification has been popular in various implemented systems and in various theoretical considerations, it is too weak for a number of applications. In this paper, we define an extension of pattern unification that is motivated by some existing applications and which satisfies these three properties. The main idea behind this extension is that the arguments of a higher-order free variable can be more than just distinct bound variables: they can also be terms constructed from (sufficient numbers of) such variables using term constructors, where no argument is a subterm of any other argument. We show that this extension of pattern unification satisfies the three properties mentioned above.
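As a point of reference, not the paper's algorithm: a minimal sketch of first-order unification, the decidable, unary base case that the pattern fragment and its extension build on. The term encoding and the example are illustrative.

def walk(t, subst):
    """Follow variable bindings in the substitution."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(x, t, subst):
    """Occurs check: does variable x appear in term t under subst?"""
    t = walk(t, subst)
    if t[0] == 'var':
        return t[1] == x
    return any(occurs(x, a, subst) for a in t[2])

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None if none exists.
    Terms are ('var', name) or ('app', functor, [arguments])."""
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if s[0] == 'var':
        return None if occurs(s[1], t, subst) else {**subst, s[1]: t}
    if t[0] == 'var':
        return unify(t, s, subst)
    if s[1] != t[1] or len(s[2]) != len(t[2]):
        return None
    for a, b in zip(s[2], t[2]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# f(X, g(Y)) =? f(a, g(X))  yields  X -> a, Y -> a
X, Y = ('var', 'X'), ('var', 'Y')
a = ('app', 'a', [])
print(unify(('app', 'f', [X, ('app', 'g', [Y])]),
            ('app', 'f', [a, ('app', 'g', [X])])))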
Thomas Streicher has reformulated Krivine's notion of classical realizability in terms of abstract Krivine structures and showed that one can build a tripos from any such structure. These triposes are called Krivine triposes and form a subclass of relative realizability triposes in the sense of van Oosten and Hofstra. In this talk, I will present a characterization of those Krivine triposes: they are exactly the boolean subtriposes of relative realizability triposes. I will also talk about a concrete construction of non-localic Krivine triposes. These results are from my master's thesis, supervised by Jaap van Oosten.
At POPL'96, Hughes, Pareto and Sabry presented an approach to the termination of closed ML-like programs based on type-checking and the abstract interpretation of a semantic notion of size. This approach was then extended by several authors to open terms and more complex type systems. In the first part, I will show how it can be extended in other directions: arbitrary user-defined notions of size and rewriting-based function definitions. In the second part, I will show how the decidability of type-checking (with size-induced subtyping) can be reduced to the existence, in the size algebra, of a smallest solution for any solvable set of inequalities, and establish this for the successor algebra (which is enough to subsume Coq's termination checker, for instance).
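A hedged sketch, not taken from the talk: in the successor algebra, size inequalities can be normalised to the forms x >= y + k and x >= c, and the smallest solution over the naturals, when it exists, is the least fixed point of a relaxation loop, much like a longest-path computation. The constraint format and example are illustrative.

def smallest_solution(variables, lower_bounds, edges):
    """lower_bounds: {x: c} meaning x >= c; edges: list of (x, y, k) meaning x >= y + k."""
    val = {x: lower_bounds.get(x, 0) for x in variables}
    for _ in range(len(variables)):
        changed = False
        for x, y, k in edges:
            if val[x] < val[y] + k:
                val[x] = val[y] + k   # necessary increase, so least values are preserved
                changed = True
        if not changed:
            return val                # least solution reached
    return None                       # positive cycle: no smallest finite solution

# x >= y + 1, y >= 2, z >= x  has least solution x = 3, y = 2, z = 3.
print(smallest_solution(['x', 'y', 'z'], {'y': 2},
                        [('x', 'y', 1), ('z', 'x', 0)]))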
In this talk I will present some recent work, with H. Michalewski, on the study of an extension of MSO on infinite trees with the so-called ``measure quantifier''. This kind of research is motivated, as I will discuss, by a long-standing open problem in the field of verification of probabilistic programs. I will state a negative (i.e., undecidability) result but also present some interesting (currently) open problems.
We introduce location graphs, a process calculus framework for modeling dynamic component systems. A key aspect of the framework is its ability to model different forms of component composition, dynamic component structures with sharing, and different forms of separation and encapsulation constraints. We present the operational semantics of the framework and hint at a type system for location graphs.
Directed algebraic topology is a young subject which takes inspiration from homotopy theory and concurrent processes. Unlike algebraic topology, it studies situations in which paths are, in general, not invertible. For this reason directed algebraic topology is particularly suitable for modelling non-reversible phenomena such as concurrent processes, whose executions cannot be reversed. In this talk, based on [1], I start from concurrent processes and show how directed algebraic topology provides a natural model for them. [1] Martin Raussen, ``Contributions to Directed Algebraic Topology: with inspirations from concurrency theory'', Doctoral Thesis, Department of Mathematical Sciences, Aalborg University.
Playgrounds are double categories with additional structure and properties. The few known examples of playgrounds rely on similar constructions. I will present a generalisation of this construction of double categories, starting from simpler data that we call a ``signature''. Under certain hypotheses on the signature, one can prove a crucial property of playgrounds: the fibration property. We will apply this construction to build a playground for Hyland-Ong games, and compare the resulting structure with the classical structures for these games.
In process calculi (CCS, pi-calculus), up-to bisimulations are proof techniques used to establish equivalences between processes [1]. An alternative, but very similar, method consists in showing that two processes are solutions of the same equation [2]. We will present such a technique, based on [3], which relies on the non-divergence of a minimal solution of the equation; it illustrates the correspondence between up-to bisimulations and uniqueness of solutions, as well as the properties expected of a context in an abstract setting (LTS). [1] Sangiorgi, Davide, and David Walker. The pi-calculus: a Theory of Mobile Processes. Cambridge University Press, 2003. [2] Sangiorgi, Davide. ``Equations, contractions, and unique solutions.'' POPL 2015. [3] Roscoe, A. W. ``Topology, computer science, and the mathematics of convergence.'' Topology and Category Theory in Computer Science. Oxford University Press, Inc., 1991.
Monotonicity is a fundamental notion in mathematics and computation. For usual real-valued functions R → R it simply corresponds to the notion that a function is increasing (or decreasing) in its argument; however, this can be parametrised by any partially ordered domain and codomain we wish. In computation we deal with programs that compute Boolean functions, {0,1} → {0,1}. Restricting to increasing functions over this structure can be seen as prohibiting the use of negation in a program; for instance, monotone Boolean functions are computed by Boolean circuits without NOT gates. The idea of restricting negation scales to other models of computation, and for some important classes of functions the formulation is naturally robust, not depending on the particular model at hand, e.g. for the polynomial-time functions. Monotone computational problems abound in practice, e.g. sorting a string and detecting cliques in graphs, and 'nonuniform' monotone models of computation, such as monotone circuits, have been fundamental objects of study in computational complexity for decades.
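A small illustration, not taken from the talk: sorting a Boolean string is monotone and can be computed with AND/OR comparators only (AND = min, OR = max on bits), and a brute-force check confirms that each output wire is a monotone Boolean function.

from itertools import product

def monotone_sort(bits):
    """Sort a tuple of bits into descending order using only AND/OR comparators."""
    bits = list(bits)
    n = len(bits)
    for _ in range(n):
        for i in range(n - 1):
            hi, lo = bits[i] | bits[i + 1], bits[i] & bits[i + 1]  # OR = max, AND = min
            bits[i], bits[i + 1] = hi, lo
    return tuple(bits)

def is_monotone(f, n):
    """Brute force: flipping any input bit from 0 to 1 never decreases the output."""
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0 and f(x[:i] + (1,) + x[i + 1:]) < f(x):
                return False
    return True

print(monotone_sort((0, 1, 1, 0)))                                                # (1, 1, 0, 0)
print(all(is_monotone(lambda x, j=j: monotone_sort(x)[j], 4) for j in range(4)))  # True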
In this talk I will propose a project that develops logical characterisations of monotone complexity classes, via a proof theoretic approach. Namely, the project will identify theories of arithmetic whose formally representable functions coincide with certain monotone classes, and also develop fundamental recursion-theoretic programming languages in which to extract the monotone functions themselves. In particular the project focusses on the role of structural proof theory, i.e. the duplication and erasure of formulae, in controlling monotonicity.
TBA
A theorem of Gao, Jackson and Seward, originally conjectured to be false by Glasner and Uspenskij, asserts that every countable group admits a strongly aperiodic subshift over a 2-symbol alphabet. Their proof consists of a quite technical construction. We give a shorter proof of their result using the asymmetric version of the Lovász Local Lemma, which also allows us to prove that this subshift is effectively closed (with an oracle for the word problem of the group) in the case of a finitely generated group. This is joint work with Nathalie Aubrun and Stéphan Thomassé.
Joint work with Arkady Leiderman, Ben Gurion University, Beer-Sheva, Israel. A topological space $L$ is an orderable space when there exists a total order $\leq$ on $L$ such that the topology on $L$ is generated by the (not necessarily bounded) open intervals with endpoints in $L$ (examples: $N$, $Z$, $Q$, $R$; counterexample: $C$). A space $L$ is hereditarily orderable if every continuous image of $L$ is orderable. We prove the following result: if $L$ is a hereditarily orderable space, then $L$ is a countable compact space (i.e. a compact subspace of $R$, which is therefore homeomorphic to a countable ordinal with a greatest element). In fact we prove a more general result.
Bisimulation is used in concurrency theory as a proof method for establishing behavioural equivalence of processes. Up-to techniques are a means of enhancing this proof method. In this talk I will argue that up-to techniques can be of general use, and discuss how this is supported by our coalgebraic framework of up-to techniques, which provides enhanced proof techniques for a wide variety of systems and a wide variety of properties. I will discuss our recent work on extending this framework to cover equivalences for systems that involve silent transitions.
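For context, not from the talk: a minimal sketch of plain bisimilarity on a finite LTS, computed as a greatest fixed point; up-to techniques enhance exactly this kind of proof by allowing relations smaller than full bisimulations. The LTS encoding and example processes are illustrative.

def bisimilarity(states, trans):
    """trans: set of (source, label, target) triples. Returns the bisimilarity relation."""
    def moves(p):
        return {(a, q) for (s, a, q) in trans if s == p}

    rel = {(p, q) for p in states for q in states}
    while True:
        def ok(p, q):
            # every move of p must be matched by an equally labelled move of q into a related state
            return all(any(b == a and (p2, q2) in rel for (b, q2) in moves(q))
                       for (a, p2) in moves(p))
        new = {(p, q) for (p, q) in rel if ok(p, q) and ok(q, p)}
        if new == rel:
            return rel
        rel = new

# a.b.0 + a.c.0  is not bisimilar to  a.(b.0 + c.0)
trans = {('p', 'a', 'p1'), ('p', 'a', 'p2'), ('p1', 'b', '0'), ('p2', 'c', '0'),
         ('q', 'a', 'q1'), ('q1', 'b', '0'), ('q1', 'c', '0')}
states = {'p', 'p1', 'p2', 'q', 'q1', '0'}
print(('p', 'q') in bisimilarity(states, trans))   # False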
The categorical treatment of probability theory formulated by Giry and Lawvere, based on the probability monad G, makes it possible to handle higher-order probabilities elegantly. In this setting, I will present a reconstruction of the Dirichlet process, an object widely used in Bayesian inference. This process takes the form of a family of measures in GG(X) indexed by M(X), where X is a Polish space and M(X) is the space of non-zero finite measures on X. Its construction relies on two tools whose interest goes beyond the Dirichlet construction itself. The first is a functorial version of Bochner's extension theorem adapted to the Polish setting, which allows a projective family of probabilities to be extended to a limit probability. The second is a ``decomposition'' method which associates to every zero-dimensional Polish space a projective family of finite spaces whose projective limit is a compactification of the original space. Combining these two tools with well-known properties of Dirichlet distributions on finite spaces allows us to reconstruct the Dirichlet process as a natural transformation from M to GG. This construction improves on previous ones in that it proves the continuity of the Dirichlet process in its parameters.
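For orientation only: a sketch of the standard stick-breaking presentation of the Dirichlet process (Sethuraman), not the projective-limit construction described in the talk; the concentration parameter, base measure and truncation level are illustrative choices.

import numpy as np

def stick_breaking_dp(alpha, base_sampler, n_atoms=1000, rng=None):
    """Atoms and weights of a truncated sample from the Dirichlet process DP(alpha, H)."""
    rng = rng or np.random.default_rng(0)
    betas = rng.beta(1.0, alpha, size=n_atoms)                              # stick proportions
    weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))  # broken-stick lengths
    atoms = base_sampler(rng, n_atoms)                                       # i.i.d. draws from H
    return atoms, weights

atoms, weights = stick_breaking_dp(alpha=2.0,
                                   base_sampler=lambda rng, n: rng.normal(size=n))
print(weights.sum())        # close to 1 for a large enough truncation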