The Lean Theorem Prover

Lean is a new player in the field of proof assistants for Homotopy Type Theory. It is being developed by Leonardo de Moura at Microsoft Research, and it will remain under active development for the foreseeable future. The code is open source and available on GitHub.

You can install it on Windows, OS X or Linux. It comes with a useful Emacs mode offering syntax highlighting, on-the-fly syntax checking, autocompletion and many other features. There is also an online version of Lean which you can try in your browser. The online version is quite a bit slower than the native version and takes a little while to load, but it is still useful for trying out small code snippets. You are invited to test the code snippets in this post in the online version. You can run code by pressing Shift+Enter.
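If you just want a quick sanity check that the online editor is working, here is a minimal snippet you could paste in (a generic example of mine, not taken from this post; it only assumes the default library that the editor loads):

-- a tiny sanity check for the online editor: define a function, then inspect and evaluate it
definition double (n : nat) : nat := n + n

check double    -- ask Lean for the type of double
eval double 21  -- ask Lean to evaluate double 21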

In this post I’ll first say more about the Lean proof assistant, and then discuss the design considerations behind Lean’s HoTT library (Lean has two libraries: the standard library and the HoTT library). I will also cover our approach to higher inductive types. Since Lean is not yet mature, things mentioned below may change in the future.

Update January 2017: the newest version of Lean currently doesn’t support HoTT, but there is a frozen version which does support HoTT. The newest version is available here, and the frozen version is available here. To use the frozen version, you will have to compile it from the source code yourself.

Continue reading

Posted in Code, Higher Inductive Types, Programming | 48 Comments

Real-cohesive homotopy type theory

Two new papers have recently appeared online:

Both of them have fairly chatty introductions, so I’ll try to restrain myself from pontificating at length here about their contents. Just go read the introductions. Instead I’ll say a few words about how these papers came about and how they are related to each other.

Continue reading

Posted in Applications, Foundations, Paper | 12 Comments

A new class of models for the univalence axiom

First of all, in case anyone missed it, Chris Kapulkin recently wrote a guest post at the n-category cafe summarizing the current state of the art regarding “homotopy type theory as the internal language of higher categories”.

I’ve just posted a preprint which improves that state a bit, providing a version of “Lang(C)” containing univalent strict universes for a wider class of (∞,1)-toposes C:

Continue reading

Posted in Models, Paper, Univalence | 4 Comments

Constructing the Propositional Truncation using Nonrecursive HITs

In this post, I want to talk about a construction of the propositional truncation of an arbitrary type using only non-recursive HITs. The whole construction is formalized in the new proof assistant Lean, and is available on GitHub. I’ll write another blog post explaining more about Lean in August.

Continue reading

Posted in Code, Higher Inductive Types | 25 Comments

Universal Properties of Truncations

A few days ago, at the HoTT/UF workshop in Warsaw (which was a great event!), I talked about functions out of truncations. I focused on the propositional truncation \Vert - \Vert, and I am writing this blog post for anyone who could not attend the talk but is interested nevertheless. My slides and the arXiv paper are also available.
The topic of my talk can be summarised as

Question: What is the type \Vert A \Vert \to B ?

This question is very easy if B is propositional, because then we have

Partial answer (1): If B is propositional, then (\Vert A \Vert \to B) \simeq (A \to B)

by the usual universal property, with the equivalence given by the canonical map. Without an assumption on B, it is much harder.
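To make “the canonical map” concrete: it is simply precomposition with the point constructor of the truncation. Here is a hedged Lean-style sketch of mine, written generically in a type TruncA standing for \Vert A \Vert with a map tr standing for its point constructor, so as not to depend on any particular library interface:

-- precomposition with tr : A → TruncA; for TruncA = ∥A∥ this is the canonical map
-- (∥A∥ → B) → (A → B) from partial answer (1)
definition precompose {A B TruncA : Type} (tr : A → TruncA) : (TruncA → B) → (A → B) :=
λ g a, g (tr a)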

The rest of this post works through a special case which already exhibits some of the important ideas. It is a bit lengthy, so in case you don’t have time to read everything, here is the

Main result (general universal property of the propositional truncation).
In a dependent type theory with at least \mathbf{1}, \Sigma-, \Pi-, and identity types, which furthermore has Reedy \omega^{\mathsf{op}}-limits (“infinite \Sigma-types”), we can define the type of coherently constant functions A \xrightarrow {\omega} B as the type of natural transformations between type-valued presheaves. If the type theory has propositional truncations, we can construct a canonical map from \Vert A \Vert \to B to A \xrightarrow {\omega} B. If the type theory further has function extensionality, then this canonical map is an equivalence.
If B is known to be n-truncated for some fixed n, we can drop the assumption of Reedy \omega^{\mathsf{op}}-limits and perform the whole construction in “standard syntactical” HoTT. This describes how functions \Vert A \Vert \to B can be defined if B is not known to be propositional, and it streamlines the usual approach of finding a propositional Q with A \to Q and Q \to B.


Here comes the long version of this blog post. So, what is a function g : \Vert A \Vert \to B? If we think of elements of \Vert A \Vert as anonymous inhabitants of A, we could expect that such a g is “the same” as a function f : A \to B which “cannot look at its input”. But then, how can we specify internally what it means to “not look at its input”? A first attempt is to require that f is weakly constant, \mathsf{wconst}_f :\equiv \Pi_{x,y:A} f(x) = f(y). Indeed, it has been shown:

Partial answer (2): If B is a set (h-set, 0-truncated), then (\Vert A \Vert \to B) \simeq (\Sigma (f : A \to B). \mathsf{wconst}_f), where the function from left to right is the canonical one.

It is not surprising that we still need this strong condition on B if we want weak constancy to be a sufficient answer: just throwing in a bunch of paths which might or might not “fit together” (we simply don’t know anything about them) seems wrong. Indeed, from Mike’s recent construction, we know that the statement becomes false (it contradicts univalence) without this requirement.

Given a function f : A \to B and a proof c : \mathsf{wconst}_f, we can try to fix the problem that the paths given by c “do not fit together” by throwing in a coherence proof, i.e. an element of \mathsf{coh}_{f,c} :\equiv \Pi_{x,y,z:A} c(x,y) \cdot c(y,z) = c(x,z). We should expect that this will introduce its own problems and thus not fix everything, but at least, we get:
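To make these two definitions concrete, here is how they might be written in Lean-style syntax (a hedged sketch of mine, not taken from any formalization; ⬝ denotes path concatenation):

-- weak constancy and the first coherence condition, written as types
definition wconst {A B : Type} (f : A → B) : Type :=
Π (x y : A), f x = f y

definition coh {A B : Type} (f : A → B) (c : wconst f) : Type :=
Π (x y z : A), c x y ⬝ c y z = c x z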

Partial answer (3): If B is 1-truncated, then (\Vert A \Vert \to B) \simeq (\Sigma (f : A \to B). \Sigma (c:\mathsf{wconst}_f). \mathsf{coh}_{f,c}), again given by a canonical map from left to right.

In my talk, I gave a proof of partial answer (3) via “reasoning with equivalences” (slides, p. 5 ff.). I start by assuming a point a_0 : A. The type B is then equivalent to the following (explanation below):

\Sigma (f_1 : B).
\Sigma (f : A \to B). \Sigma (c_1 : \Pi_{a:A} f(a) = f_1).
\Sigma (c : \mathsf{wconst}_f). \Sigma (d_1 : \Pi_{a^1, a^2:A} c(a^1,a^2) \cdot c_1(a^2) = c_1(a^1)).
\Sigma (c_2 : f(a_0) = f_1). \Sigma (d_3 : c(a_0,a_0) \cdot c_1(a_0) = c_2).
\Sigma (d : \mathsf{coh}_{f,c}).
\Sigma (d_2 : \Pi_{a:A} c(a_0,a) \cdot c_1(a) = c_2).
\mathbf{1}

In the above long nested \Sigma-type, the first line is just the B that we start with. The second line adds two factors/\Sigma-components which, by function extensionality, cancel each other out (they form a singleton). The same is true for the pair in the third line, and for the pair in the fourth line. Lines five and six look different, but they are not really; it’s just that their “partners” (which would complete them to singletons) are hard to write down and, at the same time, contractible; thus, they are omitted. As B is assumed to be a 1-type, it is easy to see that the \Sigma-components in lines five and six are both propositional, and it’s also easy to see that the other \Sigma-components imply that they are both inhabited. Of course, the seventh line does nothing.

Simply by re-ordering the \Sigma-components in the above type, and not doing anything else, we get:

\Sigma (f : A \to B). \Sigma (c : \mathsf{wconst}_f). \Sigma (d : \mathsf{coh}_{f,c}).
\Sigma (f_1 : B). \Sigma (c_2 : f(a_0) = f_1).
\Sigma (c_1 : \Pi_{a:A} f(a) = f_1). \Sigma(d_2 : \Pi_{a:A} c(a_0,a) \cdot c_1(a) = c_2).
\Sigma (d_1 : \Pi_{a^1, a^2:A} c(a^1,a^2) \cdot c_1(a^2) = c_1(a^1)).
\Sigma (d_3 : c(a_0,a_0) \cdot c_1(a_0) = c_2).
\mathbf{1}

By the same argument as before, the components in the pair in line two cancel each other out (i.e. the pair is contractible). The same is true for line three. Lines four and five are propositional as B is 1-truncated, but easily seen to be inhabited. Thus, the whole nested \Sigma-type is equivalent to \Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \mathsf{coh}_{f,c}.
In summary, we have constructed an equivalence B \simeq \Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \mathsf{coh}_{f,c}. By going through the construction step by step, we see that the function part of the equivalence (the map from left to right) is the canonical one (let’s write \mathsf{canon}_1), mapping b : B to the triple (\lambda \_. b, \lambda \_ \_ . \mathsf{refl}_b , \lambda \_ \_ \_ . \mathsf{refl}_{\mathsf{refl}_b}). At the start of the construction, we assumed a point a_0 : A. But, and this is the crucial observation, the function \mathsf{canon}_1 does not depend on a_0! So, if assuming A implies that \mathsf{canon}_1 is an equivalence, then assuming \Vert A \Vert is already enough. Thus, we have \Vert A \Vert \to \big(B \simeq \Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \mathsf{coh}_{f,c}\big). We can apply \Vert A \Vert \to - to both sides of the equivalence, and on the right-hand side, we can use the usual “distributivity of \Sigma and \Pi (or \to)” to move the \Vert A \Vert \to into each \Sigma-component; but in each \Sigma-component, the \Vert A \Vert gets eaten by an A (we have that \Vert A \Vert \to \Pi_{a : A}\ldots and \Pi_{a:A} \ldots are equivalent), which gives us the claimed result.
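For concreteness, here is a Lean-style sketch of this canonical map (my own rendering, hedged; it is not taken from the formalization):

-- the canonical map canon₁: b is sent to the constant function at b together with
-- trivial constancy and coherence proofs
definition canon {A B : Type} (b : B) :
  Σ (f : A → B), Σ (c : Π (x y : A), f x = f y), Π (x y z : A), c x y ⬝ c y z = c x z :=
sigma.mk (λ a, b) (sigma.mk (λ x y, idp) (λ x y z, idp))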

The strategy we have used is very minimalistic. It does not need any “technology”: we just add and remove contractible pairs. For this “expanding and contracting” strategy, we have really only used very basic components of type theory (\Sigma, \Pi, identity types with function extensionality, and propositional truncation, but even that only at the very end). If we want to weaken the requirement on B by one more level (i.e. if we want to derive a “universal property” which characterises \Vert A \Vert \to B when B is 2-truncated), we have to add one more coherence condition (which ensures that the \mathsf{coh}_{f,c}-component is well-behaved). The core idea is that this expanding and contracting strategy can be done for any truncation level of B, even if B is not known to be n-truncated at all, in which case the tower of conditions becomes infinite. I call an element of this “infinite \Sigma-type” a coherently constant function from A to B.

The idea is that the \Sigma-components (f : A \to B) and (c : \mathsf{wconst}_f) and (d : \mathsf{coh}_{f,c}) we used in the special case can be seen as components of a natural transformation between [2]-truncated semi-simplicial types. By semi-simplicial type, I mean a Reedy fibrant diagram \Delta_+^{\mathsf{op}} \to \mathcal{C}. By [2]-truncated semi-simplicial types, I mean the “initial segments” of semi-simplicial types, i.e. the first three components X_0, X_1, X_2, as described here.
The first diagram we need is what I call the “trivial semi-simplicial type” over A, written T \! A, and its initial part is given by T \! A_{[0]} :\equiv A, T \! A_{[1]} :\equiv A \times A, and T \! A_{[2]} :\equiv A \times A \times A. If we use the “fibred” notation (i.e. give the fibres over the matching objects), this would be T \! A_{[0]} :\equiv A, T \! A_{[1]}(a_1,a_2) :\equiv \mathbf{1}, T \! A_{[2]}(a_1,a_2,a_3,\_,\_,\_) :\equiv \mathbf{1}. In the terminology of simplicial sets, this is the [0]-coskeleton of [the diagram that is constantly] A.
The second important diagram, which I call the equality semi-simplicial type over B, is written E \! B. One potential definition of the lowest levels would be (I only give the fibres this time): E \! B_{[0]} :\equiv B, E \! B_{[1]}(b_1,b_2) :\equiv b_1 = b_2, E \! B_{[2]}(b_1,b_2,b_3,p_{12},p_{23},p_{13}) :\equiv p_{12} \cdot p_{23} = p_{13}. This is a fibrant replacement of B. (We are lucky because T \! A is already fibrant; otherwise, we would have had to construct a fibrant replacement of it as well.)
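Spelled out in Lean-style syntax, the fibres of E \! B up to level [2] would look as follows (a hedged sketch; the names EB0, EB1, EB2 are mine):

-- the fibres of the equality semi-simplicial type over B, up to level [2]
-- (the corresponding fibres of T A at levels [1] and [2] are simply the unit type)
definition EB0 (B : Type) : Type := B

definition EB1 (B : Type) (b1 b2 : B) : Type := b1 = b2

definition EB2 (B : Type) (b1 b2 b3 : B)
  (p12 : b1 = b2) (p23 : b2 = b3) (p13 : b1 = b3) : Type :=
p12 ⬝ p23 = p13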
If we now check what a (strict) natural transformation between T \! A and E \! B (viewed as diagrams over the full subcategory of \Delta_+^{\mathsf{op}} with objects [0],[1],[2]) is, it is easy to see that the [0]-component is exactly a map f : A \to B, the [1]-component is exactly a proof c : \mathsf{wconst}_f, and the [2]-component is exactly a proof d : \mathsf{coh}_{f,c}. (The type of such natural transformations is given by the limit of the exponential of T \! A and E \! B.)

However, even with this idea, and with the above proof for the case that B is 1-truncated, generalising the proof to the infinite case with infinitely many coherence conditions requires further ideas and a couple of technical steps which, for me, have been quite difficult. It has therefore taken me a very long time to write this up cleanly in an article. I am very grateful to the reviewers (this work is going to appear in the post-proceedings of TYPES’14) and to Steve (my thesis examiner) for many suggestions and interesting connections they have pointed out. In particular, one reviewer remarked that the main result (the equivalence of coherently constant functions A \xrightarrow{\omega} B and maps out of the truncation \Vert A \Vert \to B) is a type-theoretic version of Proposition 6.2.3.4 in Lurie’s Higher Topos Theory; and Vladimir pointed out the connection to the work of his former student Alexander Vishik. Unfortunately, both are among the many things that I have yet to understand, but there is certainly a lot to explore. If you see further connections which I have not mentioned here or in the paper, this is very likely because I am not aware of them, and I’d be happy to hear about them!

Posted in Foundations, Homotopy Theory, Models, Paper, Talk | 3 Comments

Modules for Modalities

As defined in chapter 7 of the book, a modality is an operation on types that behaves somewhat like the n-truncation. Specifically, it consists of a collection of types, called the modal ones, together with a way to turn any type A into a modal one \bigcirc A, with a universal property making the modal types a reflective subcategory (or more precisely, sub-(∞,1)-category) of the category of all types. Moreover, the modal types are assumed to be closed under Σs (closure under some other type formers like Π is automatic).

We called them “modalities” because under propositions-as-(some)-types, they look like the classical notion of a modal operator in logic: a unary operation on propositions. Since these are “monadic” modalities — in particular, we have A \to \bigcirc A rather than the other way around — they are most closely analogous to the “possibility” modality of classical modal logic. But since they act on all types, not just mere-propositions, for emphasis we might call them higher modalities.

The example of n-truncation shows that there are interesting new modalities in a homotopy world; some other examples are mentioned in the exercises of chapter 7. Moreover, most of the basic theory of n-truncation — essentially, any aspect of it that involves only one value of n — is actually true for any modality.

Over the last year, I’ve implemented this basic theory of modalities in the HoTT Coq library. In the process, I found one minor error in the book, and learned a lot about modalities and about Coq. For instance, my post about universal properties grew out of looking for the best way to define modalities.

In this post, I want to talk about something else I learned about while formalizing modalities: Coq’s modules, and in particular their universe-polymorphic nature. (This post is somewhat overdue in that everything I’ll be talking about has been implemented for several months; I just haven’t had time to blog about it until now.)

Continue reading

Posted in Code, Programming | 19 Comments

Not every weakly constant function is conditionally constant

As discussed at length on the mailing list some time ago, there are several different things that one might mean by saying that a function f:A\to B is “constant”. Here is my preferred terminology:

  • f is constant if we have b:B such that f(a)=b for all a:A.
    This is equivalent to saying that f factors through \mathbf{1}.
  • f is conditionally constant if it factors through \Vert A \Vert.
  • f is weakly constant if for all a_1,a_2:A we have f(a_1)=f(a_2).

In particular, the identity function of \emptyset is conditionally constant, but not constant. I don’t have a problem with that; getting definitions right often means that they behave slightly oddly on the empty set (until we get used to it). The term “weakly constant” was introduced by Kraus, Escardo, Coquand, and Altenkirch, although they immediately dropped the adjective for the rest of their paper, which I will not do. The term “conditionally constant” is intended to indicate that f is, or would be, constant, as soon as its domain is inhabited.
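To make the three notions completely precise, here is a hedged Lean-style sketch (the definition names are mine, and I parameterize over a type TruncA with a map tr : A \to TruncA standing for \Vert A \Vert and its point constructor, so as not to depend on a particular library interface):

-- the three notions of constancy from the list above
definition constant {A B : Type} (f : A → B) : Type :=
Σ (b : B), Π (a : A), f a = b

-- "conditionally constant": f factors (pointwise) through the truncation
definition cconst {A B TruncA : Type} (tr : A → TruncA) (f : A → B) : Type :=
Σ (g : TruncA → B), Π (a : A), f a = g (tr a)

definition wconst {A B : Type} (f : A → B) : Type :=
Π (a1 a2 : A), f a1 = f a2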

It’s obvious that every constant function is conditionally constant, and \mathrm{id}_{\emptyset} shows the converse fails. Similarly, it’s easy to show that conditionally constant implies weakly constant. KECA showed that the converse holds if either B is a set or \Vert A \Vert \to A, and conjectured that it fails in general. In this post I will describe a counterexample proving this conjecture.

Continue reading

Posted in Univalence | 10 Comments