Constructing the Propositional Truncation using Nonrecursive HITs

In this post, I want to talk about a construction of the propositional truncation of an arbitrary type using only non-recursive HITs. The whole construction is formalized in the new proof assistant Lean and available on GitHub. I’ll write another blog post explaining more about Lean in August.


Universal Properties of Truncations

Some days ago, at the HoTT/UF workshop in Warsaw (which was a great event!), I talked about functions out of truncations. I focussed on the propositional truncation \Vert - \Vert, and I want to write this blog post for anyone who could not attend the talk but is interested nevertheless. There are also my slides and the arXiv paper.
The topic of my talk can be summarised as

Question: What is the type \Vert A \Vert \to B ?

This question is very easy if B is propositional, because then we have

Partial answer (1): If B is propositional, then (\Vert A \Vert \to B) \simeq (A \to B)

by the usual universal property, with the equivalence given by the canonical map. Without an assumption on B, it is much harder.
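To fix notation, the maps involved in the propositional case can be spelled out as follows (a sketch; |-| : A \to \Vert A \Vert is the point constructor, and \mathsf{rec} the recursion principle of the truncation, which eliminates into propositions):

\mathsf{canon} : (\Vert A \Vert \to B) \to (A \to B), \qquad \mathsf{canon}(g) :\equiv g \circ |-|
\mathsf{canon}^{-1} : (A \to B) \to (\Vert A \Vert \to B), \qquad \mathsf{canon}^{-1}(f) :\equiv \mathsf{rec}(B, f)

Both composites are equal to the respective identity maps, using the computation rule of \mathsf{rec} and the fact that B is propositional.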

The rest of this post treats a special case which already exhibits some of the important ideas. It is a bit lengthy, and in case you don’t have time to read everything, here is the

Main result (general universal property of the propositional truncation).
In a dependent type theory with at least \mathbf{1}, \Sigma-, \Pi-, and identity types, which furthermore has Reedy \omega^{\mathsf{op}}-limits (“infinite \Sigma-types”), we can define the type of coherently constant functions A \xrightarrow {\omega} B as the type of natural transformations between type-valued presheaves. If the type theory has propositional truncations, we can construct a canonical map from (\Vert A \Vert \to B) to (A \xrightarrow {\omega} B). If the type theory further has function extensionality, then this canonical map is an equivalence.
If B is known to be n-truncated for some fixed n, we can drop the assumption of Reedy \omega^{\mathsf{op}}-limits and perform the whole construction in “standard syntactical” HoTT. This describes how functions \Vert A \Vert \to B can be defined if B is not known to be propositional, and it streamlines the usual approach of finding a propositional Q with A \to Q and Q \to B.


Here comes the long version of this blog post. So, what is a function g : \Vert A \Vert \to B? If we think of elements of \Vert A \Vert as anonymous inhabitants of A, we could expect that such a g is “the same” as a function f : A \to B which “cannot look at its input”. But then, how can we specify internally what it means to “not look at its input”? A first attempt could be requiring that f is weakly constant, \mathsf{wconst}_f :\equiv \Pi_{x,y:A} f(x) = f(y). Indeed, it has been shown:

Partial answer (2): If B is a set (h-set, 0-truncated), then (\Vert A \Vert \to B) \simeq (\Sigma (f : A \to B). \mathsf{wconst}_f), where the function from left to right is the canonical one.

It is not surprising that we still need this strong condition on B if we want weak constancy to be a sufficient answer: just throwing in a bunch of paths which might or might not “fit together” (we simply do not know) seems wrong. Indeed, from Mike’s recent construction, we know that the statement does become false (it contradicts univalence) without this requirement.

Given a function f : A \to B and a proof c : \mathsf{wconst}_f, we can try to fix the problem that the paths given by c “do not fit together” by throwing in a coherence proof, i.e. an element of \mathsf{coh}_{f,c} :\equiv \Pi_{x,y,z:A} c(x,y) \cdot c(y,z) = c(x,z). We should already know that this will introduce its own problems and thus not fix everything, but at least, we get:

Partial answer (3): If B is 1-truncated, then (\Vert A \Vert \to B) \simeq (\Sigma (f : A \to B). \Sigma (c:\mathsf{wconst}_f). \mathsf{coh}_{f,c}), again given by a canonical map from left to right.

In my talk, I gave a proof of “partial answer (3)” via “reasoning with equivalences” (slides p. 5 ff). I start by assuming a point a_0 : A. The type B is then equivalent to the following (explanation below):

\Sigma (f_1 : B).
\Sigma (f : A \to B). \Sigma (c_1 : \Pi_{a:A} f(a) = f_1).
\Sigma (c : \mathsf{wconst}_f). \Sigma(d_1 : \Pi_{a^1, a^2:A} c(a^1,a^2) \cdot c_1(a^2) = c_1(a^1)).
\Sigma (c_2 : f(a_0) = f_1). \Sigma (d_3 : c(a_0,a_0) \cdot c_1(a_0) = c_2).
\Sigma (d : \mathsf{coh}_{f,c}).
\Sigma (d_2 : \Pi_{a:A} c(a_0,a) \cdot c_1(a) = c_2).
\mathbf{1}

In the above long nested \Sigma-type, the first line is just the B that we start with. The second line adds two factors/\Sigma-components which, by function extensionality, cancel each other out (they form a singleton). The same is true for the pair in the third line, and for the pair in the fourth line. Lines five and six look different, but they are not really; it’s just that their “partners” (which would complete them to singletons) are hard to write down and, at the same time, contractible; thus, they are omitted. As B is assumed to be a 1-type, it is easy to see that the \Sigma-components in lines five and six are both propositional, and it’s also easy to see that the other \Sigma-components imply that they are both inhabited. Of course, the seventh line does nothing.
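To spell out the mechanism once: given f_1, the pair added in the second line forms a singleton because, by function extensionality,

\Sigma (f : A \to B). \Pi_{a:A} \, f(a) = f_1 \;\simeq\; \Sigma (f : A \to B). \, f = (\lambda a. f_1),

and types of the form \Sigma (x : X). \, x = x_0 are contractible.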

Simply by re-ordering the \Sigma-components in the above type, and not doing anything else, we get:

\Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \Sigma(d : \mathsf{coh}_{f,c}).
\Sigma (f_1 : B). \Sigma(c_2 : f(a_0) = f_1).
\Sigma (c_1 : \Pi_{a:A} f(a) = f_1). \Sigma(d_2 : \Pi_{a:A} c(a_0,a) \cdot c_1(a) = c_2).
\Sigma (d_1 : \Pi_{a^1, a^2:A} c(a^1,a^2) \cdot c_1(a^2) = c_1(a^1)).
\Sigma (d_3 : c(a_0,a_0) \cdot c_1(a_0) = c_2).
\mathbf{1}

By the same argument as before, the components in the pair in line two cancel each other out (i.e. the pair is contractible). The same is true for line three. Lines four and five are propositional as B is 1-truncated, but easily seen to be inhabited. Thus, the whole nested \Sigma-type is equivalent to \Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \mathsf{coh}_{f,c}.
In summary, we have constructed an equivalence B \simeq \Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \mathsf{coh}_{f,c}. By going through the construction step by step, we see that the function part of the equivalence (the map from left to right) is the canonical one (let’s write \mathsf{canon}_1), mapping b : B to the triple (\lambda \_. b, \lambda \_ \_ . \mathsf{refl}_b , \lambda \_ \_ \_ . \mathsf{refl}_{\mathsf{refl}_b}). Before starting the construction, we assumed a point a_0 : A. But, and this is the crucial observation, the function \mathsf{canon}_1 does not depend on a_0! So, if the assumption of a point of A implies that \mathsf{canon}_1 is an equivalence, then \Vert A \Vert is already enough, because being an equivalence is a mere proposition. Thus, we have \Vert A \Vert \to \big(B \simeq \Sigma (f : A \to B). \Sigma(c : \mathsf{wconst}_f). \mathsf{coh}_{f,c}\big). We can move the \Vert A \Vert \to-part to both sides of the equivalence, and on the right-hand side, we can apply the usual “distributivity of \Sigma and \Pi (or \to)” to move the \Vert A \Vert \to into each \Sigma-component; but in each \Sigma-component, the \Vert A \Vert gets eaten by an A (we have that \Vert A \Vert \to \Pi_{a : A}\ldots and \Pi_{a:A} \ldots are equivalent), which gives us the claimed result.
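The last of these steps is worth making explicit: for any family P over A, we have

\big(\Vert A \Vert \to \Pi_{a:A} P(a)\big) \simeq \Pi_{a:A} \big(\Vert A \Vert \to P(a)\big) \simeq \Pi_{a:A} P(a),

where the first equivalence simply swaps the two arguments, and the second holds because any a : A gives us |a| : \Vert A \Vert, and \Vert A \Vert is propositional.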

The strategy we have used is very minimalistic. It does not need any “technology”: we just add and remove contractible pairs. For this “expanding and contracting” strategy, we have really only used very basic components of type theory (\Sigma, \Pi, identity types with function extensionality, and propositional truncation, but even that only at the very end). If we want to weaken the requirement on B by one more level (i.e. if we want to derive a “universal property” which characterises \Vert A \Vert \to B if B is 2-truncated), we have to add one more coherence condition (which ensures that the \mathsf{coh}_{f,c}-component is well-behaved). The core idea is that this expanding and contracting strategy can be carried out for any truncation level of B, and even if B is not known to be n-truncated at all, in which case the tower of conditions becomes infinite. I call an element of this “infinite \Sigma-type” a coherently constant function from A to B.

The idea is that the \Sigma-components (f : A \to B) and (c : \mathsf{wconst}_f) and (d : \mathsf{coh}_{f,c}) we used in the special case can be seen as the components of a natural transformation between [2]-truncated semi-simplicial types. By a semi-simplicial type, I mean a Reedy fibrant diagram \Delta_+^{\mathsf{op}} \to \mathcal{C}. By [2]-truncated semi-simplicial types, I mean the “initial segments” of semi-simplicial types, i.e. the first three components X_0, X_1, X_2, as described here.
The first diagram we need is what I call the “trivial semi-simplicial type” over A, written T \! A, and its initial part is given by T \! A_{[0]} :\equiv A, T \! A_{[1]} :\equiv A \times A, and T \! A_{[2]} :\equiv A \times A \times A. If we use the “fibred” notation (i.e. give the fibres over the matching objects), this would be T \! A_{[0]} :\equiv A, T \! A_{[1]}(a_1,a_2) :\equiv \mathbf{1}, T \! A_{[2]}(a_1,a_2,a_3,\_,\_,\_) :\equiv \mathbf{1}. In the terminology of simplicial sets, this is the [0]-coskeleton of [the diagram that is constantly] A.
The second important diagram, which I call the equality semi-simplicial type over B, is written E \! B. One potential definition of the lowest levels would be (I only give the fibres this time): E \! B_{[0]} :\equiv B, E \! B_{[1]}(b_1,b_2) :\equiv b_1 = b_2, E \! B_{[2]}(b_1,b_2,b_3,p_{12},p_{23},p_{13}) :\equiv p_{12} \cdot p_{23} = p_{13}. This is a fibrant replacement of B. (We are lucky because T \! A is already fibrant; otherwise, we would have had to construct a fibrant replacement of it as well.)
If we now check what a (strict) natural transformation between T \! A and E \! B (viewed as diagrams over the full subcategory of \Delta_+^{\mathsf{op}} with objects [0],[1],[2]) is, it is easy to see that the [0]-component is exactly a map f : A \to B, the [1]-component is exactly a proof c : \mathsf{const}_f, and the [2]-component is just a proof d : \mathsf{coh}_{f,c}. (The type of such natural transformations is given by the limit of the exponential of T \! A and E \! B.)
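In the fibred presentation, such a natural transformation \nu thus consists (up to equivalence) of the components

\nu_{[0]} : A \to B,
\nu_{[1]} : \Pi_{a_1,a_2:A} \; \nu_{[0]}(a_1) = \nu_{[0]}(a_2),
\nu_{[2]} : \Pi_{a_1,a_2,a_3:A} \; \nu_{[1]}(a_1,a_2) \cdot \nu_{[1]}(a_2,a_3) = \nu_{[1]}(a_1,a_3),

which match f, c, and d exactly.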

However, even with this idea, and with the above proof for the case that B is 1-truncated, generalising the proof to the infinite case with infinitely many coherence conditions requires some ideas and a couple of technical steps which, for me, have been quite difficult. Therefore, it has taken me a very long time to write this up cleanly in an article. I am very grateful to the reviewers (this work is going to appear in the post-proceedings of TYPES’14) and to Steve (my thesis examiner) for many suggestions and interesting connections they have pointed out. In particular, one reviewer remarked that the main result (the equivalence of coherently constant functions A \xrightarrow{\omega} B and maps out of the truncation \Vert A \Vert \to B) is a type-theoretic version of Proposition 6.2.3.4 in Lurie’s Higher Topos Theory; and Vladimir pointed out the connection to the work of his former student Alexander Vishik. Unfortunately, both are among the many things that I have yet to understand, but there is certainly a lot to explore. If you can see further connections which I have not mentioned here or in the paper, then this is very likely because I am not aware of them, and I’d be happy to hear about them!


Modules for Modalities

As defined in chapter 7 of the book, a modality is an operation on types that behaves somewhat like the n-truncation. Specifically, it consists of a collection of types, called the modal ones, together with a way to turn any type A into a modal one \bigcirc A, with a universal property making the modal types a reflective subcategory (or more precisely, sub-(∞,1)-category) of the category of all types. Moreover, the modal types are assumed to be closed under Σs (closure under some other type formers like Π is automatic).
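For concreteness, the book (section 7.7) packages this data roughly as follows: a modality is an operation \bigcirc : \mathcal{U} \to \mathcal{U} together with

  • maps \eta_A : A \to \bigcirc A for every type A;
  • for every A : \mathcal{U} and B : \bigcirc A \to \mathcal{U}, an induction principle \mathsf{ind} : \big(\Pi_{a:A} \bigcirc B(\eta_A(a))\big) \to \Pi_{z:\bigcirc A} \bigcirc B(z);
  • paths \mathsf{ind}(f)(\eta_A(a)) = f(a); and
  • for all z, z' : \bigcirc A, the requirement that \eta_{z=z'} : (z = z') \to \bigcirc(z = z') is an equivalence.

From these data, the reflective universal property described above can be derived.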

We called them “modalities” because under propositions-as-(some)-types, they look like the classical notion of a modal operator in logic: a unary operation on propositions. Since these are “monadic” modalities — in particular, we have A \to \bigcirc A rather than the other way around — they are most closely analogous to the “possibility” modality of classical modal logic. But since they act on all types, not just mere-propositions, for emphasis we might call them higher modalities.

The example of n-truncation shows that there are interesting new modalities in a homotopy world; some other examples are mentioned in the exercises of chapter 7. Moreover, most of the basic theory of n-truncation — essentially, any aspect of it that involves only one value of n — is actually true for any modality.

Over the last year, I’ve implemented this basic theory of modalities in the HoTT Coq library. In the process, I found one minor error in the book, and learned a lot about modalities and about Coq. For instance, my post about universal properties grew out of looking for the best way to define modalities.

In this post, I want to talk about something else I learned about while formalizing modalities: Coq’s modules, and in particular their universe-polymorphic nature. (This post is somewhat overdue in that everything I’ll be talking about has been implemented for several months; I just haven’t had time to blog about it until now.)


Not every weakly constant function is conditionally constant

As discussed at length on the mailing list some time ago, there are several different things that one might mean by saying that a function f:A\to B is “constant”. Here is my preferred terminology:

  • f is constant if we have b:B such that f(a)=b for all a:A.
    This is equivalent to saying that f factors through \mathbf{1}.
  • f is conditionally constant if it factors through \Vert A \Vert.
  • f is weakly constant if for all a_1,a_2:A we have f(a_1)=f(a_2).
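
Internally, these three notions can be written down as follows (a minimal Lean-style sketch in the spirit of the other posts on this page; tr : A → ∥ A ∥ denotes the point constructor of the truncation, and the identifier names are mine):

definition constant {A B : Type} (f : A → B) : Type :=
  Σ (b : B), Π (a : A), f a = b   -- f factors through 𝟏, with value b

definition conditionally_constant {A B : Type} (f : A → B) : Type :=
  Σ (g : ∥ A ∥ → B), Π (a : A), f a = g (tr a)   -- f factors through ∥A∥

definition weakly_constant {A B : Type} (f : A → B) : Type :=
  Π (a₁ a₂ : A), f a₁ = f a₂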

In particular, the identity function of \emptyset is conditionally constant, but not constant. I don’t have a problem with that; getting definitions right often means that they behave slightly oddly on the empty set (until we get used to it). The term “weakly constant” was introduced by Kraus, Escardo, Coquand, and Altenkirch, although they immediately dropped the adjective for the rest of their paper, which I will not do. The term “conditionally constant” is intended to indicate that f is, or would be, constant, as soon as its domain is inhabited.

It’s obvious that every constant function is conditionally constant, and \mathrm{id}_{\emptyset} shows the converse fails. Similarly, it’s easy to show that conditionally constant implies weakly constant. KECA showed that the converse holds if either B is a set or \Vert A \Vert \to A, and conjectured that it fails in general. In this post I will describe a counterexample proving this conjecture.


Double Groupoids and Crossed Modules in HoTT

I spent the past eight months at CMU working on my master’s thesis project. I ended up formalizing some algebraic structures used in Ronald Brown’s book “Non-Abelian Algebraic Topology”: double groupoids with thin structure and crossed modules over groupoids.

As the language for the formalizations, I chose the newly developed theorem prover Lean. Lean is developed by Leonardo de Moura at Microsoft Research and provides a dependently typed language similar to Agda or Coq. Currently, there are two modes for Lean: a proof-irrelevant mode and a HoTT mode. The HoTT library was written at the same time as my project, mainly by Floris van Doorn and, for a smaller part, by me. You can find papers about Lean and its elaboration algorithm on its website, as well as a link to a version of Lean that runs in your web browser.

Double groupoids are a special case of double categories which, besides objects and morphisms, specify a set of two-cells for each (not necessarily commuting) square diagram, in such a way that the two-cells form a category with respect to both vertical and horizontal composition. This includes the existence of vertically and horizontally degenerate fillers of square diagrams which have the identity on two opposing sides and the same morphism on the other two sides. The vertically degenerate squares should distribute over the horizontal composition of morphisms, and vice versa. Furthermore, the vertically and horizontally degenerate squares along the identity should coincide. A last axiom is the interchange law, which ensures that, for two-cells which are composable in a 2×2-grid, it does not matter whether one first composes vertically or horizontally.

I decided to formalize double categories in the same way categories are usually formalized with dependent types: using a type of two-cells depending on four points and four morphisms:

D_2 : \prod_{a, b, c, d : D_0} \text{hom}(a,b) \to \text{hom}(c,d) \to \text{hom}(a,c) \to \text{hom}(b,d) \to \mathcal{U}.

In the formalization, I first define a worm precategory, which only allows two-cell composition in one direction. Then, I use Lean’s inheritance mechanism to get the full notion of a double (pre)category:


structure worm_precat {D₀ : Type} (C  : precategory D₀)
  (D₂ : Π ⦃a b c d : D₀⦄
    (f : hom a b) (g : hom c d) (h : hom a c) (i : hom b d), Type) :=
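  -- comp₁ composes two squares along a shared edge (g₁): composition in the
  -- first direction. ID₁ is the corresponding degenerate square, and assoc₁
  -- states associativity up to transport (▹) along assoc in the base C.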
  (comp₁ : proof Π ⦃a b c₁ d₁ c₂ d₂ : D₀⦄
    ⦃f₁ : hom a b⦄ ⦃g₁ : hom c₁ d₁⦄ ⦃h₁ : hom a c₁⦄ ⦃i₁ : hom b d₁⦄
    ⦃g₂ : hom c₂ d₂⦄ ⦃h₂ : hom c₁ c₂⦄ ⦃i₂ : hom d₁ d₂⦄,
    (D₂ g₁ g₂ h₂ i₂) → (D₂ f₁ g₁ h₁ i₁)
    → (@D₂ a b c₂ d₂ f₁ g₂ (h₂ ∘ h₁) (i₂ ∘ i₁)) qed)
  (ID₁ : proof Π ⦃a b : D₀⦄ (f : hom a b), D₂ f f (ID a) (ID b) qed)
  (assoc₁ : proof Π ⦃a b c₁ d₁ c₂ d₂ c₃ d₃ : D₀⦄
    ⦃f  : hom a b⦄   ⦃g₁ : hom c₁ d₁⦄ ⦃h₁ : hom a c₁⦄ ⦃i₁ : hom b d₁⦄
    ⦃g₂ : hom c₂ d₂⦄ ⦃h₂ : hom c₁ c₂⦄ ⦃i₂ : hom d₁ d₂⦄
    ⦃g₃ : hom c₃ d₃⦄ ⦃h₃ : hom c₂ c₃⦄ ⦃i₃ : hom d₂ d₃⦄
    (w : D₂ g₂ g₃ h₃ i₃) (v : D₂ g₁ g₂ h₂ i₂) (u : D₂ f g₁ h₁ i₁),
    (assoc i₃ i₂ i₁) ▹ ((assoc h₃ h₂ h₁) ▹
        (comp₁ w (comp₁ v u))) = (comp₁ (comp₁ w v) u) qed)
  ...

structure dbl_precat {D₀ : Type} (C : precategory D₀)
  (D₂ : Π ⦃a b c d : D₀⦄
    (f : hom a b) (g : hom c d) (h : hom a c) (i : hom b d), Type)
  extends worm_precat C D₂,
    worm_precat C (λ ⦃a b c d : D₀⦄ f g h i, D₂ h i f g)
  renaming comp₁→comp₂ ID₁→ID₂ assoc₁→assoc₂
    id_left₁→id_left₂ id_right₁→id_right₂ homH'→homH'_dontuse :=
  ...
  (interchange : proof Π {a₀₀ a₀₁ a₀₂ a₁₀ a₁₁ a₁₂ a₂₀ a₂₁ a₂₂ : D₀}
    {f₀₀ : hom a₀₀ a₀₁} {f₀₁ : hom a₀₁ a₀₂} {f₁₀ : hom a₁₀ a₁₁}
    {f₁₁ : hom a₁₁ a₁₂} {f₂₀ : hom a₂₀ a₂₁} {f₂₁ : hom a₂₁ a₂₂}
    {g₀₀ : hom a₀₀ a₁₀} {g₀₁ : hom a₀₁ a₁₁} {g₀₂ : hom a₀₂ a₁₂}
    {g₁₀ : hom a₁₀ a₂₀} {g₁₁ : hom a₁₁ a₂₁} {g₁₂ : hom a₁₂ a₂₂}
    (x : D₂ f₁₁ f₂₁ g₁₁ g₁₂) (w : D₂ f₁₀ f₂₀ g₁₀ g₁₁)
    (v : D₂ f₀₁ f₁₁ g₀₁ g₀₂) (u : D₂ f₀₀ f₁₀ g₀₀ g₀₁),
    comp₁ (comp₂ x w) (comp₂ v u) = comp₂ (comp₁ x v) (comp₁ w u) qed)

Double groupoids are double categories where all three categories involved — the underlying category of points and morphisms, the vertical category of two-cells, and the horizontal category of two-cells — are groupoids, and which are equipped with a thin structure: a selection of a unique thin filler for each commutative square shell, closed under composition and containing the degenerate squares.

It turns out that the category of double groupoids is equivalent to the category of crossed modules over groupoids. A crossed module is defined as a groupoid P together with a family (M_p)_{p \in P} of groups on the objects of P (or, equivalently, a totally disconnected groupoid), a group homomorphism \mu_p : M_p \to \text{hom}(p,p) for each p \in P, and a groupoid action \phi of P on the (M_p)_{p \in P}. The family of homomorphisms and the action should interact in a way resembling conjugation, via \mu_q(\phi(f,x)) = f \circ \mu_p(x) \circ f^{-1} and \phi(\mu_p(c),x) = c \cdot x \cdot c^{-1} for each p, q \in P, f \in \text{hom}_P(p,q), and c, x \in M_p.


structure xmod {P₀ : Type} [P : groupoid P₀] (M : P₀ → Group) :=
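  -- μ sends each x : M p to a loop at p, and φ is the action of the morphisms
  -- of P on the family M; CM1 and CM2 are the two crossed module axioms.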
  (P₀_hset : is_hset P₀)
  (μ : Π ⦃p : P₀⦄, M p → hom p p)
  (μ_respect_comp : Π ⦃p : P₀⦄ (b a : M p), μ (b * a) = μ b ∘ μ a)
  (μ_respect_id : Π (p : P₀), μ 1 = ID p)
  (φ : Π ⦃p q : P₀⦄, hom p q → M p → M q)
  (φ_respect_id : Π ⦃p : P₀⦄ (x : M p), φ (ID p) x = x)
  (φ_respect_P_comp : Π ⦃p q r : P₀⦄ (b : hom q r) (a : hom p q) (x : M p),
    φ (b ∘ a) x = φ b (φ a x))
  (φ_respect_M_comp : Π ⦃p q : P₀⦄ (a : hom p q) (y x : M p),
    φ a (y * x) = (φ a y) * (φ a x))
  (CM1 : Π ⦃p q : P₀⦄ (a : hom p q) (x : M p), μ (φ a x) = a ∘ (μ x) ∘ a⁻¹)
  (CM2 : Π ⦃p : P₀⦄ (c x : M p), φ (μ c) x = c * (x * c⁻¹ᵍ))

After formalizing both categories, DGpd and Xmod, including the definitions of their respective morphisms, I defined the functors \gamma and \lambda which establish their equivalence.


  definition gamma.functor :
    functor Cat_dbl_gpd.{l₁ l₂ l₃} Cat_xmod.{(max l₁ l₂) l₂ l₃} :=
  begin
    fapply functor.mk,
      intro G, apply (gamma.on_objects G),
      intros [G, H, F], apply (gamma.on_morphisms F),
      intro G, cases G,
        fapply xmod_morphism_congr, apply idp, apply idp,
        repeat ( apply eq_of_homotopy ; intros), cases x_1, apply idp,
    ...
  end

Since establishing the functors was quite tedious, I didn’t finish proving the equivalence, but I might do that in the future.

As a little application to 2-types, I instantiated the fundamental double groupoid of a 2-type X which is presented by a set C and a 1-type A via arbitrary functions \iota : C \to A and \iota' : A \to X. Here, the type of objects is the set C, the set of morphisms between a, b : C is the identity type \iota(a) =_A \iota(b), and the set of two-cells associated to four points a, b, c, d : C and paths f, g, h, i on the top, bottom, left, and right of a diagram is the set \text{ap}_{\iota'}(h) \cdot \text{ap}_{\iota'}(g) = \text{ap}_{\iota'}(f) \cdot \text{ap}_{\iota'}(i). This could serve as the starting point for formalizing a 2-dimensional Seifert-van Kampen theorem using double groupoids or crossed modules.
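
In Lean-style notation, this type of two-cells reads roughly as follows (a hypothetical sketch, not the literal code from the thesis; ⬝ denotes path concatenation):

definition fund_two_cell {C A X : Type} (ι : C → A) (ι' : A → X)
  ⦃a b c d : C⦄ (f : ι a = ι b) (g : ι c = ι d)
  (h : ι a = ι c) (i : ι b = ι d) : Type :=
  ap ι' h ⬝ ap ι' g = ap ι' f ⬝ ap ι' i   -- left-bottom equals top-right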

You can find my thesis here. The Lean code can be found on GitHub; due to recent changes in Lean, it most probably won’t compile with the newest version of Lean, but it does with an older snapshot that can be found here.


The torus is the product of two circles, cubically

Back in the summer of 2012, emboldened by how nicely the calculation of π₁(S¹) had gone, I asked a summer research intern, Joseph Lee, to work on formalizing a proof that the higher-inductive definition of the torus (see Section 6.6 of the HoTT book) is equivalent to a product of two circles. This seemed to me like it should be a simple exercise: define the functions back and forth by recursion, and then prove that they’re mutually inverse by induction. Joseph and I managed to define the two functions, but after spending a while staring at pages-long path algebra goals in both Agda and Coq, we eventually threw in the towel on proving that they were mutually inverse. By the end of the IAS year, both Kristina Sojakova and Peter Lumsdaine had given proof sketches, and Kristina’s later appeared as a 25-page proof in the exercise solutions of the book. But it still bothered me that we didn’t have a simple proof of what seemed like it should be a simple theorem…

Since the Bezem, Coquand, and Huber cubical sets model of HoTT was developed, Guillaume Brunerie and I have been thinking about cubical type theories based on these ideas (more on that in some other post).  As part of this, we have also played with using cubical ideas in “book HoTT” (MLTT with axioms for univalence and HITs) in Agda.  The main ingredient is to make use of more general “cube types” than just the identity type (the special case of lines).  Each cube type is dependent on the boundary of a cube, and represents the “inside” of the boundary.  For example, we have a type of squares, dependent on four points and four paths that make up the boundary of a square.  And a type of cubes, dependent on the eight points, twelve lines, and six squares that make up the boundary of a cube (implicit arguments are crucial here: everything but the squares can be implicit).  Another ingredient is to use path-over-a-path and higher cube-over-a-cube types to represent paths in fibrations.
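For instance, in Lean-style notation (to match the other code on this page; our own development is in Agda), the type of squares can be sketched as an inductive family over its boundary, generated by the constant square:

inductive square {A : Type} {a₀₀ : A} :
  -- boundary: top, bottom, left, right
  Π {a₀₁ a₁₀ a₁₁ : A}, a₀₀ = a₀₁ → a₁₀ = a₁₁ → a₀₀ = a₁₀ → a₀₁ = a₁₁ → Type :=
ids : square idp idp idp idp   -- idp is the identity path

Here the four path arguments are the top, bottom, left, and right sides of the square, and everything but the paths can stay implicit.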

Now, these cube types are not really “new”, in the sense that they can be defined in terms of higher identity types. For example, a square can be represented by a disc between composites, and a path-over can be reduced to homogeneous paths using transport. However,  isolating these cubes and cube-overs as abstractions, and developing some facts about them, has allowed for the formalization of some new examples.  Guillaume used this approach to prove the 3×3 lemma about pushouts that is used in the calculation of the Hopf fibration.  I used it to resolve a question about the contractibility of a patch theory. Evan Cavallo used it in the proof of the Mayer-Vietoris theorem.
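Concretely, the reductions just mentioned can be chosen as follows (a sketch, one possible choice among several):

\mathsf{Square}(t, b, l, r) :\equiv (l \cdot b = t \cdot r) \qquad \mathsf{PathOver}_P(p, u, v) :\equiv (\mathsf{transport}^P(p, u) = v)

where t, b, l, r are the top, bottom, left, and right sides of the square, p : x = y is a path in the base, and u : P(x), v : P(y).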

And, we finally have a simple write-the-maps-back-and-forth-and-then-induct-and-beta-reduce proof that the torus is a product of two circles.

You can read about it here:

A Cubical Approach to Synthetic Homotopy Theory
Daniel R. Licata and Guillaume Brunerie

Homotopy theory can be developed synthetically in homotopy type theory, using types to describe spaces, the identity type to describe paths in a space, and iterated identity types to describe higher-dimensional paths. While some aspects of homotopy theory have been developed synthetically and formalized in proof assistants, some seemingly straightforward examples have proved difficult because the required manipulations of paths become complicated. In this paper, we describe a cubical approach to developing homotopy theory within type theory. The identity type is complemented with higher-dimensional cube types, such as a type of squares, dependent on four points and four lines, and a type of three-dimensional cubes, dependent on the boundary of a cube. Path-over-a-path types and higher generalizations are used to describe cubes in a fibration over a cube in the base. These higher-dimensional cube and path-over types can be defined from the usual identity type, but isolating them as independent conceptual abstractions has allowed for the formalization of some previously difficult examples.


HoTT is not an interpretation of MLTT into abstract homotopy theory

Almost at the top of the HoTT website are the words:

“Homotopy Type Theory refers to a new interpretation of Martin-Löf’s system of intensional, constructive type theory into abstract homotopy theory.”

I think it is time to change these words into something that, at least, is not wrong.

Is there an interpretation of the MLTT into abstract homotopy theory? This would be the case if the 2007/2009 paper by Awodey and Warren had defined, as is often claimed today, an interpretation of MLTT into any Quillen model category.

However, that paper did not provide a construction of any interpretation of the MLTT. It outlined several ideas connecting the factorization axioms of Quillen model categories with the introduction and elimination rules for the identity types in the intensional Martin-Löf Type Theory. These ideas did not quite work out, because the operations that one can define on a general Quillen model category do not satisfy the relations that are present in the MLTT.

Using an analogy from representation theory, they noticed that there is a similarity between the generators of a certain “group” (MLTT) and operations in some categories. This would define representations of the group in such categories, but it turned out that one of the most important relations in the group did not hold in the categories. The paper claimed (without a proof) that there is a new “group”, which they called “a form of MLTT”, that does not have this relation in its definition, and stated the main result by saying that “there is an interpretation of a form of MLTT in any Quillen model category”.

The truth concerning the interpretations of MLTT into homotopy theory is different.

1. No interpretation of the MLTT into abstract homotopy theory (general Quillen model category) is known. Moreover, it is unreasonable to expect such an interpretation to exist if only because not every Quillen model category is locally cartesian closed. For example, the category of complexes of abelian groups is a Quillen model category that is not even cartesian closed.

2. The interpretation of the rules for identity types on model categories from a class that contains such important examples as topological spaces and simplicial sets was constructed in a 2008/2012 paper by Benno van den Berg and Richard Garner.

3. An interpretation of the rules for the dependent products, dependent sums, identity types and universes on the category of simplicial sets was constructed by Vladimir Voevodsky in 2009. The outline of the construction was made publicly available in early 2010 and then written up in a 2012 paper by Chris Kapulkin, Peter LeFanu Lumsdaine and Vladimir Voevodsky.

There is a substantial difficulty in adding the rules for universes to the rules for the dependent products, dependent sums and identity types. These three groups of rules are independent of each other and can be studied separately. The rules for a universe connect the rules from these three groups to each other, making it necessary to coordinate their interpretations.

4. An interpretation of the same collection of rules on the categories of special diagrams of simplicial sets was constructed in 2012/13 by Michael Shulman. This was an important advance, since it proved that an interpretation of these rules satisfying the univalence axiom need not satisfy the excluded middle property.

The results mentioned above all provide interpretations of the various *rules* of the MLTT, not of the MLTT itself.

MLTT is a syntactic object. Its definition starts with a specification of the “raw” syntax. Then one considers four sets consisting of all the sentences of four standard shapes that can be written in this syntax. Then one takes the smallest quadruple of subsets of these sets that is closed under certain operations (the “rules”). Then one performs an extra step, taking quotients of two of the resulting sets by equivalence relations determined by the other two sets.
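(Concretely, the four shapes are the four judgement forms of MLTT: \Gamma \vdash A \; \mathsf{type} and \Gamma \vdash a : A, whose sets survive the quotient, and the definitional-equality forms \Gamma \vdash A \equiv B and \Gamma \vdash a \equiv b : A, which determine the equivalence relations.)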

At this point one is left with two sets and a number of operations on these two sets. From this structure one constructs, using yet another procedure, a category. This category is called the syntactic category of the MLTT.

An interpretation of the MLTT is a functor from the syntactic category to another category.

There is a way to define an interpretation of the rules of the MLTT on any category with some additional structure. It is a long-standing conjecture that the syntactic category of the MLTT is the initial object among categories equipped with an interpretation of the rules of the MLTT.

This conjecture is far from being proved.

In a remarkable 1991 book, Thomas Streicher proved an analog of this conjecture for a much simpler type theory called the Calculus of Constructions (not to be confused with the Calculus of Inductive Constructions!). At the moment, it remains the only substantially non-trivial analog of this conjecture known.

Until this conjecture is proved, all of the previous papers can only claim an interpretation of the rules of the MLTT, not an interpretation of the MLTT itself. Proving this conjecture, in a way that will also enable us to prove its analogs for yet more complex type theories such as the Calculus of Inductive Constructions and its extensions with new classes of inductive types, is, from my point of view, the most important goal that needs to be achieved in the development of UF and HoTT.
