We are pleased to announce the AMS Special Session on Homotopy Type Theory, to be held on January 11, 2018 in San Diego, California, as part of the Joint Mathematics Meetings (to be held January 10 – 13).

Homotopy Type Theory (HoTT) is a new field of study that relates constructive type theory to abstract homotopy theory. Types are regarded as synthetic spaces of arbitrary dimension and type equality as homotopy equivalence. Experience has shown that HoTT is able to represent many mathematical objects of independent interest in a direct and natural way. Its foundations in constructive type theory permit the statement and proof of theorems about these objects within HoTT itself, enabling formalization in proof assistants and providing a constructive foundation for other branches of mathematics.

This Special Session is affiliated with the AMS Mathematics Research Communities (MRC) workshop for early-career researchers in Homotopy Type Theory organized by Dan Christensen, Chris Kapulkin, Dan Licata, Emily Riehl and Mike Shulman, which took place last June.

The Special Session will include talks by MRC participants, as well as by senior researchers in the field, on various aspects of higher-dimensional type theory including categorical semantics, computation, and the formalization of mathematical theories. There will also be a panel discussion featuring distinguished experts from the field.

Further information about the Special Session, including a schedule and abstracts, can be found at: http://jointmathematicsmeetings.org/meetings/national/jmm2018/2197_program_ss14.html.

Please note that the early registration deadline is December 20, 2017. If you have any questions about the Special Session, please feel free to contact one of the organizers. We look forward to seeing you in San Diego.

Simon Cho (University of Michigan)

Liron Cohen (Cornell University)

Ed Morehouse (Wesleyan University)

Known impredicative encodings of various inductive types in System F, such as the type

∀X. (X → X) → X → X

of natural numbers, do not satisfy the relevant η-computation rules. The aim of this work is to refine the System F encodings by moving to a system of HoTT with an impredicative universe, so that the relevant η-rules are satisfied (along with all the other rules). As a result, the so-determined types have their expected universal properties. The main result is the construction of a type of natural numbers which is the initial algebra for the expected endofunctor X ↦ X + 1.

For the inductive types treated in the thesis, we do not use the full power of HoTT; we need only postulate Σ-types, identity types, “large” Π-types over an impredicative universe, and function extensionality. Having large Π-types over an impredicative universe U means that given a type X and a type family Y : X → U, we may form the dependent function type

∏ (x : X) Y(x).

Note that this type is in U even if X is not.

We obtain a translation of System F types into type theory by replacing second-order quantification by dependent products over U (or alternatively over the subtype of U given by some h-level).
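As a quick sketch of this translation on the example above (the bracket notation ⟦−⟧ for the translation is ours):

```latex
\[
  \llbracket \forall X.\, T \rrbracket \;:\equiv\; \prod_{X : \mathcal{U}} \llbracket T \rrbracket,
  \qquad\text{so e.g.}\qquad
  \llbracket \forall X.\, (X \to X) \to X \to X \rrbracket
  \;=\; \prod_{X : \mathcal{U}} (X \to X) \to X \to X .
\]
```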

For brevity, we will focus on the construction of the natural numbers (though in the thesis, the coproduct of sets and the unit type are first treated with special cases of this method). We consider categories of algebras for endofunctors

F : Set → Set,

where the type of objects of Set is given by

Set :≡ ∑ (X : U) isSet(X)

(the type of sets in U) and morphisms are simply functions between sets.

We can write down the type of F-algebras:

FAlg :≡ ∑ (X : Set) (F X → X),

and homomorphisms between algebras (X, x) and (Y, y):

Hom((X, x), (Y, y)) :≡ ∑ (f : X → Y) (f ∘ x = y ∘ F f),

which together form the category F-Alg.

We seek the initial object in F-Alg. Denote this by 0, and moreover let V : F-Alg → Set be the forgetful functor and y the covariant Yoneda embedding. We reason as follows:

V0 ≅ Nat(y0, V) ≅ Nat(Δ1, V) ≅ Set(1, lim V) ≅ lim V,

using the Yoneda lemma in the first step, initiality of 0 (which gives y0 ≅ Δ1) in the second, and the fact that the diagonal functor Δ is left adjoint to the limit functor for the last step. With this, we have a proposal for the definition of the underlying set of the initial F-algebra as the limit of the forgetful functor. Using the fact that lim V is defined as a limit, we obtain an algebra structure F(lim V) → lim V. As V creates limits, the resulting algebra is guaranteed to be initial in F-Alg.

But we want to define this limit in type theory. We do this using products and equalizers, as is well known from category theory. Explicitly, writing V : F-Alg → Set for the forgetful functor, we take the equalizer of the following two maps between products:

∏ (A : FAlg) V A  ⇉  ∏ (A B : FAlg) ∏ (h : Hom(A, B)) V B

given by:

α ↦ λ A B h. α B   and   α ↦ λ A B h. (V h)(α A).

The equalizer is, of course:

∑ (α : ∏ (A : FAlg) V A) ∏ (A B : FAlg) ∏ (h : Hom(A, B)) ((V h)(α A) = α B),

which inhabits Set. Impredicativity is crucial for this: it guarantees that the product over FAlg lands in U.

This method can be used to construct an initial algebra, and therefore a fixed point, for *any* endofunctor F! We won’t pursue this remarkable fact here, but only consider the case at hand, where the functor is F(X) = X + 1. Then the equalizer above becomes our definition of the type of natural numbers (so let us call it ℕ for the remainder). Observe that this encoding can be seen as a subtype of (a translation of) the System F encoding given at the start. Indeed, the indexing object FAlg of the product is equivalent to ∑ (X : Set) (X → X) × X, by the equivalence (X + 1 → X) ≃ (X → X) × X.
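Unfolding this, the refined encoding can be sketched (in our notation, and only up to the precise phrasing of the equalizing condition) as the subtype of the translated System F type consisting of the "natural" elements:

```latex
\[
  \mathbb{N} \;:\equiv\; \sum_{\alpha \,:\, \prod_{X : \mathsf{Set}} (X \to X) \to X \to X}\;
  \prod_{X, Y : \mathsf{Set}}\; \prod_{f : X \to Y}\; \prod_{s : X \to X}\; \prod_{t : Y \to Y}\; \prod_{z : X}\;
  (f \circ s = t \circ f) \to \big(f(\alpha_X\, s\, z) = \alpha_Y\, t\, (f\, z)\big).
\]
```

Here the condition says exactly that α commutes with every algebra homomorphism, which is the equalizing condition from above specialized along the equivalence of indexing objects.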

With this, we can define a zero element and a successor function; for instance, the zero element is (the translation of) the System F zero, λ X s z. z, equipped with a proof that it satisfies the equalizing condition (the successor function takes a little more work). We can also define a recursor rec(c₀, cs) : ℕ → C, given any set C with c₀ : C and cs : C → C. In other words, the introduction rules hold, and we can eliminate *into other sets*. Further, the β-rules hold definitionally – as expected, since they hold for the System F encodings.

Finally we come to the desired result, the η-rule for ℕ:

**Theorem.** Let C be a set with c₀ : C and cs : C → C. Moreover, let f : ℕ → C be such that:

f(0) = c₀ and f(succ n) = cs(f n)

for any n : ℕ. Then f = rec(c₀, cs).

Note that the η-rule holds *propositionally*. By Awodey, Gambino, and Sojakova we therefore also have, equivalently, the induction principle for ℕ, aka the dependent elimination rule. As a corollary, we can prove the universal property that any F-algebra homomorphism out of ℕ is propositionally equal to the appropriate recursor (as an F-algebra homomorphism). Again we emphasise the need for impredicativity: in the proof of the η-rule, we have to be able to plug ℕ itself into quantifiers over Set.

A semantic rendering of the above is that we have built a type that always determines a natural numbers object—whereas the System F encoding need not always do so (see Rummelhoff). In an appendix, we discuss a realizability semantics for the system we work in. Building more exotic types (that need not be sets) becomes more complicated; we leave this to future work.

https://github.com/mortberg/cubicaltt/tree/master/lectures

The lectures cover the main features of the system and don’t assume any prior knowledge of Homotopy Type Theory or Univalent Foundations. Only basic familiarity with type theory and proof assistants based on type theory is assumed. The lectures are in the form of cubicaltt files and can be loaded in the cubicaltt proof assistant.

cubicaltt is based on a novel type theory called Cubical Type Theory that provides new ways to reason about equality. Most notably it makes various extensionality principles, like function extensionality and Voevodsky’s univalence axiom, into theorems instead of axioms. This is done such that these principles have computational content, and in particular that we can transport structures between equivalent types and that these transports compute. This is different from when one postulates the univalence axiom in a proof assistant like Coq or Agda. If one just adds an axiom there is no way for Coq or Agda to know how it should compute, and one loses the good computational properties of type theory. In particular canonicity no longer holds and one can produce terms that are stuck (e.g. booleans that are neither true nor false but don’t reduce further). In other words this is like having a programming language in which one doesn’t know how to run the programs. So cubicaltt provides an operational semantics for Homotopy Type Theory and Univalent Foundations by giving a computational justification for the univalence axiom and (some) higher inductive types.

Cubical Type Theory has a model in cubical sets with lots of structure (symmetries, connections, diagonals) and is hence consistent. Furthermore, Simon Huber has proved that Cubical Type Theory satisfies canonicity for natural numbers, which gives a syntactic proof of consistency. Many of the features of the type theory are very much inspired by the model, but for more syntactically minded people I believe that it is definitely possible to use cubicaltt without knowing anything about the model. The lecture notes are hence written with almost no references to the model.

The cubicaltt system is based on Mini-TT:

"A simple type-theoretic language: Mini-TT" (2009), Thierry Coquand, Yoshiki Kinoshita, Bengt Nordström and Makoto Takeya. In "From Semantics to Computer Science: Essays in Honour of Gilles Kahn".

Mini-TT is a variant of Martin-Löf type theory with datatypes, and cubicaltt extends Mini-TT with:

- Path types
- Compositions
- Glue types
- Id types
- Some higher inductive types

The lectures cover the first 3 of these and hence correspond to sections 2-7 of:

"Cubical Type Theory: a constructive interpretation of the univalence axiom", Cyril Cohen, Thierry Coquand, Simon Huber and Anders Mörtberg. To appear in the post-proceedings of TYPES 2016. https://arxiv.org/abs/1611.02108

I should say that cubicaltt is mainly meant to be a prototype implementation of Cubical Type Theory in which we can do experiments; it was never our goal to implement a competitor to any of the more established proof assistants. Because of this there are no implicit arguments, type classes, proper universe management, termination checker, etc. Proofs in cubicaltt hence tend to get quite verbose, but it is definitely possible to do some fun things. See for example:

- binnat.ctt – Binary natural numbers and isomorphism to unary numbers. Example of data and program refinement by doing a proof for unary numbers by computation with binary numbers.
- setquot.ctt – Formalization of impredicative set quotients à la Voevodsky.
- hz.ctt – ℤ defined as an (impredicative set) quotient of `nat * nat`.
- category.ctt – Categories. Structure identity principle. Pullbacks. (Due to Rafaël Bocquet)
- csystem.ctt – Definition of C-systems and universe categories. Construction of a C-system from a universe category. (Due to Rafaël Bocquet)

For a complete list of all the examples see:

https://github.com/mortberg/cubicaltt/tree/master/examples

For those who cannot live without implicit arguments and other features of modern proof assistants there is now an experimental cubical mode shipped with the master branch of Agda. For installation instructions and examples see:

https://agda.readthedocs.io/en/latest/language/cubical.html

https://github.com/Saizan/cubical-demo

In this post I will give some examples of the main features of cubicaltt, but for a more comprehensive introduction see the lecture notes. As cubicaltt is an experimental prototype things can (and probably will) change in the future (e.g. see the paragraph on HITs below).

The basic type theory on which cubicaltt is based has Π and Σ types (with eta and surjective pairing), a universe `U`, datatypes, recursive definitions and mutually recursive definitions (in particular inductive-recursive definitions). Note that general datatypes and (mutually recursive) definitions are not part of the version of Cubical Type Theory in the paper.

Below is an example of how natural numbers and addition are defined:

data nat = zero | suc (n : nat)

add (m : nat) : nat -> nat = split
  zero -> m
  suc n -> suc (add m n)

If one loads this in the cubicaltt read-eval-print-loop one can compute things:

> add (suc zero) (suc zero)
EVAL: suc (suc zero)

The homotopical interpretation of equality tells us that we can think of an equality proof between a and b in a type A as a path between a and b in a space A. cubicaltt takes this literally and adds a primitive Path type that should be thought of as a function out of an abstract interval with fixed endpoints.

We call the elements of the interval names/directions/dimensions and typically use i, j, k to denote them. The elements of the interval are generated by the following grammar (where dim is a dimension like i, j, k…):

r,s := 0 | 1 | dim | - r | r /\ s | r \/ s

The endpoints are `0` and `1`; `-` corresponds to symmetry (r is mapped to `1-r`), while `/\` and `\/` are so-called “connections”. The connections can be thought of as mapping r and s to `min(r,s)` and `max(r,s)` respectively. As Path types behave like functions out of the interval there is both path abstraction and application (just like for function types). Reflexivity is written:

refl (A : U) (a : A) : Path A a a = <i> a

and corresponds to a constant path:

       <i> a
  a -----------> a

with the intuition that `<i> a` is a function `\(i : I) -> a`. However, for deep reasons the interval isn’t a type (as it isn’t fibrant), so we cannot write functions out of it directly; hence we have this special notation for path abstraction.

If we have a path from a to b then we can compute its left endpoint by applying it to `0`:

face0 (A : U) (a b : A) (p : Path A a b) : A = p @ 0

This is of course convertible to `a`. We can also reverse a path by using symmetry:

sym (A : U) (a b : A) (p : Path A a b) : Path A b a = <i> p @ -i

Assuming that some arguments could be made implicit, this satisfies the equality

sym (sym p) == p

judgmentally. This is one of many examples of equalities that hold judgmentally in cubicaltt but not in standard type theory, where sym would be defined by induction on p. This is useful for formalizing mathematics: for example, we get the judgmental equality `C^op^op == C` for a category `C` that cannot be obtained in standard type theory with the usual definition of category without using any tricks (see opposite.ctt for a formal proof of this).

We can also directly define `cong` (or `ap` or `mapOnPath`):

cong (A B : U) (f : A -> B) (a b : A) (p : Path A a b) : Path B (f a) (f b) = <i> f (p @ i)

Once again this satisfies some equations judgmentally that we don’t get in standard type theory where this would have been defined by induction on p:

cong id p == p
cong g (cong f p) == cong (g o f) p

Finally, the connections can be used to construct higher dimensional cubes from lower dimensional ones (e.g. squares from lines). If `p : Path A a b` then `<i j> p @ i /\ j` is the interior of the square:

             p
   a -----------------> b
   ^                    ^
   |                    |
 <j> a                  p
   |                    |
   a -----------------> a
           <i> a

Here i corresponds to the left-to-right dimension and j corresponds to the down-to-up dimension. To compute the left and right sides just plug in `i=0` and `i=1` in the term inside the square:

<j> p @ 0 /\ j = <j> p @ 0 = <j> a   (p is a path from a to b)
<j> p @ 1 /\ j = <j> p @ j = p       (using eta for Path types)

These give a short proof of contractibility of singletons (i.e. that the type `(x : A) * Path A a x` is contractible for all `a : A`); for details see the lecture notes or the paper. Because connections allow us to build higher dimensional cubes from lower dimensional ones they are extremely useful for reasoning about higher dimensional equality proofs.
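For instance, a sketch of how such a proof can look (the name `contrSingl` is ours; the lecture notes may phrase it differently), where the connection `p @ i /\ j` provides exactly the square connecting the two pairs:

```
contrSingl (A : U) (a b : A) (p : Path A a b) :
  Path ((x : A) * Path A a x) (a, <j> a) (b, p) =
  <i> (p @ i, <j> p @ i /\ j)
```

At `i=0` the right component is `<j> p @ 0 /\ j = <j> a`, and at `i=1` it is `<j> p @ j = p`, so the endpoints are as required.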

Another cool thing with Path types is that they allow us to give a direct proof of function extensionality by just swapping the path and lambda abstractions:

funExt (A B : U) (f g : A -> B) (p : (x : A) -> Path B (f x) (g x)) :
  Path (A -> B) f g = <i> \(a : A) -> (p a) @ i

To see that this makes sense we can compute the end-points:

(<i> \(a : A) -> (p a) @ i) @ 0 = \(a : A) -> (p a) @ 0
                                = \(a : A) -> f a
                                = f

and similarly for the right end-point. Note that the last equality follows from eta for Π types.

We have now seen that Path types allow us to define the constants of HoTT (like `cong` or `funExt`), but when doing proofs with Path types one rarely uses these constants explicitly. Instead one can directly prove things with the Path type primitives; for example, the proof of function extensionality for dependent functions is exactly the same as the one for non-dependent functions above.
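To illustrate, here is a sketch of the dependent version (the name `funExtDep` is ours):

```
funExtDep (A : U) (B : A -> U) (f g : (x : A) -> B x)
  (p : (x : A) -> Path (B x) (f x) (g x)) :
  Path ((x : A) -> B x) f g = <i> \(a : A) -> (p a) @ i
```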

We cannot yet prove the principle of path induction (or `J`) with what we have seen so far. In order to do this we need to be able to turn any path between types A and B into a function from A to B; in other words we need to be able to define `transport` (or `cast` or `coe`):

transport : Path U A B -> A -> B

The computation rule for the transport operation in cubicaltt is introduced by recursion on the type one is transporting in. This is quite different from traditional type theory where the identity type is introduced as an inductive family with one constructor (`refl`). A difficulty with this approach is that in order to be able to define transport in a Path type we need to keep track of the end-points of the Path type we are transporting in. To solve this we introduce a more general operation called composition.

Composition can be used to define the composition of paths (hence the name). Given paths `p : Path A a b` and `q : Path A b c` the composite is obtained by computing the missing top line of this open square:

   a                    c
   ^                    ^
   |                    |
 <j> a                  q
   |                    |
   a -----------------> b
          p @ i

In the drawing I’m assuming that we have a direction `i` in context that goes left-to-right and that j goes down-to-up (but j is not in context; rather it’s implicitly bound by the comp operation). As we are constructing a Path from a to c we can use i and put `p @ i` as the bottom. The code for this is as follows:

compPath (A : U) (a b c : A) (p : Path A a b) (q : Path A b c) : Path A a c =
  <i> comp (<_> A) (p @ i) [ (i = 0) -> <j> a , (i = 1) -> q ]

One way to summarize what composition gives us is the so-called “box principle”: “any open box has a lid”. Here “box” means (n+1)-dimensional cube and the lid is an n-dimensional cube. The comp operation takes as second argument the bottom of the box and then a list of sides. Note that the collection of sides doesn’t have to be exhaustive (as opposed to the original cubical set model), and one way to think about the sides is as a collection of constraints that the resulting lid has to satisfy. The first argument of comp is a path between types; in the above example this path is constant, but it doesn’t have to be. This is what allows us to define transport:

transport (A B : U) (p : Path U A B) (a : A) : B = comp p a []

Combining this with the contractibility of singletons we can easily prove the elimination principle for Path types. However the computation rule does not hold judgmentally. This is often not too much of a problem in practice as the Path types satisfy various judgmental equalities that normal Id types don’t. Also, having the possibility to reason about higher equalities directly using path types and compositions is often very convenient and leads to very nice and new ways to construct proofs about higher equalities in a geometric way by directly reasoning about higher dimensional cubes.
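For example, one way J can be sketched in this style (our own formulation, combining a connection square with a sideless composition; the lecture notes instead derive J from contractibility of singletons):

```
J (A : U) (a : A) (C : (x : A) -> Path A a x -> U)
  (d : C a (<j> a)) (x : A) (p : Path A a x) : C x p =
  comp (<i> C (p @ i) (<j> p @ i /\ j)) d []
```

Transporting d along the line of types `<i> C (p @ i) (<j> p @ i /\ j)` takes it from `C a (<j> a)` to `C x p`; the computation rule for J then holds only up to a path, as mentioned above.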

The composition operations are related to the filling operations (as in Kan simplicial sets) in the sense that the filling operations take an open box and compute a filler with the composition as one of its faces. One of the great things about cubical sets with connections is that we can reduce the filling of an open box to a composition. This is a difference compared to the original cubical set model, and it provides a significant simplification, as we only have to explain how to do compositions in open boxes and not also how to fill them.

The final main ingredient of cubicaltt is the Glue types. These are what allow us to have a direct algorithm for composition in the universe and to prove the univalence axiom. These types add the possibility to glue types along equivalences (i.e. maps with contractible fibers) onto another type. In particular this allows us to directly define one of the key ingredients of the univalence axiom:

ua (A B : U) (e : equiv A B) : Path U A B =
  <i> Glue B [ (i = 0) -> (A,e) , (i = 1) -> (B,idEquiv B) ]

This corresponds to the missing line at the top of:

   A                    B
   |                    |
   e                    idEquiv B
   |                    |
   V                    V
   B -----------------> B
           B

The sides of this square are equivalences while the bottom and top are lines in direction i (so this produces a path from A to B as desired).

We have formalized three proofs of the univalence axiom in cubicaltt:

- A very direct proof due to Simon Huber and me using higher dimensional glueing.
- The more conceptual proof from section 7.2 of the paper, in which we show that the `unglue` function is an equivalence (formalized by Fabian Ruch).
- A proof from `ua` and its computation rule (`uabeta`). Both of these constants are easy to define, and they are sufficient for the full univalence axiom, as noted in a post by Dan Licata on the HoTT google group.

All of these proofs can be found in the file univalence.ctt and are explained in the paper (proofs 1 and 3 are in Appendix B).

Note that one often doesn’t need full univalence to do interesting things, so just like for Path types it’s often easier to use the Glue primitives directly instead of invoking the full univalence axiom. For instance, if we have proved that negation is an involution for bool we can directly get a non-trivial path from bool to bool using ua (which is just a Glue):

notEq : Path U bool bool = ua bool bool notEquiv

And we can use this non-trivial equality to transport true and compute the result:

> transport notEq true
EVAL: false

This is all that the lectures cover, in the rest of this post I will discuss the two extensions of cubicaltt from the paper and their status in cubicaltt.

As pointed out above, the computation rule for Path types doesn’t hold judgmentally. Luckily there is a neat trick due to Andrew Swan that allows us to define a new type that is equivalent to `Path A a b` and for which the computation rule holds judgmentally; for details see section 9.1 of the paper. We call this type `Id A a b` as it corresponds to Martin-Löf’s identity type. We have implemented this in cubicaltt and proved the univalence axiom expressed exclusively using Id types; for details see idtypes.ctt.

For practical formalizations it is probably often more convenient to use the Path types directly as they have the nice primitives discussed above, but the fact that we can define Id types is very important from a theoretical point of view as it shows that cubicaltt with Id is really an extension of Martin-Löf type theory. Furthermore as we can prove univalence expressed using Id types we get that any proof in univalent type theory (MLTT extended with the univalence axiom) can be translated into cubicaltt.

The second extension to cubicaltt is HITs. We have a general syntax for adding these, and some of them work fine on the master branch; see for example:

- circle.ctt – The circle as a HIT. Computation of winding numbers.
- helix.ctt – The loop space of the circle is equal to Z.
- susp.ctt – Suspension and n-spheres.
- torsor.ctt – Torsors. Proof that S1 is equal to BZ, the classifying space of Z. (Due to Rafaël Bocquet)
- torus.ctt – Proof that Torus = S1 * S1 in only 100 loc. (Due to Dan Licata)

However there are various known issues with how the composition operations compute for recursive HITs (e.g. truncations) and HITs where the end-points contain function applications (e.g. pushouts). We have a very experimental branch that tries to resolve these issues called “hcomptrans”. This branch contains some new (currently undocumented) primitives that we are experimenting with and so far it seems like these are solving the various issues for the above two classes of more complicated HITs that don’t work on the master branch. So hopefully there will soon be a new cubical type theory with support for a large class of HITs.

That’s all I wanted to say about cubicaltt in this post. If someone plays around with the system and proves something cool don’t hesitate to file a pull request or file issues if you find some bugs.

https://arxiv.org/abs/1701.07538

The main result of that article is a type-theoretic replacement construction in a univalent universe that is closed under pushouts. Recall that in set theory, the replacement axiom asserts that if F is a class function, assigning to any set x a new set F(x), then the image {F(x) | x ∈ X} of any set X is again a set. In homotopy type theory we consider instead a map f : A → B from a small type A into a locally small type B, and our main result is the construction of a small type with the universal property of the image of f.

We say that a type is small if it is in U, and for the purpose of this blog post smallness and local smallness will always be with respect to U. Before we define local smallness, let us recall the following rephrasing of the ‘encode-decode method’, which we might also call the Licata–Shulman theorem:

**Theorem.** *Let A be a type with a : A, and let B be a type family over A with b : B(a). Then the following are equivalent:*

- *The total space ∑ (x : A) B(x) is contractible.*
- *The canonical fiberwise map (a = x) → B(x), defined by path induction by mapping refl to b, is a fiberwise equivalence.*

Note that this theorem follows from the fact that a fiberwise map is a fiberwise equivalence if and only if it induces an equivalence on total spaces. Since for path spaces the total space ∑ (x : A) (a = x) is contractible, we observe that *any* fiberwise equivalence (a = x) ≃ B(x) establishes contractibility of the total space, i.e. we might add the following equivalent statement to the theorem.

- *There (merely) exists a family of equivalences (a = x) ≃ B(x). In other words, B is in the connected component of the type family x ↦ (a = x).*

There are at least two equivalent ways of saying that a (possibly large) type A is locally small:

- For each x, y : A there is a type eq(x, y) in U and an equivalence (x = y) ≃ eq(x, y).
- For each x, y : A there is a type eq(x, y) in U; for each x : A there is a term r(x) : eq(x, x), and the canonical dependent function (x = y) → eq(x, y), defined by path induction by sending refl to r(x), is an equivalence.

Note that the data in the first structure is clearly a (large) mere proposition, because there can be at most one such type family eq, while the equivalences in the second structure are canonical with respect to the choice of the reflexivity terms r. To see that these are indeed equivalent, note that the family of equivalences in the first structure is a fiberwise equivalence, hence it induces an equivalence on total spaces. Therefore it follows that the total space ∑ (y : A) eq(x, y) is contractible. Thus we see by the Licata–Shulman theorem that the canonical fiberwise map (x = y) → eq(x, y) is a fiberwise equivalence. Furthermore, it is not hard to see that the given family of equivalences is equal to the canonical family of equivalences. There is slightly more to show, but let us keep up the pace and go on.

Examples of locally small types include any small type, any mere proposition regardless of its size, and the universe U itself, which is locally small by the univalence axiom; moreover, if A is small and B is locally small then the type A → B is locally small. Observe also that the univalence axiom follows if we assume the ‘uncanonical univalence axiom’, namely that there merely exists a family of equivalences (X = Y) ≃ (X ≃ Y). Thus we see that the slogan ‘identity of the universe is equivalent to equivalence’ actually implies univalence.

**Main Theorem.** *Let U be a univalent universe that is closed under pushouts. Suppose that A : U, that B is a locally small type, and let f : A → B. Then we can construct*

- *a small type im(f),*
- *a factorization A → im(f) → B of f,*

*such that im(f) → B is an embedding that satisfies the universal property of the image inclusion, namely that for any embedding m : X → B, of which the domain is possibly large, if f factors through m, then so does im(f) → B.*

Recall that a map factors through an embedding in at most one way. Writing f ≤ m for the mere proposition that f factors through an embedding m, we see that the embedding i : im(f) → B satisfies the universal property of the image inclusion precisely when the canonical map

(i ≤ m) → (f ≤ m)

is an equivalence.

Most of the paper is concerned with the construction with which we prove this theorem: the join construction. By repeatedly joining a map with itself, one eventually arrives at an embedding. The join f * g of two maps f : A → X and g : B → X is defined by first pulling back and then taking the pushout: form the pullback A ×_X B with its projections to A and B, take the pushout A *_X B :≡ A ⊔_{A ×_X B} B, and let f * g : A *_X B → X be the map induced by f and g.

In the case X = 1, the type A *_1 B is equivalent to the usual join A * B of types. Just like the join of types, the join of maps with a common codomain is associative and commutative, and it has a unit: the unique map from the empty type into X. The join of two embeddings is again an embedding. We show that the last statement can be strengthened: the maps f that are idempotent in a canonical way (i.e. the canonical morphism f → f * f in the slice category over X is an equivalence) are precisely the embeddings.

Below, I will indicate how we can use the above theorem to construct the n-truncations, for any n, on any univalent universe that is closed under pushouts. Other applications include the construction of set-quotients and of the Rezk completion, since these are both constructed as the image of a Yoneda embedding, and it also follows that the univalent completion of any dependent type can be constructed as a type in U, without needing to resort to more exotic higher inductive types. In particular, any connected component of the universe is equivalent to a small type.

**Theorem.** *Let U be a univalent universe that is closed under pushouts. Then we can define, for any n ≥ −2,*

- *an n-truncation operation ‖−‖ₙ : U → U,*
- *a map η : X → ‖X‖ₙ for each X : U,*

*such that for any X : U, the type ‖X‖ₙ is n-truncated and satisfies the (dependent) universal property of n-truncation, namely that for every family P of possibly large types over ‖X‖ₙ such that each P(x) is n-truncated, the canonical map*

(∏ (x : ‖X‖ₙ) P(x)) → (∏ (x : X) P(η(x)))

given by precomposition by η is an equivalence.

*Construction.* The proof is by induction on n. The case n = −2 is trivial (take ‖X‖₋₂ :≡ 1). For the induction hypothesis we assume an n-truncation operation with the structure described in the statement of the theorem.

First, we define f : X → (X → U) by f(x) :≡ λy. ‖x = y‖ₙ. As we have seen, the universe is locally small, and therefore the type X → U is locally small. Therefore we can define

‖X‖ₙ₊₁ :≡ im(f).

For the proof that ‖X‖ₙ₊₁ is indeed (n+1)-truncated and satisfies the universal property of the (n+1)-truncation, we refer to the article.


In this blog post, we work with the full repertoire of HoTT axioms, including univalence, propositional truncations, and pushouts. For the paper, we have carefully analysed which assumptions are used in which theorem, if any.

Parametricity is a property of the terms of a language. If your language only has parametric terms, then polymorphic functions have to be invariant under the type parameter. So in MLTT, the only term inhabiting the type ∏ (X : U) X → X of polymorphic endomaps is the polymorphic identity λ X. λ x. x.

In univalent foundations, we cannot prove *internally* that every term is parametric. This is because excluded middle is not parametric (exercise 6.9 of the HoTT book tells us that, assuming LEM, we can define a polymorphic endomap that flips the booleans), but there exist classical models of univalent foundations. So if we *could* prove this internally, excluded middle would be false, and thus the classical models would be invalid.

In the abovementioned blog post, we observed that exercise 6.9 of the HoTT book has a converse: if there is a term f : ∏ (X : U) X → X such that f(2) is the flip map on the type 2 of booleans, then excluded middle holds. In the paper on arXiv, we have a stronger result:

**Theorem.** There exist f : ∏ (X : U) X → X, a type B, and a point b : B with f(B)(b) ≠ b, if and only if excluded middle holds.

Notice that there are no requirements on the type or the point in question. We have also applied the technique used for this theorem in other scenarios, for example:

**Theorem.** There exist and types and points with if and only if *weak* excluded middle holds.

The results in the paper illustrate that different violations of parametricity have different proof-theoretic strength: some violations are impossible, while others imply varying amounts of excluded middle.

In contrast to parametricity, which proves that terms of some language necessarily have some properties, it is currently unknown if non-identity automorphisms of the universe are definable in univalent foundations. But some believe that this may not be the case.

In the presence of excluded middle, we *can* define non-identity automorphisms of the universe. Given a type X, we use excluded middle to decide if X is a proposition. If it is, we map X to its negation ¬X, and otherwise we map X to itself. Assuming excluded middle, we have ¬¬P = P for any proposition P, so this is an automorphism.
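Schematically (our notation; note that deciding whether X is a proposition is itself an application of excluded middle, to the proposition isProp(X)):

```latex
\[
  G(X) \;:\equiv\;
  \begin{cases}
    \neg X & \text{if } \mathsf{isProp}(X)\\
    X & \text{otherwise,}
  \end{cases}
  \qquad\text{with}\qquad
  G(G(X)) = X,
\]
```

since ¬X is again a proposition, and ¬¬X = X for propositions under excluded middle.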

The above automorphism swaps the empty type with the unit type and leaves all other types unchanged. More generally, assuming excluded middle we can swap any two types with equivalent automorphism ∞-groups, since in that case the corresponding connected components of the universe are equivalent. Still more generally, we can permute arbitrarily any family of types all having the same automorphism ∞-group.

The simplest case of this is when all the types are *rigid*, i.e. have trivial automorphism ∞-group. The types $\mathbf{0}$ and $\mathbf{1}$ are both rigid, and at least with excluded middle no other sets are; but there can be rigid higher types. For instance, if $G$ is a group that is a set (i.e. a 1-group), then its Eilenberg–Mac Lane space $K(G,1)$ is a 1-type, and its automorphism ∞-group is a 1-type whose $\pi_0$ is the outer automorphism group of $G$ and whose $\pi_1$ is the center of $G$. Thus, if $G$ has trivial outer automorphism group and trivial center, then $K(G,1)$ is rigid. Such groups are not uncommon, including for instance the symmetric group $S_n$ for any $n \neq 2, 6$. Thus, assuming excluded middle we can permute these arbitrarily, producing uncountably many automorphisms of the universe.

In the converse direction, we recorded the following.

**Theorem.** If there is an automorphism of the universe that maps some inhabited type to the empty type, then excluded middle holds.

**Corollary.** If there is an automorphism $f$ of the universe with $f \neq \mathrm{id}$, then the double negation of the law of excluded middle holds.

This corollary relates to an unclaimed prize: if from an arbitrary equivalence $f : \mathcal{U} \simeq \mathcal{U}$ such that $f(X) \neq X$ for a particular type $X$ you get a non-provable consequence of excluded middle, then you win a certain number of beers. So this corollary wins you 0 beers. Although perhaps sober, we think this is an achievement worth recording.

Using this corollary, in turn, we can win -many beers, where is excluded middle for propositions in the universe . If we have . Suppose is an automorphism of with , then . For suppose that , and hence . So by the corollary, we obtain . But implies by cumulativity, so also holds, contradicting our assumption that .

To date no one has been able to win 1 beer.


The MRC program nurtures early-career mathematicians—those who are close to finishing their doctorates or have recently finished—and provides them with opportunities to build social and collaborative networks to inspire and sustain each other in their work.

MRCs are held in the “breathtaking mountain setting” of Snowbird Resort in Utah. The HoTT MRC will be organized by Dan Christensen, Chris Kapulkin, Dan Licata, Emily Riehl, and myself. From the description:

The goal of this workshop is to bring together advanced graduate students and postdocs having some background in one (or more) areas such as algebraic topology, category theory, mathematical logic, or computer science, with the goal of learning how these areas come together in homotopy type theory, and working together to prove new results. Basic knowledge of just one of these areas will be sufficient to be a successful participant.

So if you are within a few years of your Ph.D. on either side, and are interested in HoTT, please consider applying! I think this has the potential to be a really exciting week, and a really great way to “jump-start” a research program in HoTT or related to it. Even though the application deadline isn’t until March 1, we would appreciate it for planning purposes if interested folks could apply as soon as possible. (The majority of places are for U.S. citizens or those affiliated with U.S. institutions, though there may be space for a few international participants. Women and underrepresented minorities are especially encouraged to apply.)

There are a lot of things that might happen at this workshop. There is a general list of topics posted with the description, and as the date approaches we’ll make further plans depending on our participants and their backgrounds (which is one of the reasons we want you to apply now). One topic that I think is a good candidate for quick progress is synthetic homotopy theory, where I suspect there’s still a lot of low-hanging fruit ready to be picked by collaborations between people familiar with classical homotopy theory and people with more experience thinking type-theoretically. Another topic that’s less of a sure thing, but that I am really hoping to get more people working on, is the problem of semantics for univalence: although I’ve about exhausted my own ideas in this direction, I still have hopes that there are model categories with strict univalent universes out there that present all $(\infty,1)$-toposes, which might be found by fresh eyes. And, as you can see, there are plenty of other potential topics as well.

Feel free to ask any questions of me or any of the organizers.

SQL is the lingua franca for retrieving structured data. Existing semantics for SQL, however, either do not model crucial features of the language (e.g., relational algebra lacks bag semantics, correlated subqueries, and aggregation), or make it hard to formally reason about SQL query rewrites (e.g., the SQL standard’s English is too informal). This post focuses on the ways that HoTT concepts (e.g., Homotopy Types, the Univalence Axiom, and Truncation) enabled us to develop HoTTSQL — a new SQL semantics that makes it easy to formally reason about SQL query rewrites. Our paper also details the rich set of SQL features supported by HoTTSQL.

You can download this blog post’s source (implemented in Coq using the HoTT library). Learn more about HoTTSQL by visiting our website.

The basic datatype in SQL is a relation, which is a *bag* (i.e., multiset) of tuples with the same given schema. You can think of a tuple’s schema as being like a variable’s type in a programming language. We formalize a bag of some type A as a function that maps every element of A to a type. The type’s cardinality indicates how many times the element appears in the bag.

```coq
Definition Bag A := A -> Type.
```

For example, the bag numbers = {| 7, 42, 7 |} can be represented as:

```coq
Definition numbers : Bag nat :=
  fun n => match n with
           | 7 => Bool
           | 42 => Unit
           | _ => Empty
           end.
```

A SQL query maps one or more input relations to an output relation. We can implement SQL queries as operations on bags. For example, a disjoint union query in SQL can be implemented as a function that takes two input bags r1 and r2, and returns a bag in which every tuple a appears r1 a + r2 a times. Note that the cardinality of the sum type r1 a + r2 a is equal to the sum of the cardinalities of r1 a and r2 a.

```coq
Definition bagUnion {A} (r1 r2 : Bag A) : Bag A :=
  fun (a : A) => r1 a + r2 a.
```
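As a quick usage example (ours, not from the paper), unioning numbers with itself doubles every multiplicity:

```coq
(* In numbers, 7 has multiplicity |Bool| = 2 and 42 has |Unit| = 1.
   In the union of numbers with itself, 7 is therefore mapped to
   Bool + Bool (cardinality 4) and 42 to Unit + Unit (cardinality 2). *)
Definition doubled : Bag nat := bagUnion numbers numbers.
```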

Most database systems contain a query optimizer that applies SQL rewrite rules to improve query performance. We can verify SQL rewrite rules by proving the equality of two bags. For example, we can show that the union of r1 and r2 is equal to the union of r2 and r1, using functional extensionality (by_extensionality), the univalence axiom (path_universe_uncurried), and symmetry of the sum type (equiv_sum_symm).

```coq
Lemma bag_union_symm {A} (r1 r2 : Bag A) :
  bagUnion r1 r2 = bagUnion r2 r1.
Proof.
  unfold bagUnion.
  by_extensionality a.
  (* r1 a + r2 a = r2 a + r1 a *)
  apply path_universe_uncurried.
  (* r1 a + r2 a <~> r2 a + r1 a *)
  apply equiv_sum_symm.
Qed.
```
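Other semi-ring laws lift to bags in exactly the same style. For instance, associativity of union (a sketch of ours, not from the paper; the name and orientation of the HoTT library's sum-associativity equivalence may differ):

```coq
(* Union of bags is associative, by associativity of the sum type. *)
Lemma bag_union_assoc {A} (r1 r2 r3 : Bag A) :
  bagUnion (bagUnion r1 r2) r3 = bagUnion r1 (bagUnion r2 r3).
Proof.
  unfold bagUnion.
  by_extensionality a.
  (* (r1 a + r2 a) + r3 a = r1 a + (r2 a + r3 a) *)
  apply path_universe_uncurried.
  apply equiv_sum_assoc.
Qed.
```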

Note that + and * on homotopy types are *like* the operations of a commutative semi-ring, Empty and Unit are *like* the identity elements of a commutative semi-ring, and there are paths witnessing the commutative semi-ring axioms for these operations and identity elements. We use the terminology *like* here, because algebraic structures over higher-dimensional types in HoTT are usually defined using coherence conditions between the equalities witnessing the structure’s axioms, which we have not yet attempted to prove.

Many SQL rewrite rules simplify to an equation between expressions built from the operators of this semi-ring (e.g., r1 a + r2 a = r2 a + r1 a above), and could thus potentially be solved or simplified using a ring tactic. Unfortunately, Coq’s ring tactic is not yet ported to the HoTT library. Porting ring may dramatically simplify many of our proofs (anyone interested in porting the ring tactic? Let us know).

It is reasonable to assume that SQL relations are bags that map tuples only to 0-truncated types (types with no higher homotopical information), because real-world databases’ input relations only contain tuples with finite multiplicity (Fin n is 0-truncated), and because SQL queries only use type operators that preserve 0-truncation. However, HoTTSQL does not require this assumption, and as future work, it may be interesting to understand what the “cardinality” of a type with higher homotopical information means.
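The informal notion of cardinality used here can at least be written down as a type. A minimal sketch (the name count is ours, not from the paper):

```coq
(* The "cardinality" of a bag, as a type: pairs of an element a
   together with a proof-relevant witness of its multiplicity r a. *)
Definition count {A} (r : Bag A) : Type := { a : A & r a }.

(* For numbers = {| 7, 42, 7 |}, count numbers is equivalent to a
   three-element type: two witnesses for 7 and one for 42. *)
```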

How to model bags is a fundamental design decision for mechanizing formal proofs of SQL query equivalences. Our formalization of bags is unconventional but effective for reasoning about SQL query rewrites, as we will see.
Previous work has modeled bags as *lists* (e.g., as done by Malecha et al.), where SQL queries are recursive functions over input lists, and two bags are equal iff their underlying lists are equal up to element reordering. Proving two queries equal thus requires induction on input lists (including coming up with induction hypotheses) and reasoning about list permutations. In contrast, by modeling bags as functions from tuples to types, proving two queries equal just requires proving the equality of two HoTT types.

In the database research community, prior work has modeled bags as *functions to natural numbers* (e.g., as done by Green et al.). Using this approach, one cannot define the potentially infinite sum ∑ a, r a that counts the number of elements in a bag r. This matters because a basic operation in SQL, projection, requires counting all tuples in a bag that match a certain predicate. In contrast, by modeling bags as functions from tuples to types, we can count the number of elements in a bag using a sigma type, where the cardinality of ∑ a, r a is equal to the sum of the cardinalities of r a over all a.

## Schemas

Traditionally, a relation is modeled as a bag of n-ary tuples, and a relation’s *schema* describes both how many elements there are in each tuple (i.e., n) and the type of each element. Thus, a schema is formalized as a list of types.

In HoTTSQL, a relation is modeled as a bag of nested pairs (nested binary-tuples), and a relation’s schema both describes the nesting of the pairs and the types of the leaf pairs. In HoTTSQL, a schema is thus formalized as a binary tree, where each node stores only its child nodes, and each leaf stores a type. Our formalization of schemas as trees and tuples as nested pairs is unconventional. We will see later how this choice simplifies reasoning.

```coq
Inductive Schema :=
  | node (s1 s2 : Schema)
  | leaf (T : Type).
```

For example, a schema for people (with a name, age, and employment status) can be expressed as Person : Schema := node (leaf Name) (node (leaf Nat) (leaf Bool)).

We formalize a *tuple* as a function Tuple that takes a schema s and returns a nested pair which matches the tree structure and types of s.

```coq
Fixpoint Tuple (s : Schema) : Type :=
  match s with
  | node s1 s2 => Tuple s1 * Tuple s2
  | leaf T => T
  end.
```

For example, Tuple Person = Name * (Nat * Bool) and (Alice, (23, false)) : Tuple Person.

Finally, we formalize a *relation* as a bag of tuples that match a given schema s.

```coq
Definition Relation (s : Schema) := Bag (Tuple s).
```
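As a toy instance (ours, not from the paper), a relation over the Person schema above containing a single tuple exactly once can be given by an identity type; here Alice : Name is an assumed inhabitant:

```coq
(* A one-row relation: the multiplicity of t is the identity type
   t = alice, which has cardinality 1 when t is alice and 0 otherwise,
   since Tuple Person is a set. *)
Definition alice : Tuple Person := (Alice, (23, false)).
Definition people : Relation Person :=
  fun t => (t = alice).
```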

Recall that a SQL query maps one or more input relations to an output relation, and that we can implement SQL queries with operations on bags. In this section, we incrementally introduce various SQL queries, and describe their semantics in terms of bags.

The following subset of the SQL language supports unioning relations, and selecting (i.e., filtering) tuples in a relation.

```coq
Inductive SQL : Schema -> Type :=
  | union {s} : SQL s -> SQL s -> SQL s
  | select {s} : Pred s -> SQL s -> SQL s
  (* ... *)
  .
```

```coq
Fixpoint denoteSQL {s} (q : SQL s) : Relation s :=
  match q with
  | union _ q1 q2 => fun t => denoteSQL q1 t + denoteSQL q2 t
  | select _ b q => fun t => denotePred b t * denoteSQL q t
  (* ... *)
  end.
```

The query select b q removes all the tuples from the relation returned by the query q for which the predicate b does not hold. We denote the predicate as a function denotePred b : Tuple s -> Type that maps a tuple to a (-1)-truncated type: denotePred b t is Unit if the predicate holds for t, and Empty otherwise. The denotation multiplies the relation with the predicate to implement the semantics of selection (i.e., n * Unit = n and n * Empty = Empty, where n is the multiplicity of the input tuple t).

To syntactically resemble SQL, we write q1 UNION ALL q2 for union q1 q2, q WHERE b for select b q, and SELECT * FROM q for q. We write ⟦q⟧ for the denotation denoteSQL q of a query, and ⟦b⟧ for the denotation denotePred b of a predicate.

To prove that two SQL queries are equal, one has to prove that their two denotations are equal, i.e., that two bags returned by the two queries are equal, given any input relation(s). The following example shows how we can prove that selection distributes over union, by reducing it to showing the distributivity of * over + (lemma sum_distrib_l).

```coq
Lemma proj_union_distr s (q1 q2 : SQL s) (p : Pred s) :
  ⟦ SELECT * FROM (q1 UNION ALL q2) WHERE p ⟧ =
  ⟦ (SELECT * FROM q1 WHERE p) UNION ALL
    (SELECT * FROM q2 WHERE p) ⟧.
Proof.
  simpl.
  by_extensionality t.
  (* ⟦p⟧ t * (⟦q1⟧ t + ⟦q2⟧ t) = ⟦p⟧ t * ⟦q1⟧ t + ⟦p⟧ t * ⟦q2⟧ t *)
  apply path_universe_uncurried.
  apply sum_distrib_l.
Qed.
```

So far, we have seen the use of homotopy types to model SQL relations, and have seen the use of the univalence axiom to prove SQL rewrite rules. We now show the use of truncation to model the removal of duplicates in SQL relations. To show an example of duplicate removal in SQL, we first have to extend our semantics of the SQL language with more features.

```coq
Inductive Proj : Schema -> Schema -> Type :=
  | left {s s'} : Proj (node s s') s
  | right {s s'} : Proj (node s' s) s
  (* ... *)
  .
```

```coq
Inductive SQL : Schema -> Type :=
  (* ... *)
  | distinct {s} : SQL s -> SQL s
  | product {s1 s2} : SQL s1 -> SQL s2 -> SQL (node s1 s2)
  | project {s s'} : Proj s s' -> SQL s -> SQL s'
  (* ... *)
  .
```

```coq
Fixpoint denoteProj {s s'} (p : Proj s s') : Tuple s -> Tuple s' :=
  match p with
  | left _ _ => fst
  | right _ _ => snd
  (* ... *)
  end.
```

```coq
Fixpoint denoteSQL {s} (q : SQL s) : Relation s :=
  match q with
  (* ... *)
  | distinct _ q => fun t => ║ denoteSQL q t ║
  | product _ _ q1 q2 => fun t => denoteSQL q1 (fst t) *
                                  denoteSQL q2 (snd t)
  | project _ _ p q => fun t' => ∑ t, denoteSQL q t *
                                      (denoteProj p t = t')
  (* ... *)
  end.
```

The query distinct q removes duplicate tuples in the relation returned by the query q using the (-1)-truncation function ║ q ║ (see HoTT book, chapter 3.7).

The query product q1 q2 creates the cartesian product of q1 and q2, i.e., it returns a bag that maps every tuple consisting of two tuples t1 and t2 to the number of times t1 appears in q1 multiplied by the number of times t2 appears in q2.

The query project p q projects elements from each tuple contained in the query q. The projection is defined by p, and is denoted as a function that takes a tuple of some schema s and returns a new tuple of some schema s’. For example, left is the projection that takes a tuple and returns the tuple’s first element. We assume that tuples have no higher homotopical information, and that equality between tuples is thus (-1)-truncated.

Like before, we write DISTINCT q for distinct q, FROM q1, q2 for product q1 q2, and SELECT p q for project p q. We write ⟦p⟧ for the denotation of a projection denoteProj p.

Projection of products is the reason HoTTSQL must model tuples as nested pairs. If schemas were flat n-ary tuples, the left projection would not know which elements of the tuple formerly belonged to the left input relation of the product, and could thus not project them (feel free to contact us if you have ideas on how to better represent schemas).

Projection requires summing over all tuples in a bag, as multiple tuples may be merged into one. This sum is over an infinite domain (all tuples) and thus cannot generally be implemented with natural numbers. Implementing it using the ∑ (sigma) type is however trivial.

Equipped with these additional features, we can now prove the following rewrite rule.

```coq
Lemma self_join s (q : SQL s) :
  ⟦ DISTINCT SELECT left FROM q, q ⟧ =
  ⟦ DISTINCT SELECT * FROM q ⟧.
```

The two queries are equal, because the left query performs a redundant self-join. Powerful database query optimizations, such as magic sets rewrites and conjunctive query equivalences, are based on the elimination of redundant self-joins.

To prove the equivalence of any two (-1)-truncated types ║ q1 ║ and ║ q2 ║, it suffices to prove the bi-implication q1 <-> q2 (lemma equiv_iff_trunc). This is one of the cases where concepts from HoTT simplify formal reasoning in a big way. Instead of having to apply a series of equational rewriting rules (which is complicated by the fact that they need to be applied under the variable bindings of Σ), we can prove the goal using deductive reasoning.

```coq
Proof.
  simpl.
  by_extensionality t.
  (* ║ ∑ t', ⟦q⟧ (fst t') * ⟦q⟧ (snd t') * (fst t' = t) ║ =
     ║ ⟦q⟧ t ║ *)
  apply equiv_iff_trunc.
  split.
  - (* ∃ t', ⟦q⟧ (fst t') ∧ ⟦q⟧ (snd t') ∧ (fst t' = t) → ⟦q⟧ t *)
    intros [[t1 t2] [[h1 h2] eq]].
    destruct eq.
    apply h1.
  - (* ⟦q⟧ t → ∃ t', ⟦q⟧ (fst t') ∧ ⟦q⟧ (snd t') ∧ (fst t' = t) *)
    intros h.
    exists (t, t).
    (* ⟦q⟧ t ∧ ⟦q⟧ t ∧ (t = t) *)
    split; [split|].
    + apply h.
    + apply h.
    + reflexivity.
```

The queries in the above rewrite rule fall into the well-studied category of conjunctive queries, for which equality is decidable (while equality between arbitrary SQL queries is undecidable). Using Coq’s support for automating deductive reasoning (with Ltac), we have implemented a decision procedure for the equality of conjunctive queries; it’s only 40 lines of code (see this post’s source for details). The aforementioned rewrite rule can thus be proven in one line of Coq code.

```coq
Restart.
  conjuctiveQueryProof.
Qed.
```

We have shown how concepts from HoTT have enabled us to develop HoTTSQL, a SQL semantics that makes it easy to formally reason about SQL query rewrites.

We model bags of type A as a function A -> Type. Bags can be proven equal using the univalence axiom. In contrast to models of bags as list A, we require no inductive or permutation proofs. In contrast to models of bags as A -> nat, we can count the number of elements in any bag.

Duplicate elimination in SQL is implemented using (-1)-truncation, which leads to clean and easily automatable deductive proofs. Many of our proofs could be further simplified with a ring tactic for the semi-ring of 0-truncated types.

Visit our website to access our source code, learn how we denote other advanced SQL features such as correlated subqueries, aggregation, and advanced projections, and see how we proved complex rewrite rules (e.g., magic set rewrites).

Contact us if you have any questions, feedback, or ideas on how to improve HoTTSQL (e.g., how to use more concepts from HoTT to extend it).

My dissertation was on the topic of combinatorial species, and specifically on the idea of using species as a foundation for thinking about generalized notions of algebraic data types. (Species are sort of dual to containers; I think both have interesting and complementary things to offer in this space.) I didn’t really end up getting very far into practicalities, instead getting sucked into a bunch of more foundational issues.

To use species as a basis for computational things, I wanted to first “port” the definition from traditional, set-theory-based, classical mathematics into a constructive type theory. HoTT came along at just the right time, and seems to provide exactly the right framework for thinking about a constructive encoding of combinatorial species.

For those who are familiar with HoTT, this post will contain nothing all that new. But I hope it can serve as a nice example of an “application” of HoTT. (At least, it’s more applied than research in HoTT itself.)

Traditionally, a species is defined as a functor $F : \mathbb{B} \to \mathbf{FinSet}$, where $\mathbb{B}$ is the groupoid of finite sets and bijections, and $\mathbf{FinSet}$ is the category of finite sets and (total) functions. Intuitively, we can think of a species as mapping finite sets of “labels” to finite sets of “structures” built from those labels. For example, the species of linear orderings (*i.e.* lists) maps a finite set of $n$ labels to the size-$n!$ set of all possible linear orderings of those labels. Functoriality ensures that the specific identity of the labels does not matter—we can always coherently relabel things.

So what happens when we try to define species inside a constructive type theory? The crucial piece is $\mathbb{B}$: the thing that makes species interesting is that they have built into them a notion of bijective relabelling, and this is encoded by the groupoid $\mathbb{B}$. The first problem we run into is how to encode the notion of a *finite* set, since the notion of finiteness is nontrivial in a constructive setting.

One might well ask why we even care about finiteness in the first place. Why not just use the groupoid of *all* sets and bijections? To be honest, I have asked myself this question many times, and I still don’t feel as though I have an entirely satisfactory answer. But what it seems to come down to is the fact that species can be seen as a categorification of generating functions. Generating functions over a semiring $R$ can be represented by functions $\mathbb{N} \to R$, that is, each natural number maps to some coefficient in $R$; each natural number, categorified, corresponds to (an equivalence class of) *finite* sets. Finite label sets are also important insofar as our goal is to actually use species as a basis for *computation*. In a computational setting, one often wants to be able to do things like enumerate all labels (*e.g.* in order to iterate through them, to do something like a map or fold). It will therefore be important that our encoding of finiteness actually has some computational content that we can use to enumerate labels.

Our first attempt might be to say that a finite set will be encoded as a type $A$ together with a bijection between $A$ and a canonical finite set of a particular natural number size. That is, assuming standard inductively defined types $\mathbb{N}$ and $\mathsf{Fin} : \mathbb{N} \to \mathcal{U}$, we might consider $\sum_{(A : \mathcal{U})} \sum_{(n : \mathbb{N})} \big(A \simeq \mathsf{Fin}\,n\big)$.

However, this is unsatisfactory, since defining a suitable notion of bijections/isomorphisms between such finite sets is tricky. Since $\mathbb{B}$ is supposed to be a groupoid, we are naturally led to try using equalities (*i.e.* paths) as morphisms—but this does not work with the above definition of finite sets. In $\mathbb{B}$, there are supposed to be $n!$ different morphisms between any two sets of size $n$. However, given any two same-size inhabitants of the above type, there is only *one* path between them—intuitively, this is because paths between $\Sigma$-types correspond to tuples of paths relating the components pointwise, and such paths must therefore preserve the *particular* equivalence with $\mathsf{Fin}\,n$. The only bijection which is allowed is the one which sends the element related to $i$ in one set to the element related to $i$ in the other, for each $i : \mathsf{Fin}\,n$.

So elements of the above type are not just finite sets, they are finite sets *with a total order*, and paths between them must be order-preserving; this is too restrictive. (However, this type is not without interest, and can be used to build a counterpart to L-species. In fact, I think this is exactly the right setting in which to understand the relationship between species and L-species, and more generally the difference between isomorphism and *equipotence* of species; there is more on this in my dissertation.)

We can fix things using propositional truncation. In particular, we define $\mathbb{B} :\equiv \sum_{(A : \mathcal{U})} \big\| \sum_{(n : \mathbb{N})} A \simeq \mathsf{Fin}\,n \big\|$.

That is, a “finite set” is a type $A$ together with some *hidden* evidence that $A$ is equivalent to $\mathsf{Fin}\,n$ for some $n$. (I will sometimes abuse notation and write $A$ for an inhabitant of $\mathbb{B}$ instead of the pair.) A few observations:

- First, we can pull the size $n$ out of the propositional truncation, that is, $\big\|\sum_{(n:\mathbb{N})} A \simeq \mathsf{Fin}\,n\big\| \simeq \sum_{(n:\mathbb{N})} \big\|A \simeq \mathsf{Fin}\,n\big\|$. Intuitively, this is because if a set is finite, there is only one possible size it can have, so the evidence that it has that size is actually a mere proposition.
- More generally, I mentioned previously that we sometimes want to use the computational evidence for the finiteness of a set of labels, *e.g.* enumerating the labels in order to do things like maps and folds. It may seem at first glance that we cannot do this, since the computational evidence is now hidden inside a propositional truncation. But actually, things are exactly the way they should be: the point is that we can use the bijection hidden in the propositional truncation *as long as the result does not depend on the particular bijection we find there*. For example, we cannot write a function which returns the element of $A$ corresponding to $0 : \mathsf{Fin}\,n$, since this reveals something about the underlying bijection; but we can write a function which finds the smallest element of $A$ (with respect to some linear ordering), by iterating through all the elements of $A$ and taking the minimum.
- It is not hard to show that if $(A, p) : \mathbb{B}$, then $A$ is a set (*i.e.* a 0-type) with decidable equality, since $A$ is merely equivalent to the 0-type $\mathsf{Fin}\,n$. Likewise, $\mathbb{B}$ itself is a 1-type.
- Finally, note that paths between inhabitants of $\mathbb{B}$ now do exactly what we want: a path $(A, p) = (B, q)$ is really just a path $A = B$ between 0-types, that is, a bijection, since the truncated components are trivially equal.

We can now define species in HoTT as functions of type $\mathbb{B} \to \mathcal{U}$. The main reason I think this is the Right Definition™ of species in HoTT is that functoriality comes for free! When defining species in set theory, one must say “a species is a functor, *i.e.* a pair of mappings satisfying such-and-such properties”. When constructing a particular species one must explicitly demonstrate the functoriality properties; since the mappings are just functions on sets, it is quite possible to write down mappings which are not functorial. But in HoTT, all functions are functorial with respect to paths, and we are using paths to represent the morphisms in $\mathbb{B}$, so any function of type $\mathbb{B} \to \mathcal{U}$ automatically has the right functoriality properties—it is literally impossible to write down an invalid species. Actually, in my dissertation I define species as functors between certain categories built from $\mathbb{B}$ and $\mathcal{U}$, but the point is that any function $\mathbb{B} \to \mathcal{U}$ can be automatically lifted to such a functor.
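As a tiny concrete illustration (ours, not from the dissertation), consider the species of *elements*, which assigns to each finite set of labels the labels themselves; its relabelling action is recovered by transport:

$$E : \mathbb{B} \to \mathcal{U}, \qquad E(A, p) :\equiv A,$$

and for any path $\sigma : (A, p) = (B, q)$ in $\mathbb{B}$, transport gives $\mathsf{transport}^{E}(\sigma) : A \to B$, which is exactly the relabelling bijection. No functoriality proof is needed; it is automatic.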

Here’s another nice thing about the theory of species in HoTT. In HoTT, coends whose index category is a groupoid are just plain $\Sigma$-types. That is, if $\mathcal{B}$ is a groupoid, $\mathcal{C}$ a category, and $T : \mathcal{B}^{\mathrm{op}} \times \mathcal{B} \to \mathcal{C}$ a functor, then $\int^{B} T(B,B) \simeq \sum_{(B : \mathcal{B})} T(B,B)$. In set theory, this coend would be a *quotient* of the corresponding $\Sigma$-type, but in HoTT the isomorphisms of $\mathcal{B}$ are required to correspond to paths, which automatically induce paths over the $\Sigma$-type which correspond to the necessary quotient. Put another way, we can define coends in HoTT as a certain HIT, but in the case that $\mathcal{B}$ is a groupoid we already get all the paths given by the higher path constructor anyway, so it is redundant. So, what does this have to do with species, I hear you ask? Well, several species constructions involve coends (most notably partitional product); since species are functors from a groupoid, the definitions of these constructions in HoTT are particularly simple. We again get the right thing essentially “for free”.

There’s lots more in my dissertation, of course, but these are a few of the key ideas specifically relating species and HoTT. I am far from being an expert on either, but am happy to entertain comments, questions, etc. I can also point you to the right section of my dissertation if you’re interested in more detail about anything I mentioned above.

In a typical functional programming career, at some point one encounters the notions of parametricity and free theorems.

Parametricity can be used to answer questions such as: is every function

```
f : forall x. x -> x
```

equal to the identity function? Parametricity tells us that this is true for System F.

However, this is a metatheoretical statement. Parametricity gives properties about the *terms* of a language, rather than proving *internally* that certain elements satisfy some properties.

So what can we prove internally about a polymorphic function $f : \prod_{X : \mathcal{U}} X \to X$?

In particular, we can see that internal proofs (claiming that $f_X$ must be the identity function for every type $X$) *cannot* exist: exercise 6.9 of the HoTT book tells us that, assuming LEM, we can exhibit a polymorphic function $f$ such that $f_{\mathbf{2}}$ is the flip map on the booleans. (Notice that the proof of this is not quite as trivial as it may seem: LEM only gives us $P + \neg P$ if $P$ is a (mere) proposition (a.k.a. subsingleton). Hence, simple case analysis on whether a type is equivalent to $\mathbf{2}$ does not work, because this is not necessarily a proposition.)

And given the fact that LEM is consistent with univalent foundations, this means that a proof that $f_X$ is always the identity function cannot exist.

I have proved that LEM is exactly what is needed to get a polymorphic function that is not the identity on the booleans.

**Theorem.** If there is a function $f : \prod_{X : \mathcal{U}} X \to X$ with $f_{\mathbf{2}} \neq \mathrm{id}_{\mathbf{2}}$, then LEM holds.

If $f_{\mathbf{2}} \neq \mathrm{id}_{\mathbf{2}}$, then by simply trying both elements we can find an explicit boolean $b$ such that $f_{\mathbf{2}}(b) \neq b$. Without loss of generality, we can assume $f_{\mathbf{2}}(\mathsf{true}) = \mathsf{false}$.

For the remainder of this analysis, let $P$ be an arbitrary proposition. Then we want to achieve $P + \neg P$ to prove LEM.

We will consider a type with three points, where we identify two points depending on whether $P$ holds. In other words, we consider the quotient of a three-element type, where the relation between two of those points is the proposition $P$.

I will call this space $X_P$, and it can be defined as $\Sigma P + \mathbf{1}$, where $\Sigma P$ is the *suspension* of $P$. This particular way of defining the quotient, which is equivalent to a quotient of a three-point set, will make case analysis simpler to set up. (Note that suspensions are not generally quotients: we use the fact that $P$ is a proposition here.)

Notice that if $P$ holds, then $\Sigma P \simeq \mathbf{1}$, and also $X_P \simeq \mathbf{2}$.

We will consider $f$ at the type $X_P = X_P$ (*not* $X_P$ itself!). Now the proof continues by defining

$e :\equiv f_{(X_P = X_P)}(\mathsf{ua}(\mathrm{id}))$

(where $\mathrm{id}$ is the equivalence given by the identity function on $X_P$) and doing case analysis on $e(x)$, and if necessary also on $e(y)$, for some elements $x, y : X_P$. I do not believe it is very instructive to spell out all cases explicitly here. I wrote a more detailed note containing an explicit proof.

Notice that doing case analysis here is simply an instance of the induction principle for $\Sigma P + \mathbf{1}$. In particular, we do not require decidable equality of $X_P$ (which would already give us $P + \neg P$, which is exactly what we are trying to prove).

For the sake of illustration, here is one case:

- Assume holds. Then since then by transporting along an appropriate equivalence (namely the one that identifies with we get But since is an equivalence for which is a fixed point, must be the identity everywhere, that is, which is a contradiction.

I formalized this proof in Agda using the HoTT-Agda library.

Thanks to Martín Escardó, my supervisor, for his support. Thanks to Uday Reddy for giving the talk on parametricity that inspired me to think about this.
