Urs Schreiber’s problems were all about formalizing results in higher differential geometry that also make sense in the quite abstract setting of cohesive and differential cohesive toposes.

A differential cohesive topos is a topos with some extra structure, given by three monads and three comonads with certain nice properties and adjunctions between them. There is some work concerned with capturing this structure in homotopy type theory. A specialized cohesive homotopy type theory concerned with three of the six (co-)monads, called real-cohesive homotopy type theory, was introduced by Mike Shulman.

What I want to sketch here today is concerned with only one of the monads of differential cohesion. I will call this monad *coreduction* and denote it by ℑ. By the axioms of differential cohesion, it has a left and a right adjoint and is idempotent. These properties are more than enough to model a monadic modality in homotopy type theory. Monadic modalities were already defined at the end of section 7 in the HoTT book, where they are called just “modalities”, and it is possible to have a homotopy type theory with a monadic modality just by adding some axioms — which is known not to work for non-trivial *comonadic* modalities.

So let us assume that ℑ is a monadic modality in HoTT. That means that we have a map ℑ : 𝒰 → 𝒰 and a *unit*

η_A : A → ℑA for every type A,

such that a property holds that I won’t really go into in this post — but here it is for completeness: For any dependent type P over ℑA, for some type A, such that the unit maps η_{P(x)} are equivalences for all x : ℑA, the map

(∏_{x:ℑA} P(x)) → (∏_{x:A} P(η_A(x))), given by precomposition with η_A,

is an equivalence. So the inverse to this map is an induction principle that only holds for dependent types subject to the condition above.

The n-truncations and double negation are examples of monadic modalities.

At this point (or earlier), one might ask: “Where is the differential geometry?” The answer is that in this setting, all types carry differential geometric structure that is accessible via ℑ and its unit η. This makes sense if we think of some very special interpretations of ℑ and η (and HoTT), where the unit η_X : X → ℑX is given as the quotient map from a space X to its quotient by the relation that identifies *infinitesimally close* points in X.

Since we have this abstract monadic modality, we can turn this around and define the notion of two points being *infinitesimally close*, denoted “x ~ y”, in terms of the units:

x ~ y :≡ (η_X(x) = η_X(y)),

where “=” denotes the identity type (of ℑX in this case). The collection of all points y in a type X that are infinitesimally close to a fixed x in X is called the *formal disk* at x. Let us denote it by D_x:

D_x :≡ ∑_{y:X} (x ~ y)

Using some basic properties of monadic modalities, one can show that any map f : A → B preserves infinitesimal closeness, i.e.

∏_{x,y:A} (x ~ y) → (f(x) ~ f(y))

is inhabited. For any x in A, we can use this to get a map

df_x : D_x → D_{f(x)},

which behaves a lot like the differential of a smooth function. For example, the chain rule

d(g ∘ f)_x = dg_{f(x)} ∘ df_x

holds, and if f is an equivalence, all induced df_x are also equivalences. The latter corresponds to the fact that the differential of a diffeomorphism is invertible.
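In symbols, the notions above can be summarized as follows (a sketch in informal notation: η is the modal unit, D_x the formal disk at x, df the induced differential; this is my compact rendering, not a verbatim quote from the article):

```latex
x \sim y \;:\equiv\; \big(\eta_X(x) = \eta_X(y)\big)
\qquad
\mathbb{D}_x \;:\equiv\; \sum_{y : X} (x \sim y)

df_x : \mathbb{D}_x \to \mathbb{D}_{f(x)}
\qquad
d(g \circ f)_x \;=\; dg_{f(x)} \circ df_x
```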

If we have a 0-group G with unit e, the left translations g·(-) are a family of equivalences that consistently identify D_e with all the other formal disks in G, via the differentials d(g·(-))_e.

This is essentially a generalization of the fact that the tangent bundle of a Lie group is trivialized by left translations, and a solution to the first part of the first of Urs Schreiber’s problems I mentioned in the beginning.
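In the same informal notation (with T_∞G my notation for the bundle of formal disks over G; a sketch, not a verbatim statement from the article), the trivialization can be written:

```latex
% left translation by g is an equivalence, so its differential at the
% unit e identifies the formal disk at e with the one at g:
d(g \cdot (-))_e : \mathbb{D}_e \simeq \mathbb{D}_g

% summing over all g trivializes the formal disk bundle:
T_\infty G \;:\equiv\; \sum_{g : G} \mathbb{D}_g \;\simeq\; G \times \mathbb{D}_e
```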

With the exception of the chain rule, all of this was in my dissertation, which I defended in 2017. A couple of months ago, I wrote an article about this and put it on the arXiv, and since Monday there is an improved version with an introduction that explains which *monads* you can think of and relates the setup to Synthetic Differential Geometry.

There is also a recording on YouTube of a talk I gave about this in Bonn.

**International Conference on Homotopy Type Theory
(HoTT 2019)**

Carnegie Mellon University

12 – 17 August 2019

There will also be an associated:

**HoTT Summer School**

7 – 10 August 2019

More details to follow in the fall!

Unfortunately, Wikispaces is closing, so the UF-IAS-2012 wiki will no longer be accessible there. With the help of Richard Williamson, we have migrated all of its content to a new archival copy hosted on the nLab server:

Let us know if you find any formatting or other problems.

For some time I was a bit upset about this. But maybe this is our fault: we often try to explain univalence only imprecisely, mixing the explanation of the models with the explanation of the underlying Martin-Löf type theory, with neither of the two explained sufficiently precisely.

There are long, precise explanations such as the HoTT book, for example, or the various formalizations in Coq, Agda and Lean.

But perhaps we don’t have publicly available material with a self-contained, brief and complete formulation of univalence, so that interested mathematicians and logicians can try to contemplate the axiom in a fully defined form.

So here is an attempt at a self-contained, brief and complete formulation of Voevodsky’s Univalence Axiom, on the arXiv.

It comes with an Agda file, as an ancillary file, in which univalence is defined from scratch, without the use of any library at all, to try to show what the length of a self-contained definition of the univalence type is. Perhaps somebody should add a Coq “version from scratch” of this.

There is also a web version UnivalenceFromScratch to try to make this as accessible as possible, with the text and the Agda code together.

The above notes explain the univalence axiom only. Regarding its role, we recommend Dan Grayson’s introduction to univalent foundations for mathematicians.

We are pleased to announce the AMS Special Session on Homotopy Type Theory, to be held on January 11, 2018 in San Diego, California, as part of the Joint Mathematics Meetings (to be held January 10 – 13).

Homotopy Type Theory (HoTT) is a new field of study that relates constructive type theory to abstract homotopy theory. Types are regarded as synthetic spaces of arbitrary dimension and type equality as homotopy equivalence. Experience has shown that HoTT is able to represent many mathematical objects of independent interest in a direct and natural way. Its foundations in constructive type theory permit the statement and proof of theorems about these objects within HoTT itself, enabling formalization in proof assistants and providing a constructive foundation for other branches of mathematics.

This Special Session is affiliated with the AMS Mathematics Research Communities (MRC) workshop for early-career researchers in Homotopy Type Theory organized by Dan Christensen, Chris Kapulkin, Dan Licata, Emily Riehl and Mike Shulman, which took place last June.

The Special Session will include talks by MRC participants, as well as by senior researchers in the field, on various aspects of higher-dimensional type theory including categorical semantics, computation, and the formalization of mathematical theories. There will also be a panel discussion featuring distinguished experts from the field.

Further information about the Special Session, including a schedule and abstracts, can be found at: http://jointmathematicsmeetings.org/meetings/national/jmm2018/2197_program_ss14.html.

Please note that the early registration deadline is December 20, 2017. If you have any questions about the Special Session, please feel free to contact one of the organizers. We look forward to seeing you in San Diego.

Simon Cho (University of Michigan)

Liron Cohen (Cornell University)

Ed Morehouse (Wesleyan University)

Known impredicative encodings of various inductive types in System F, such as the type

∀X. (X → X) → X → X

of natural numbers, do not satisfy the relevant η-computation rules. The aim of this work is to refine the System F encodings by moving to a system of HoTT with an impredicative universe, so that the relevant η-rules are satisfied (along with all the other rules). As a result, the so-determined types have their expected universal properties. The main result is the construction of a type of natural numbers which is the initial algebra for the expected endofunctor X ↦ 1 + X.
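As a rough illustration of the System F encoding, here are the Church numerals in untyped Python (the names `zero`, `suc`, `rec`, `to_int` are ad-hoc choices of mine, not from the thesis). The β-computation rules hold by construction, which is the behaviour the refined HoTT encodings keep while also adding the η-rule:

```python
# Church numerals: n is encoded as the function that iterates its
# first argument n times: n = \s. \z. s^n(z).
zero = lambda s: lambda z: z
suc = lambda n: (lambda s: lambda z: s(n(s)(z)))

# The recursor is just application: rec c0 cs n = n cs c0.
def rec(c0, cs, n):
    return n(cs)(c0)

# Convert to a built-in int by instantiating at s = (+1), z = 0.
def to_int(n):
    return n(lambda x: x + 1)(0)

two = suc(suc(zero))
print(to_int(two))                    # beta-computation: prints 2
print(rec(0, lambda x: x + 1, two))   # recursor agrees: prints 2
```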

For the inductive types treated in the thesis, we do not use the full power of HoTT; we need only postulate ∑-types, identity types, “large” ∏-types over an impredicative universe 𝒰, and function extensionality. Having large ∏-types over an impredicative universe means that given a type X and a type family Y : X → 𝒰, we may form the dependent function type

∏_{x:X} Y(x) : 𝒰.

Note that this type is in 𝒰 even if X is not.

We obtain a translation of System F types into type theory by replacing second-order quantification with dependent products over 𝒰 (or alternatively over the subtype of 𝒰 given by some h-level).

For brevity, we will focus on the construction of the natural numbers (though in the thesis, the coproduct of sets and the unit type are treated first, with special cases of this method). We consider categories of algebras for endofunctors on the category of sets, where the type of objects is given by

Set :≡ ∑_{X:𝒰} isSet(X)

(the type of sets in 𝒰) and morphisms are simply functions between sets.

We can write down the type of T-algebras:

Alg_T :≡ ∑_{X:Set} (T X → X),

and homomorphisms between algebras (X, x) and (Y, y):

Hom_T((X,x),(Y,y)) :≡ ∑_{f : X → Y} (f ∘ x = y ∘ T f),

which together form the category Alg_T.

We seek the initial object in Alg_T. Denote it by N, and moreover let U : Alg_T → Set be the forgetful functor and y the covariant Yoneda embedding. We reason as follows: the initial object, if it exists, is the limit of the identity functor, and the underlying set of that limit is the limit of U, using the fact that the diagonal functor is left adjoint to the limit functor for the last step. With this, we have a proposal for the definition of the underlying set of the initial T-algebra as the limit of the forgetful functor. Using the fact that it is defined as a limit, we obtain an algebra structure on it. As U creates limits, the resulting algebra is guaranteed to be initial in Alg_T.

But we want to define this limit in type theory. We do this using products and equalizers, as is well known from category theory. Explicitly, we take the equalizer of two maps between products of the form

∏_{(X,x):Alg_T} X ⟶ ∏_{(X,x),(Y,y):Alg_T} ∏_{f:Hom_T((X,x),(Y,y))} Y,

given by sending a tuple α to the family f ↦ f(α_{(X,x)}) on the one hand, and to the family f ↦ α_{(Y,y)} on the other. The equalizer is, of course, the subtype of those tuples α on which the two maps agree, which inhabits Set. Impredicativity is crucial for this: it guarantees that the product over Alg_T lands in 𝒰.

This method can be used to construct an initial algebra, and therefore a fixed-point, for *any* endofunctor T! We won’t pursue this remarkable fact here, but only consider the case at hand, where the functor is T X = 1 + X. Then the equalizer becomes our definition of the type of natural numbers (so let us write ℕ for it for the remainder). Observe that this encoding can be seen as a subtype of (a translation of) the System F encoding given at the start. Indeed, the indexing object Alg_T of the product is equivalent to ∑_{X:Set} X × (X → X), by the universal properties of the coproduct and the unit type.
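In informal subtype notation (a sketch of my own; the thesis phrases this as an equalizer of the two product maps above), the definition reads roughly:

```latex
N \;:\equiv\;
\Big\{\, \alpha : {\textstyle\prod}_{(X,x) : \mathsf{Alg}_T} X
\;\Big|\;
{\textstyle\prod}_{f : \mathrm{Hom}_T((X,x),(Y,y))}
  f\big(\alpha_{(X,x)}\big) = \alpha_{(Y,y)} \,\Big\}
```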

With this, we can define a zero element and a successor function (the successor function takes a little more work). We can also define a recursor rec(c₀, c_s) : ℕ → C, given any set C with an element c₀ : C and a map c_s : C → C. In other words, the introduction rules hold, and we can eliminate *into other sets*. Further, the β-rules hold definitionally – as expected, since they hold for the System F encodings.

Finally we come to the desired result, the η-rule for ℕ:

**Theorem.** *Let C be a set and f : ℕ → C. Moreover, let c₀ : C and c_s : C → C be such that*

f(zero) = c₀ and f(suc n) = c_s(f n)

*for any n : ℕ. Then* f = rec(c₀, c_s).

Note that the η-rule holds *propositionally*. By Awodey, Gambino, and Sojakova we therefore also have, equivalently, the induction principle for ℕ, aka the dependent elimination rule. As a corollary, we can prove the universal property that any T-algebra homomorphism out of ℕ is propositionally equal to the appropriate recursor (as a T-algebra homomorphism). Again we emphasise the need for impredicativity: in the proof of the η-rule, we have to be able to plug ℕ into quantifiers over 𝒰.

A semantic rendering of the above is that we have built a type that always determines a natural numbers object—whereas the System F encoding need not always do so (see Rummelhoff). In an appendix, we discuss a realizability semantics for the system we work in. Building more exotic types (that need not be sets) becomes more complicated; we leave this to future work.

https://github.com/mortberg/cubicaltt/tree/master/lectures

The lectures cover the main features of the system and don’t assume any prior knowledge of Homotopy Type Theory or Univalent Foundations. Only basic familiarity with type theory and proof assistants based on type theory is assumed. The lectures are in the form of cubicaltt files and can be loaded in the cubicaltt proof assistant.

cubicaltt is based on a novel type theory called Cubical Type Theory that provides new ways to reason about equality. Most notably, it makes various extensionality principles, like function extensionality and Voevodsky’s univalence axiom, into theorems instead of axioms. This is done in such a way that these principles have computational content; in particular, we can transport structures between equivalent types, and these transports compute. This is different from postulating the univalence axiom in a proof assistant like Coq or Agda. If one just adds an axiom, there is no way for Coq or Agda to know how it should compute, and one loses the good computational properties of type theory. In particular, canonicity no longer holds, and one can produce terms that are stuck (e.g. booleans that are neither true nor false but don’t reduce further). In other words, this is like having a programming language in which one doesn’t know how to run the programs. So cubicaltt provides an operational semantics for Homotopy Type Theory and Univalent Foundations, by giving a computational justification for the univalence axiom and (some) higher inductive types.

Cubical Type Theory has a model in cubical sets with lots of structure (symmetries, connections, diagonals) and is hence consistent. Furthermore, Simon Huber has proved that Cubical Type Theory satisfies canonicity for natural numbers which gives a syntactic proof of consistency. Many of the features of the type theory are very inspired by the model, but for more syntactically minded people I believe that it is definitely possible to use cubicaltt without knowing anything about the model. The lecture notes are hence written with almost no references to the model.

The cubicaltt system is based on Mini-TT:

"A simple type-theoretic language: Mini-TT" (2009), Thierry Coquand, Yoshiki Kinoshita, Bengt Nordström and Makoto Takeya, in "From Semantics to Computer Science: Essays in Honour of Gilles Kahn".

Mini-TT is a variant of Martin-Löf type theory with datatypes, and cubicaltt extends Mini-TT with:

- Path types
- Compositions
- Glue types
- Id types
- Some higher inductive types

The lectures cover the first three of these and hence correspond to sections 2–7 of:

"Cubical Type Theory: a constructive interpretation of the univalence axiom", Cyril Cohen, Thierry Coquand, Simon Huber and Anders Mörtberg, to appear in the post-proceedings of TYPES 2016. https://arxiv.org/abs/1611.02108

I should say that cubicaltt is mainly meant to be a prototype implementation of Cubical Type Theory in which we can do experiments; it was never our goal to implement a competitor to any of the more established proof assistants. Because of this there are no implicit arguments, type classes, proper universe management, termination checker, etc. Proofs in cubicaltt hence tend to get quite verbose, but it is definitely possible to do some fun things. See for example:

- binnat.ctt – Binary natural numbers and isomorphism to unary numbers. Example of data and program refinement by doing a proof for unary numbers by computation with binary numbers.
- setquot.ctt – Formalization of impredicative set quotients á la Voevodsky.
- hz.ctt – The integers defined as an (impredicative set) quotient of `nat * nat`.
- category.ctt – Categories. Structure identity principle. Pullbacks. (Due to Rafaël Bocquet)
- csystem.ctt – Definition of C-systems and universe categories. Construction of a C-system from a universe category. (Due to Rafaël Bocquet)

For a complete list of all the examples see:

https://github.com/mortberg/cubicaltt/tree/master/examples

For those who cannot live without implicit arguments and other features of modern proof assistants there is now an experimental cubical mode shipped with the master branch of Agda. For installation instructions and examples see:

https://agda.readthedocs.io/en/latest/language/cubical.html

https://github.com/Saizan/cubical-demo

In this post I will give some examples of the main features of cubicaltt, but for a more comprehensive introduction see the lecture notes. As cubicaltt is an experimental prototype things can (and probably will) change in the future (e.g. see the paragraph on HITs below).

The basic type theory on which cubicaltt is based has Π and ∑ types (with eta and surjective pairing), a universe `U`, datatypes, recursive definitions and mutually recursive definitions (in particular inductive-recursive definitions). Note that general datatypes and (mutually recursive) definitions are not part of the version of Cubical Type Theory in the paper.

Below is an example of how natural numbers and addition are defined:

data nat = zero | suc (n : nat)

add (m : nat) : nat -> nat = split
  zero  -> m
  suc n -> suc (add m n)

If one loads this in the cubicaltt read-eval-print-loop one can compute things:

> add (suc zero) (suc zero)
EVAL: suc (suc zero)

The homotopical interpretation of equality tells us that we can think of an equality proof between a and b in a type A as a path between a and b in a space A. cubicaltt takes this literally and adds a primitive Path type that should be thought of as a function out of an abstract interval with fixed endpoints.

We call the elements of the interval names/directions/dimensions and typically use i, j, k to denote them. The elements of the interval are generated by the following grammar (where dim is a dimension like i, j, k…):

r,s := 0 | 1 | dim | - r | r /\ s | r \/ s

The endpoints are `0` and `1`, `- r` corresponds to symmetry (r is mapped to `1 - r`), while `/\` and `\/` are so-called “connections”. The connections can be thought of as mapping r and s to `min(r,s)` and `max(r,s)` respectively. As Path types behave like functions out of the interval, there is both path abstraction and application (just like for function types). Reflexivity is written:

refl (A : U) (a : A) : Path A a a = <i> a

and corresponds to a constant path:

      <i> a
a -----------> a

with the intuition that `<i> a` is the constant function of an interval variable i. However, for deep reasons the interval isn’t a type (as it isn’t fibrant), so we cannot write functions out of it directly; hence we have this special notation for path abstraction.
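The interval operations above form a De Morgan algebra. As a rough numerical illustration (plain Python on [0,1], purely illustrative and nothing to do with how cubicaltt actually implements the interval):

```python
# Interval operations, read numerically on [0,1]:
neg = lambda r: 1 - r            # - r   (symmetry)
meet = lambda r, s: min(r, s)    # r /\ s (connection)
join = lambda r, s: max(r, s)    # r \/ s (connection)

# Endpoints are swapped by negation:
print(neg(0), neg(1))                            # 1 0

# De Morgan law: -(r /\ s) = (-r) \/ (-s)
r, s = 0.25, 0.75
print(neg(meet(r, s)) == join(neg(r), neg(s)))   # True

# But negation is NOT a Boolean complement: r /\ -r need not be 0.
print(meet(r, neg(r)))                           # 0.25
```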

If we have a path from a to b then we can compute its left end-point by applying it to `0`:

face0 (A : U) (a b : A) (p : Path A a b) : A = p @ 0

This is of course convertible to `a`. We can also reverse a path by using symmetry:

sym (A : U) (a b : A) (p : Path A a b) : Path A b a = <i> p @ -i

Assuming that some arguments could be made implicit, this satisfies the equality

sym (sym p) == p

judgmentally. This is one of many examples of equalities that hold judgmentally in cubicaltt but not in standard type theory, where sym would be defined by induction on p. This is useful for formalizing mathematics; for example, we get the judgmental equality `C^op^op == C` for a category `C`, which cannot be obtained in standard type theory with the usual definition of category without using any tricks (see opposite.ctt for a formal proof of this).
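Numerically, the reason `sym (sym p) == p` holds on the nose can be seen by modelling a path as a function on [0,1] (plain Python, purely illustrative): reversing precomposes with i ↦ 1 - i, and doing so twice gives back literally the same function values, since 1 - (1 - i) = i.

```python
# Model a path as a function from [0,1] into some space.
def sym(p):
    # path reversal: precompose with i -> 1 - i
    return lambda i: p(1 - i)

p = lambda i: (i, i * i)   # some path in the plane
q = sym(sym(p))

# sym (sym p) agrees with p pointwise (dyadic sample points are exact):
print(all(p(k / 16) == q(k / 16) for k in range(17)))  # True
```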

We can also directly define `cong` (or `ap` or `mapOnPath`):

cong (A B : U) (f : A -> B) (a b : A) (p : Path A a b) : Path B (f a) (f b) =
  <i> f (p @ i)

Once again this satisfies some equations judgmentally that we don’t get in standard type theory, where this would have been defined by induction on p:

cong id p == p
cong g (cong f p) == cong (g o f) p

Finally, the connections can be used to construct higher dimensional cubes from lower dimensional ones (e.g. squares from lines). If `p : Path A a b` then `<i j> p @ i /\ j` is the interior of the square:

               p
      a -----------------> b
      ^                    ^
      |                    |
<j> a |                    | p
      |                    |
      a -----------------> a
              <i> a

Here i corresponds to the left-to-right dimension and j corresponds to the down-to-up dimension. To compute the left and right sides, just plug in `i=0` and `i=1` in the term inside the square:

<j> p @ 0 /\ j = <j> p @ 0 = <j> a   (p is a path from a to b)
<j> p @ 1 /\ j = <j> p @ j = p       (using eta for Path types)

These give a short proof of contractibility of singletons (i.e. that the type `(x : A) * Path A a x` is contractible for all `a : A`); for details see the lecture notes or the paper. Because connections allow us to build higher dimensional cubes from lower dimensional ones, they are extremely useful for reasoning about higher dimensional equality proofs.

Another cool thing with Path types is that they allow us to give a direct proof of function extensionality by just swapping the path and lambda abstractions:

funExt (A B : U) (f g : A -> B) (p : (x : A) -> Path B (f x) (g x)) :
  Path (A -> B) f g = <i> \(a : A) -> (p a) @ i

To see that this makes sense we can compute the end-points:

(<i> \(a : A) -> (p a) @ i) @ 0 = \(a : A) -> (p a) @ 0
                                = \(a : A) -> f a
                                = f

and similarly for the right end-point. Note that the last equality follows from eta for Π types.

We have now seen that Path types allow us to define the constants of HoTT (like `cong` or `funExt`), but when doing proofs with Path types one rarely uses these constants explicitly. Instead one can directly prove things with the Path type primitives; for example, the proof of function extensionality for dependent functions is exactly the same as the one for non-dependent functions above.

We cannot yet prove the principle of path induction (or `J`) with what we have seen so far. In order to do this we need to be able to turn any path between types A and B into a function from A to B; in other words, we need to be able to define `transport` (or `cast` or `coe`):

transport : Path U A B -> A -> B

The computation rules for the transport operation in cubicaltt are introduced by recursion on the type one is transporting in. This is quite different from traditional type theory, where the identity type is introduced as an inductive family with one constructor (`refl`). A difficulty with this approach is that in order to be able to define transport in a Path type we need to keep track of the end-points of the Path type we are transporting in. To solve this we introduce a more general operation called composition.

Composition can be used to define the composition of paths (hence the name). Given paths `p : Path A a b` and `q : Path A b c`, the composite is obtained by computing the missing top line of this open square:

      a                    c
      ^                    ^
      |                    |
<j> a |                    | q
      |                    |
      a -----------------> b
             p @ i

In the drawing I’m assuming that we have a direction `i` in context that goes left-to-right, and that j goes down-to-up (but it’s not in context; rather, it’s implicitly bound by the comp operation). As we are constructing a Path from a to c we can use the i and put `p @ i` as bottom. The code for this is as follows:

compPath (A : U) (a b c : A) (p : Path A a b) (q : Path A b c) : Path A a c =
  <i> comp (<_> A) (p @ i) [ (i = 0) -> <j> a , (i = 1) -> q ]

One way to summarize what compositions gives us is the so called “box principle” that says that “any open box has a lid”. Here “box” means (n+1)-dimensional cube and the lid is an n-dimensional cube. The comp operation takes as second argument the bottom of the box and then a list of sides. Note that the collection of sides doesn’t have to be exhaustive (as opposed to the original cubical set model) and one way to think about the sides is as a collection of constraints that the resulting lid has to satisfy. The first argument of comp is a path between types, in the above example this path is constant but it doesn’t have to be. This is what allows us to define transport:

transport (A B : U) (p : Path U A B) (a : A) : B = comp p a []

Combining this with the contractibility of singletons, we can easily prove the elimination principle for Path types. However, the computation rule does not hold judgmentally. This is often not too much of a problem in practice, as the Path types satisfy various judgmental equalities that normal Id types don’t. Also, being able to reason about higher equalities directly using path types and compositions is often very convenient, and leads to nice new ways to construct proofs about higher equalities geometrically, by directly reasoning about higher dimensional cubes.
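Schematically, the derived eliminator and its propositional computation rule look as follows (informal notation of my own; in cubicaltt, J is definable from comp together with contractibility of singletons):

```latex
J : \prod_{C \,:\, \prod_{x : A} \mathsf{Path}\,A\,a\,x \to \mathcal{U}}
    C\,a\,\mathsf{refl} \to
    \prod_{x : A} \prod_{p \,:\, \mathsf{Path}\,A\,a\,x} C\,x\,p

% the computation rule holds only up to a path:
\mathsf{Path}\;\big(C\,a\,\mathsf{refl}\big)\;\big(J\,C\,d\,a\,\mathsf{refl}\big)\;d
```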

The composition operations are related to the filling operations (as in Kan simplicial sets) in the sense that the filling operations takes an open box and computes a filler with the composition as one of its faces. One of the great things about cubical sets with connections is that we can reduce the filling of an open box to its composition. This is a difference compared to the original cubical set model and it provides a significant simplification as we only have to explain how to do compositions in open boxes and not also how to fill them.

The final main ingredient of cubicaltt are the Glue types. These are what allows us to have a direct algorithm for composition in the universe and to prove the univalence axiom. These types add the possibility to glue types along equivalences (i.e. maps with contractible fibers) onto another type. In particular this allows us to directly define one of the key ingredients of the univalence axiom:

ua (A B : U) (e : equiv A B) : Path U A B =
  <i> Glue B [ (i = 0) -> (A,e) , (i = 1) -> (B,idEquiv B) ]

This corresponds to the missing line at the top of:

A            B
|            |
e |          | idEquiv B
|            |
V            V
B ---------> B

The sides of this square are equivalences, while the bottom and top are lines in direction i (so this produces a path from A to B, as desired).

We have formalized three proofs of the univalence axiom in cubicaltt:

- A very direct proof due to Simon Huber and me using higher dimensional glueing.
- The more conceptual proof from section 7.2 of the paper, in which we show that the `unglue` function is an equivalence (formalized by Fabian Ruch).
- A proof from `ua` and its computation rule (`uabeta`). Both of these constants are easy to define and are sufficient for the full univalence axiom, as noted in a post by Dan Licata on the HoTT google group.

All of these proofs can be found in the file univalence.ctt and are explained in the paper (proofs 1 and 3 are in Appendix B).

Note that one often doesn’t need full univalence to do interesting things. So just like for Path types it’s often easier to just use the Glue primitives directly instead of invoking the full univalence axiom. For instance if we have proved that negation is an involution for bool we can directly get a non-trivial path from bool to bool using ua (which is just a Glue):

notEq : Path U bool bool = ua bool bool notEquiv

And we can use this non-trivial equality to transport true and compute the result:

> transport notEq true
EVAL: false
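As a rough analogy (plain Python, purely illustrative and not cubicaltt): transporting along the path obtained from an equivalence amounts to applying the underlying function, so transporting `true` along the negation path yields `false`.

```python
# an "equivalence" bool ~ bool: negation, which is its own inverse
not_equiv = lambda b: not b

# transporting along the path built from an equivalence
# just applies the underlying function
def transport(equiv, a):
    return equiv(a)

print(transport(not_equiv, True))   # False
```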

This is all that the lectures cover, in the rest of this post I will discuss the two extensions of cubicaltt from the paper and their status in cubicaltt.

As pointed out above, the computation rule for Path types doesn’t hold judgmentally. Luckily there is a neat trick due to Andrew Swan that allows us to define a new type that is equivalent to `Path A a b` for which the computation rule holds judgmentally. For details see section 9.1 of the paper. We call this type `Id A a b`, as it corresponds to Martin-Löf’s identity type. We have implemented this in cubicaltt and proved the univalence axiom expressed exclusively using Id types; for details see idtypes.ctt.

For practical formalizations it is probably often more convenient to use the Path types directly as they have the nice primitives discussed above, but the fact that we can define Id types is very important from a theoretical point of view as it shows that cubicaltt with Id is really an extension of Martin-Löf type theory. Furthermore as we can prove univalence expressed using Id types we get that any proof in univalent type theory (MLTT extended with the univalence axiom) can be translated into cubicaltt.

The second extension to cubicaltt are HITs. We have a general syntax for adding these and some of them work fine on the master branch, see for example:

- circle.ctt – The circle as a HIT. Computation of winding numbers.
- helix.ctt – The loop space of the circle is equal to Z.
- susp.ctt – Suspension and n-spheres.
- torsor.ctt – Torsors. Proof that S1 is equal to BZ, the classifying space of Z. (Due to Rafaël Bocquet)
- torus.ctt – Proof that Torus = S1 * S1 in only 100 loc (due to Dan Licata).

However, there are various known issues with how the composition operations compute for recursive HITs (e.g. truncations) and for HITs where the end-points contain function applications (e.g. pushouts). We have a very experimental branch called “hcomptrans” that tries to resolve these issues. This branch contains some new (currently undocumented) primitives that we are experimenting with, and so far it seems like these solve the various issues for the above two classes of more complicated HITs that don’t work on the master branch. So hopefully there will soon be a new cubical type theory with support for a large class of HITs.

That’s all I wanted to say about cubicaltt in this post. If someone plays around with the system and proves something cool don’t hesitate to file a pull request or file issues if you find some bugs.

https://arxiv.org/abs/1701.07538

The main result of that article is a type theoretic replacement construction in a univalent universe that is closed under pushouts. Recall that in set theory, the replacement axiom asserts that if F is a class function, assigning to any set x a new set F(x), then the image of any set X, i.e. the set {F(x) | x ∈ X}, is again a set. In homotopy type theory we consider instead a map f : A → X from a small type A into a locally small type X, and our main result is the construction of a small type with the universal property of the image of f.

We say that a type is small if it is in 𝒰; for the purpose of this blog post, smallness and local smallness will always be with respect to 𝒰. Before we define local smallness, let us recall the following rephrasing of the ‘encode-decode method’, which we might also call the Licata-Shulman theorem:

**Theorem.** *Let A be a type with a : A, and let B be a type family over A with b : B(a). Then the following are equivalent.*

- *The total space ∑_{x:A} B(x) is contractible.*
- *The canonical fiberwise map ∏_{x:A} (a = x) → B(x), defined by path induction by mapping refl to b, is a fiberwise equivalence.*

Note that this theorem follows from the fact that a fiberwise map is a fiberwise equivalence if and only if it induces an equivalence on total spaces. Since for path spaces the total space ∑_{x:A} (a = x) will be contractible, we observe that *any* fiberwise equivalence establishes contractibility of the total space, i.e. we might add the following equivalent statement to the theorem.

*There (merely) exists a family of equivalences . In other words, is in the connected component of the type family .*

There are at least two equivalent ways of saying that a (possibly large) type is locally small:

- For each there is a type and an equivalence .
- For each there is a type ; for each there is a term , and the canonical dependent function defined by path induction by sending to is an equivalence.

Note that the data in the first structure is clearly a (large) mere proposition, because there can be at most one such a type family , while the equivalences in the second structure are canonical with respect to the choice of reflexivity . To see that these are indeed equivalent, note that the family of equivalences in the first structure is a fiberwise equivalence, hence it induces an equivalence on total spaces. Therefore it follows that the total space is contractible. Thus we see by Licata’s theorem that the canoncial fiberwise map is a fiberwise equivalence. Furthermore, it is not hard to see that the family of equivalences is equal to the canonical family of equivalences. There is slightly more to show, but let us keep up the pace and go on.

Examples of locally small types include any small type and any mere proposition regardless of its size; moreover, the universe $\mathcal{U}$ is locally small by the univalence axiom, and if $A$ is small and $B$ is locally small, then the type $A \to B$ is locally small. Observe also that the univalence axiom follows if we assume the 'uncanonical univalence axiom', namely that for any $A, B : \mathcal{U}$ there merely exists an equivalence $(A = B) \simeq (A \simeq B)$. Thus we see that the slogan 'identity of the universe is equivalent to equivalence' actually implies univalence.
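To spell out the last observation, here is a sketch of why merely having *some* family of equivalences is enough to make the canonical map an equivalence (my rendering of the argument):

```latex
% Assume: for all A, B : U there merely exists an equivalence
%   (A = B) \simeq (A \simeq B).
% Fix A. Any such family of equivalences is fiberwise over B, so it
% induces an equivalence of total spaces
\textstyle \sum_{B : \mathcal{U}} (A = B)
  \;\simeq\; \sum_{B : \mathcal{U}} (A \simeq B).
% The left-hand total space is contractible, hence so is the right-hand
% one. Contractibility is a mere proposition, so mere existence of the
% family suffices. By the Licata--Shulman theorem, the canonical map
%   idtoeqv : (A = B) \to (A \simeq B)
% is therefore a fiberwise equivalence, i.e. univalence holds.
```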

**Main Theorem.** *Let $\mathcal{U}$ be a univalent universe that is closed under pushouts. Suppose that $A$ is a small type, that $X$ is a locally small type, and let $f : A \to X$. Then we can construct*

- *a small type $\mathrm{im}(f)$,*
- *a factorization $f \sim i_f \circ q_f$ with $q_f : A \to \mathrm{im}(f)$ and $i_f : \mathrm{im}(f) \to X$,*

*such that $i_f$ is an embedding that satisfies the universal property of the image inclusion, namely that for any embedding $m : B \to X$, of which the domain is possibly large, if $f$ factors through $m$, then so does $i_f$.*

Recall that a map factors through an embedding in at most one way. Writing $f \leq m$ for the mere proposition that $f$ factors through an embedding $m$, we see that $i_f$ satisfies the universal property of the image inclusion precisely when for every embedding $m$ the canonical map

$(i_f \leq m) \to (f \leq m)$

is an equivalence.

Most of the paper is concerned with the construction with which we prove this theorem: the join construction. By repeatedly joining a map with itself, one eventually arrives at an embedding. The join $f * g$ of two maps $f : A \to X$ and $g : B \to X$ is defined by first pulling back and then taking the pushout, as indicated in the following diagram:
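The diagram itself does not appear to have survived in this version of the post; what it should depict is the standard pullback-then-pushout square (a reconstruction in tikz-cd, not the original figure):

```latex
\[
\begin{tikzcd}
A \times_X B \arrow[r] \arrow[d] & B \arrow[d] \arrow[ddr, bend left, "g"] & \\
A \arrow[r] \arrow[rrd, bend right, "f"'] & A *_X B \arrow[rd, dashed, "{f * g}" description] & \\
 & & X
\end{tikzcd}
\]
```

The inner square is a pushout of the pullback $A \times_X B$, and $f * g : A *_X B \to X$ is the map induced by the universal property of the pushout.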

In the case $X = \mathbf{1}$, the type $A *_{\mathbf{1}} B$ is equivalent to the usual join $A * B$ of types. Just like the join of types, the join of maps with a common codomain is associative, commutative, and it has a unit: the unique map from the empty type into $X$. The join of two embeddings is again an embedding. We show that the last statement can be strengthened: the maps that are idempotent in a canonical way (i.e. for which the canonical morphism $f \to f * f$ in the slice category over $X$ is an equivalence) are precisely the embeddings.
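Concretely, 'repeatedly joining a map with itself' means forming the join powers of $f$ and passing to the sequential colimit; as I understand the construction (notation mine), the image is obtained as

```latex
\mathrm{im}(f) \;:\equiv\;
  \operatorname{colim}\Big( A \;\to\; A *_X A \;\to\; A *_X A *_X A \;\to\; \cdots \Big),
% where the maps are the evident pushout inclusions. Local smallness
% of X is what guarantees that each join power, and hence the colimit,
% can be formed in the universe U.
```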

Below, I will indicate how we can use the above theorem to construct the n-truncations for any $n \geq -2$ on any univalent universe that is closed under pushouts. Other applications include the construction of set quotients and of the Rezk completion, since these are both constructed as the image of a Yoneda embedding, and it also follows that the univalent completion of any dependent type can be constructed as a type in $\mathcal{U}$, namely as the image of the corresponding map into $\mathcal{U}$, without needing to resort to more exotic higher inductive types. In particular, any connected component of the universe is equivalent to a small type.

**Theorem.** *Let $\mathcal{U}$ be a univalent universe that is closed under pushouts. Then we can define for any $n \geq -2$*

- *an n-truncation operation $\|\mathord{-}\|_n : \mathcal{U} \to \mathcal{U}$,*
- *a map $\eta : X \to \|X\|_n$ for every $X : \mathcal{U}$,*

*such that for any $X : \mathcal{U}$, the type $\|X\|_n$ is n-truncated and satisfies the (dependent) universal property of n-truncation, namely that for every family $P$ of possibly large types over $\|X\|_n$ such that each $P(x)$ is n-truncated, the canonical map*

$\Big(\prod_{(x : \|X\|_n)} P(x)\Big) \to \prod_{(x : X)} P(\eta(x))$

*given by precomposition by $\eta$ is an equivalence.*

*Construction.* The proof is by induction on $n$. The base case $n = -2$ is trivial (take $\|X\|_{-2} :\equiv \mathbf{1}$). For the induction hypothesis we assume an n-truncation operation with the structure described in the statement of the theorem.

First, we define $f : X \to (X \to \mathcal{U})$ by $f(x)(y) :\equiv \|x = y\|_n$. As we have seen, the universe is locally small, and therefore the type $X \to \mathcal{U}$ is locally small. Therefore we can define

$\|X\|_{n+1} :\equiv \mathrm{im}(f)$.

For the proof that $\|X\|_{n+1}$ is indeed $(n{+}1)$-truncated and satisfies the universal property of the $(n{+}1)$-truncation, we refer to the article.


In this blog post, we work with the full repertoire of HoTT axioms, including univalence, propositional truncations, and pushouts. For the paper, we have carefully analysed which assumptions are used in which theorem, if any.

Parametricity is a property of the terms of a language. If your language only has parametric terms, then polymorphic functions have to be invariant under the type parameter. So in MLTT, the only term inhabiting the type $\prod_{(X : \mathcal{U})} X \to X$ of polymorphic endomaps is the polymorphic identity $\lambda X.\, \mathrm{id}_X$.
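As an informal illustration in Haskell (this snippet is mine, not from the paper): since Haskell's pure fragment is parametric, a total term of the polymorphic-endomap type has no way to inspect its type argument, so it can only behave as the identity.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The type of polymorphic endomaps, written as a type synonym.
type PolyEndo = forall a. a -> a

-- Parametricity (a metatheoretic property of the language) forces any
-- total, pure inhabitant of PolyEndo to act as the identity at every type.
poly :: PolyEndo
poly x = x

main :: IO ()
main = do
  print (poly (3 :: Int))
  putStrLn (poly "ok")
```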

In univalent foundations, we cannot prove *internally* that every term is parametric. This is because excluded middle is not parametric (exercise 6.9 of the HoTT book tells us that, assuming LEM, we can define a polymorphic endomap $f : \prod_{(X : \mathcal{U})} X \to X$ that flips the booleans), but there exist classical models of univalent foundations. So if we *could* prove internally that every term is parametric, excluded middle would be false, and thus the classical models would be invalid.
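For completeness, here is one way the construction of exercise 6.9 can go (my paraphrase of the standard solution, not a quotation from the book):

```latex
% Assume LEM. Given X : U and x : X, decide the mere proposition
%   || X \simeq \mathbf{2} ||.
% If it fails, set f_X(x) :\equiv x. If it holds, then X has exactly
% two distinct elements and decidable equality, so the type
%   \sum_{y : X} (y \neq x)
% is contractible; let f_X(x) be its center of contraction. At the
% booleans this gives the flip map:
f_{\mathbf{2}}(0_{\mathbf{2}}) = 1_{\mathbf{2}}, \qquad
f_{\mathbf{2}}(1_{\mathbf{2}}) = 0_{\mathbf{2}}.
```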

In the abovementioned blog post, we observed that exercise 6.9 of the HoTT book has a converse: if $f$ is a polymorphic endomap such that $f_{\mathbf{2}}$ is the flip map on the type of booleans, then excluded middle holds. In the paper on arXiv, we have a stronger result:

**Theorem.** There exist $f : \prod_{(X : \mathcal{U})} X \to X$ and a type $X$ and a point $x : X$ with $f_X(x) \neq x$ if and only if excluded middle holds.

Notice that there are no requirements on the type $X$ or the point $x$. We have also applied the technique used for this theorem in other scenarios, for example:

**Theorem.** There exist $f : \prod_{(X : \mathcal{U})} X \to X$ and types and points witnessing an analogous (but weaker) violation of parametricity if and only if *weak* excluded middle holds.

The results in the paper illustrate that different violations of parametricity have different proof-theoretic strength: some violations are impossible, while others imply varying amounts of excluded middle.

In contrast to parametricity, which proves that the terms of some language necessarily have certain properties, it is currently unknown whether non-identity automorphisms of the universe are definable in univalent foundations, though some believe that they are not.

In the presence of excluded middle, we *can* define non-identity automorphisms of the universe. Given a type $X$, we use excluded middle to decide whether $X$ is a proposition. If it is, we map $X$ to $\neg X$, and otherwise we map $X$ to itself. Assuming excluded middle, we have $\neg\neg P = P$ for any proposition $P$, so this is an automorphism.
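Written out, the automorphism just described (call it $f$, assuming LEM throughout) is:

```latex
f(X) \;:\equiv\;
\begin{cases}
  \neg X & \text{if } X \text{ is a proposition,} \\
  X      & \text{otherwise.}
\end{cases}
% Negation sends propositions to propositions, and under LEM we have
% \neg\neg P = P for every proposition P (by propositional
% extensionality), so f is its own inverse: f(f(X)) = X.
% In particular f swaps the empty type and the unit type.
```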

The above automorphism swaps the empty type with the unit type and leaves all other types unchanged. More generally, assuming excluded middle we can swap any two types with equivalent automorphism ∞-groups, since in that case the corresponding connected components of the universe are equivalent. Still more generally, we can permute arbitrarily any family of types all having the same automorphism ∞-group.

The simplest case of this is when all the types are *rigid*, i.e. have trivial automorphism ∞-group. The types $\mathbf{0}$ and $\mathbf{1}$ are both rigid, and at least with excluded middle no other sets are; but there can be rigid higher types. For instance, if $G$ is a group that is a set (i.e. a 1-group), then its Eilenberg–Mac Lane space $K(G,1)$ is a 1-type, and its automorphism ∞-group is a 1-type whose $\pi_0$ is the group $\mathrm{Out}(G)$ of outer automorphisms of $G$ and whose $\pi_1$ is the center $Z(G)$ of $G$. Thus, if $G$ has trivial outer automorphism group and trivial center, then $K(G,1)$ is rigid. Such groups are not uncommon, including for instance the symmetric group $S_n$ for any $n \neq 2, 6$. Thus, assuming excluded middle we can permute these arbitrarily, producing uncountably many automorphisms of the universe.
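The homotopy groups mentioned here can be summarized as follows (standard facts, stated in my notation):

```latex
\pi_0\big(\mathrm{Aut}(K(G,1))\big) \;\cong\; \mathrm{Out}(G),
\qquad
\pi_1\big(\mathrm{Aut}(K(G,1))\big) \;\cong\; Z(G).
% So if Out(G) and Z(G) are both trivial, the automorphism
% \infty-group Aut(K(G,1)) is contractible, i.e. K(G,1) is rigid;
% this applies for instance to G = S_n with n \neq 2, 6.
```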

In the converse direction, we recorded the following.

**Theorem.** If there is an automorphism of the universe that maps some inhabited type to the empty type, then excluded middle holds.

**Corollary.** If there is an automorphism $f$ of the universe with $f(\mathbf{0}) \neq \mathbf{0}$, then the double negation $\neg\neg\mathrm{LEM}$ of the law of excluded middle holds.

This corollary relates to an unclaimed prize: if from an arbitrary equivalence $f : \mathcal{U} \to \mathcal{U}$ such that $f(X) \neq X$ for a particular $X$ you get a non-provable consequence of excluded middle, then you win beers. So this corollary wins you 0 beers. Although perhaps sober, we think this is an achievement worth recording.

Using this corollary, in turn, we can win a few more beers, where $\mathrm{LEM}_{\mathcal{U}}$ denotes excluded middle for propositions in a universe $\mathcal{U}$. If $\mathcal{U}_0 : \mathcal{U}_1$ we have, by cumulativity, that every proposition in $\mathcal{U}_0$ is also a proposition in $\mathcal{U}_1$. Suppose $f$ is an automorphism of $\mathcal{U}_1$ with $f(\mathbf{0}) \neq \mathbf{0}$; then $\neg\neg\mathrm{LEM}_{\mathcal{U}_0}$ holds. For suppose that $\neg\mathrm{LEM}_{\mathcal{U}_0}$; we derive a contradiction. By the corollary, we obtain $\neg\neg\mathrm{LEM}_{\mathcal{U}_1}$. But $\mathrm{LEM}_{\mathcal{U}_1}$ implies $\mathrm{LEM}_{\mathcal{U}_0}$ by cumulativity, so $\neg\neg\mathrm{LEM}_{\mathcal{U}_0}$ also holds, contradicting our assumption that $\neg\mathrm{LEM}_{\mathcal{U}_0}$.

To date no one has been able to win 1 beer.
