Here’s a cute little example of programming in HoTT that I worked out for a recent talk.

## Abstract Types

One of the main ideas used to structure large programs is *abstract types*. You give a specification for a component, and then you divide your program into two parts, the *client* and the *implementation*. In the latter, you implement the component with some concrete code that satisfies the specification. In the client, you use the functionality provided by the component—but, crucially, relying only on the spec, not the concrete implementation. This has lots of advantages: you can change each part independently; different people can work on each part simultaneously; when there’s a bug, you know who to blame….

Here’s a simple example, written in ML (I’ve included links to some lecture notes, in case there are people in the audience who would like more background on ML or on this kind of programming). The abstraction in question is that of a sequence, an ordered collection of values.

```
signature SEQ =
sig
  type α seq
  val empty  : α seq
  val single : α → α seq
  val append : α seq → α seq → α seq
  (* Behavior: map f < x1, ..., xn > = < f x1, ..., f xn > *)
  val map    : (α → β) → (α seq → β seq)
  val reduce : (α → α → α) → α → α seq → α
end
```

The first line says that for any type α there is a type `α seq`. There are various ways of building sequences: you can make an empty one, a 1-element one, and you can concatenate two of them. And there are some ways of processing them: `map` applies a function to each element; `reduce` applies a binary operation to pairs of elements (e.g., `reduce + 0` adds up all the elements in an `int seq`). I’ve given a behavioral spec for `map` only; in reality, you would say what the behavior of each operation is. A full sequence spec would have lots more operations, but hopefully you get the idea.

On the implementation side, you can do something simple, like implementing sequences with lists:

```
structure ListSeq : SEQ =
struct
  type α seq = α list
  val map = List.map
  ...
end
```

or you might do something more complicated, like implementing sequences as some sort of tree or array for efficiency. Indeed, you can even give a parallel implementation of sequences

```
structure PSeq : SEQ = ...
```

where `map` applies `f` to each element in parallel, `reduce` computes on a tree, and so on. Using this implementation of sequences, you can exploit the parallelism available in a multicore machine or in a computer cluster, while writing exactly the same clean functional code that you would have written in a sequential program. We teach this idea of data parallelism in our first-year FP course and second-year algorithms and data structures course at CMU.
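For concreteness, here is a runnable OCaml transliteration of this setup, together with a small client that relies only on the signature. (The module layout mirrors the SML above; the `sum_of_squares` client and its names are illustrative, not from the original.)

```ocaml
(* A runnable OCaml transliteration of the list-backed setup above. *)
module type SEQ = sig
  type 'a seq
  val empty  : 'a seq
  val single : 'a -> 'a seq
  val append : 'a seq -> 'a seq -> 'a seq
  val map    : ('a -> 'b) -> 'a seq -> 'b seq
  val reduce : ('a -> 'a -> 'a) -> 'a -> 'a seq -> 'a
end

module ListSeq : SEQ = struct
  type 'a seq = 'a list
  let empty = []
  let single x = [x]
  let append = ( @ )
  let map = List.map
  let reduce f u s = List.fold_left f u s
end

(* The client relies only on the signature SEQ, never on the representation,
   so it would work unchanged with a tree-backed or parallel implementation. *)
let sum_of_squares (module S : SEQ) =
  let s = S.append (S.single 1) (S.append (S.single 2) (S.single 3)) in
  S.reduce ( + ) 0 (S.map (fun x -> x * x) s)

let () = assert (sum_of_squares (module ListSeq) = 14)
```

Swapping in a different implementation is just a matter of passing a different first-class module to the client.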

## Reasoning using Views

While abstract types get used all over ML and Haskell code, they are less common in dependently typed programming. Why? In simply-typed programming, it’s often good enough to have informal behavioral specs in comments (like `(* map f < x1, ..., xn > = < f x1, ..., f xn > *)`). But with dependent types, you usually need a formal behavioral spec for the component, in order to prove properties about the client code. One cheap (and common) solution is to avoid using abstract types, so that you can reason directly in terms of the implementation in the client code. But then you lose all the benefits of abstract types, and your reasoning is tied to a particular implementation! Moreover, it’s unclear how well this technique scales: as the implementation gets more sophisticated, the code often gets less useful for external reasoning in the client anyway (e.g., if you implement sequences with a balanced binary tree, your client code would need to talk about red-black invariants).

A better solution is to associate a concrete *view* type with the abstract type (Wadler and others have developed this technique for reasoning about and pattern matching on abstract types). One nice way to do this is to stipulate that (1) the abstract type is isomorphic to the view type and (2) all of the operations respect this isomorphism. In our running example, we could give a view of sequences as lists as follows (I’ll still use an ML-ish notation, but write ≃ for the identity type; `Iso` for a type of isomorphisms—functions back and forth that compose to the identity up to ≃; and `show` and `hide` for applying an isomorphism `i` in the forward and backward directions, respectively):

```
signature SEQ' =
sig
  type α seq
  val i : Iso (α seq, α list)

  val empty       : α seq
  val empty_spec  : show i empty ≃ []

  val single      : α → α seq
  val single_spec : ∀ (x) → show i (single x) ≃ [x]

  val append      : α seq → α seq → α seq
  val append_spec : ∀ (s1 s2) → show i (append s1 s2) ≃ (show i s1 @ show i s2)

  val map         : (α → β) → (α seq → β seq)
  val map_spec    : ∀ (s f) → show i (map f s) ≃ (List.map f (show i s))

  val reduce      : (α → α → α) → α → α seq → α
  val reduce_spec : ∀ (b u s) → (reduce b u s) ≃ (List.reduce b u (show i s))
end
```

`i` is an isomorphism between the abstract type of sequences and a concrete view type, in this case `list`s. For each operation, we say how it interacts with the view, essentially saying that each operation on sequences behaves like the corresponding operation on lists. E.g., `map_spec` says that the view of a `map` is the same as doing a `List.map` on the view—and, because `List.map` is defined inductively, this formalizes the above informal spec’s `...`.
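To see why the inductive definition does the work here, recall what `List.map` looks like (an OCaml sketch, standing in for the SML library function):

```ocaml
(* The inductive definition of List.map: this is what map_spec appeals to,
   and why it pins down the `...` in the informal comment—the recursion
   determines the behavior on a sequence of any length. *)
let rec map f xs =
  match xs with
  | [] -> []
  | x :: rest -> f x :: map f rest

let () = assert (map (fun x -> x * x) [1; 2; 3] = [1; 4; 9])
```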

These specs suffice to do proofs about the component, independent of the concrete implementation. The main idea is that properties can be derived from the corresponding properties for lists. For example, `map` fusion (functoriality)

```
map (g . f) ≃ map g . map f
```

is useful for optimizing a two-pass algorithm into a one-pass algorithm. To prove it, we can reason as follows (writing `.` for function composition and `fromList/toList` for `hide/show i`):

```
  map (g . f)
≃ fromList . List.map (g . f) . toList                                  [by map_spec]
≃ fromList . List.map g . List.map f . toList                           [by fusion for List.map]
≃ fromList . List.map g . (toList . fromList) . List.map f . toList    [i is an isomorphism]
≃ (fromList . List.map g . toList) . (fromList . List.map f . toList)  [associativity of .]
≃ map g . map f                                                         [by map_spec]
```
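The list-level fact used in the second step can be checked concretely (an OCaml sketch; the `<<` composition operator is defined locally):

```ocaml
(* Fusion for List.map: mapping a composition equals composing the maps.
   Checked here on sample data rather than proved. *)
let ( << ) g f = fun x -> g (f x)

let f x = x * 2
let g x = x + 1

let lhs = List.map (g << f) [1; 2; 3]
let rhs = (List.map g << List.map f) [1; 2; 3]
let () = assert (lhs = rhs)  (* both are [3; 5; 7] *)
```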

## Views in HoTT

There are a couple of reasons why the above kind of specification is annoying in current dependently typed programming languages. First of all, it’s pretty verbose, and in a boring sort of way: for each component of the signature, you need to give a spec, but these specs are all of the form `X ≃ appropriately wrapped List.X`. Second, while the specs fully characterize the operations up to the identity type, they have no *definitional* behavior—and in some cases, where the implementation has nice definitional behavior, the abstract version will be much less convenient to work with because of this.

We can easily solve the first problem in Homotopy Type Theory, as I will illustrate below. While I don’t have anything concrete to say about the second problem right now, part of the computational interpretation of univalence is going to involve developing better tools for pushing around propositional equalities, so, given this, hopefully the lack of definitional equalities won’t be as much of an issue.

Above, the code for the spec was linear in the size of the component. In Homotopy Type Theory, we can do it in one line! Define `PSeq` and `ListSeq : SEQ` as above, and then here’s the whole spec:

```
spec : PSeq ≃ ListSeq
```

Thinking of `SEQ` as a Σ-type, this `spec` entails everything that we said above:

- there is an isomorphism between `α PSeq.seq` and `α ListSeq.seq` for each α. This follows immediately from the “definition” of the identity type at each type: a path in Σ is a pair of paths; a path in Π is a family of paths; a path between types is an isomorphism (really, an equivalence, but because we’re talking about hsets I can ignore the coherence data).

- each `PSeq` operation behaves like the corresponding list operation. First, the definition of path in a Σ-type says that the `PSeq` operations behave like the transport of the `ListSeq` operations along this isomorphism. Then, using the computation rules for `transport`, this simplifies to the specs given above: each `PSeq` operation behaves like the appropriate wrapping of the corresponding `ListSeq` operation with the view isomorphism.
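Spelled out, the characterization of paths in a Σ-type used in both bullets is the following (a sketch, with `transport` as in standard HoTT notation):

```latex
% A path between dependent pairs is a path between the first components
% together with a path between the second components lying over it:
(a, b) =_{\Sigma_{x:A} B(x)} (a', b')
  \;\simeq\; \sum_{p \,:\, a =_A a'} \Big(\mathsf{transport}^{B}(p, b) =_{B(a')} b'\Big)
```

Instantiating `A` with the type of the `seq` component and `B` with the types of the operations gives the two bullets above.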

In Coq or Agda, you can calculate this all out by hand, but because the proof is basically just computation, I expect that a proof assistant could easily expand the one-line spec into `SEQ'` programmatically. The point is that the process of annotating the signature with the behavioral spec is exactly an instance of the generic programming that is available in HoTT.

We could even go further and observe that the little derivations of facts like `map` fusion from the corresponding list properties can be abstracted away, but I won’t illustrate that here.

## Non-isomorphic views?

One complaint that is typically levied against this kind of spec is that the implementation and the view are often not exactly isomorphic: there might only be functions back and forth, or a section/retraction. For example, if you implement sequences as binary trees, then there will be many differently-associated trees that are modelled by the same list: the concrete implementation might have a finer equivalence than what is desired for the abstract spec. Because we have quotient types available in HoTT, I would hope that we can handle this case, but this needs to be investigated on some examples.

Thanks for the nice post.

Do you want to elaborate on the connection with *The view from the left* by James McKinna and Conor McBride? They explain that, by Curry–Howard, Wadler’s views can be seen as new induction principles on a type.

Good question! In terms of this running example, I think the analogous thing to their use of views would be to replace `i` with (1) `fromList : α list → α seq` and (2) a dependent elimination rule that says that to prove something about any sequence, it suffices to show it for `fromList l` for all `l` (or you could break it into `[]` and `::` cases: to show `P(s)` for any sequence `s`, it suffices to show `P(hide [])` and `P(hide (x :: show s))` for any `x` and `s`, assuming `P(s)`). This is essentially `toList` composed with list induction: you can certainly implement this view from the above interface.
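A simply-typed analogue of that derived eliminator can be sketched in OCaml (the dependent version needs a dependently typed language; `Seq`, `to_list`, and `seq_elim` are illustrative stand-ins for the abstract type, `show i`, and the eliminator):

```ocaml
(* To consume a sequence, convert to the list view and use list recursion.
   This is "toList composed with list induction" in simply-typed form. *)
type 'a seq = Seq of 'a list        (* stand-in for some abstract representation *)
let to_list (Seq l) = l

(* seq_elim nil cons s : the [] / :: eliminator induced by the view. *)
let seq_elim nil cons s = List.fold_right cons (to_list s) nil

let () = assert (seq_elim 0 ( + ) (Seq [1; 2; 3]) = 6)
```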

Both ways of working seem like good tools, though at least for the “adjoint equivalence” definition of equivalence, what you get from univalence is closer to (fromList, toList) than to (fromList, dependent elim). Though the dependent elim feels a little like the “weak equivalence” definition of equivalence, so maybe there is some relation there.

For reasoning, if you write programs in the client using the dependent elim for the view, you can of course reason about them using the same dependent elim. But the situation I had in mind was that more of the programming happened behind the interface, and the point of the view was to give a spec for those operations without telling the client anything about the actual implementation. And the main point I was trying to emphasize was the *automatic* lifting of isos, which is the new ingredient in HoTT.

Also, the views you want for reasoning are sometimes different than the views you want for programming. E.g., for specs, it seems fine to convert a sequence to a list wholesale, but this would be pretty inefficient. For programming, people often use a 1-level view, where you view a sequence as [] or x::s where s is another *sequence* (not a list)—i.e. you only expose the first level. But then you can’t just reuse the list operations wholesale.
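A one-level view might look like this (an OCaml sketch; all names are illustrative):

```ocaml
(* A one-level view: expose only the first constructor; the tail is another
   sequence, not a list, so no wholesale conversion happens. *)
type 'a seq = Seq of 'a list                 (* stand-in representation *)
type 'a view = Nil | Cons of 'a * 'a seq

let expose (Seq l) =
  match l with
  | [] -> Nil
  | x :: xs -> Cons (x, Seq xs)

(* A client consumes sequences one level at a time, never seeing the list. *)
let head_default d s =
  match expose s with
  | Nil -> d
  | Cons (x, _) -> x

let () = assert (head_default 0 (Seq [7; 8; 9]) = 7)
```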

Yes, this is very nice! It’s exciting to see univalent ideas having applications in programming.

I would like to see the relationship with “Observational Equality” explored more myself, but I guess this is somewhat beside the point.

Homotopy type theory is very much a generalization of OTT: OTT’s propositions are like what we call hprops, and OTT’s types are like what we call hsets. OTT equality is based on setoids, which are a special case of (higher) groupoids. There are a few technical differences (e.g. proofs of propositions are *definitionally* equal in OTT, whereas proofs of hprops are only propositionally equal in HoTT) but on the whole HoTT looks a lot like a higher-dimensional version of OTT.

Thanks for the reply! I am anxious to see how the computational semantics of univalence will be described (I hope it will be comprehensible to mere mortals like myself).

I know that several type theories can be encoded as a single quotient inductive-inductive type, for example: in that spirit, would you think that the OTT one is a kind of truncation of the HoTT one, or something like that?

Can you show how map fusion would be derived in the HoTT version?

At a first cut, you can do the same derivation that I did above: `spec` includes `fromList`, `toList`, and all the specs we need to carry out that derivation.

I think you can also do it in a more generic way (using transport), but I need to work out the details.