Last year, Mike Shulman proved that π₁(S¹) is Z in Homotopy Type Theory. While trying to understand Mike’s proof, I came up with a simplification that shortens the proof a bunch (100 lines of Agda as compared to 380 lines of Coq).
My proof uses essentially the same ideas—e.g., the universal cover comes up in the same way, and the functions witnessing the isomorphism are the same. The main new idea is a simpler way of proving the hard direction of “and the composites are the identity”: that if you start with a path on the circle, encode it as its winding number, and then decode it back as a path, then you get the original path. If you already understand Mike’s proof, the diff is essentially this: you can replace steps 2–5 at the end by:

1. Define an “encoding” function of type

       forall (x : circle), (base ~~> x) -> circle_cover x

   as the transport along circle_cover of 0.

2. Use J to prove that decoding (Mike’s step 1) after encoding is the identity (this is the hard composite to work with, because it starts and ends with a path). The trick here is to ask the question for paths (base ~~> x) for an arbitrary x, not just for loops base ~~> base, so that you can use J.
This is clearly similar to Mike’s proof (we could even ask if these two proofs are homotopic!): his proof uses contractibility of a path space, which is exactly J. But the amount of machinery about total spaces that you need here is much less.
A question, though: I came up with this proof basically on type-theoretic grounds (“there must be a simpler proof term of this type”). Is there anything to say about it in geometric or higher-categorical terms? In the semantics, is it the same proof as Mike’s? A known but different proof? Or has the type theory suggested a new way of proving this result?
I’ll explain the proof in detail below the fold.
Definitions
First, the definitions. I have defined the circle S¹ so that the elimination form computes definitionally on points (but not paths). This gives the following interface:
    S¹ : Set
    base : S¹
    loop : base ≃ base

    S¹-rec : {C : Set} → (a : C) → (p : a ≃ a) → S¹ → C
    S¹-rec a p Base = a  -- really an equation

    βloop/rec : {C : Set} → (a : C) → (p : a ≃ a)
              → resp (S¹-rec a p) loop ≃ p

    S¹-elim : {C : S¹ → Set} → (a : C base) (p : subst C loop a ≃ a)
            → (x : S¹) → C x
    S¹-elim a p Base = a

    βloop/elim : {C : S¹ → Set} → (a : C base) (p : subst C loop a ≃ a)
               → respd (S¹-elim {C} a p) loop ≃ p
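As a quick illustration of the recursor, a “degree two” map on the circle can be written using only this interface (my example, not from the code accompanying the post):

```agda
double : S¹ → S¹
double = S¹-rec base (loop ∘ loop)
```

It sends base to base definitionally, and, by the propositional β-rule βloop/rec, sends loop to loop ∘ loop.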
Notation: I use Id M N and M ≃ N interchangeably for the identity type/propositional equality/equivalence/paths (apologies to people who have started using ≃ for weak equivalence/adjoint equivalence/etc.—I had started using it as an infix symbol for general propositional equality). See Paths.agda for definitions if you’re confused. The one thing I should comment on is that I’m using the “logical” terminology for
    subst : {A : Set} (p : A → Set) {x y : A} → Id x y → p x → p y

    resp : {A C : Set} {M N : A} (f : A → C) → Id M N → Id (f M) (f N)

    respd : {A : Set} {C : A → Set} {M N : A} (f : (x : A) → C x)
          → (p : Id M N) → Id (subst C p (f M)) (f N)
so “subst” = “transport” and “resp” = “maponpaths”.
For integers:
    data Int : Set where
      Pos  : Nat → Int
      Zero : Int
      Neg  : Nat → Int
This definition is nice for what follows, but has the somewhat unfortunate consequence that 1 is Pos 0, −1 is Neg 0, etc.
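For concreteness, the successor and predecessor functions used below would act on this representation by casing on the constructor boundaries. A sketch (these particular definitions are my guesses; the ones in the actual code may differ):

```agda
succ : Int → Int
succ Zero        = Pos Z
succ (Pos n)     = Pos (S n)
succ (Neg Z)     = Zero
succ (Neg (S n)) = Neg n

pred : Int → Int
pred Zero        = Neg Z
pred (Pos Z)     = Zero
pred (Pos (S n)) = Pos n
pred (Neg n)     = Neg (S n)
```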
The Universal Cover
To define the universal cover, we’ll need the successor isomorphism succ≃ : Int ≃ Int, which is defined by applying univalence to the successor function succ : Int → Int, which maps x to x + 1. The universal cover of the circle maps the circle into the universe, taking the base point to Int and the loop to succ≃:
    C : S¹ → Set
    C = S¹-rec Int succ≃
To introduce the proof notation, let’s derive a little fact that we’ll need later: transporting in the cover with the loop is the successor function:
    subst-C-loop : subst C loop ≃ succ
    subst-C-loop =
      subst C loop                   ≃〈 subst-resp C loop 〉
      subst (λ x → x) (resp C loop)  ≃〈 resp (subst (λ x → x)) (βloop/rec Int succ≃) 〉
      subst (λ x → x) succ≃          ≃〈 subst-univ _ 〉
      succ ∎
The notation here is an Agda trick for writing out an equational deduction in something of a two-column proof, except the justifications come in between the things they prove equal:
    x ≃〈 reason x equals y 〉 y
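These combinators are ordinary Agda mixfix definitions; roughly the following (a sketch: the actual definitions in Paths.agda may differ, e.g. in the order of composition):

```agda
_≃〈_〉_ : {A : Set} (x : A) {y z : A} → x ≃ y → y ≃ z → x ≃ z
_ ≃〈 p 〉 q = q ∘ p  -- compose the step with the rest of the chain

_∎ : {A : Set} (x : A) → x ≃ x
_ ∎ = Refl           -- end of the chain
```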
In this case, we:
1. Reassociate subst C loop into a subst with the identity function and a resp C, so that we can
2. Use the HIT β-reduction rule for resp C loop (recall that C is defined by an S¹-rec).
3. Use the β-reduction rule for univalence, which says that subst (λ x → x) on an isomorphism given explicitly by the univalence axiom selects the forward direction of the isomorphism.
Functions back and forth
My goal is to prove that Ω₁(S¹) is homotopy equivalent to Z—this can be improved to an adjoint equivalence and therefore a path in the universe. Because Z is an h-set, this entails, as easy corollaries, that π₁(S¹) (with the quotienting) is equivalent to Z, and that the higher loop spaces (and therefore homotopy groups) are trivial. To prove that π₁(S¹) and Z are isomorphic groups, we should also check that this map is a group homomorphism—composition of paths gets mapped to addition—but I won’t do this here.
So the goal is to define functions
    base ≃ base → Int
    Int → base ≃ base
that compose to the identity.
To start, we can define loop^ n by induction on n:
    loop^ : Int → base ≃ base
    loop^ Zero        = Refl
    loop^ (Pos Z)     = loop
    loop^ (Pos (S n)) = loop ∘ loop^ (Pos n)
    loop^ (Neg Z)     = ! loop
    loop^ (Neg (S n)) = ! loop ∘ loop^ (Neg n)
Thinking of the integers as codes (names) for paths, this “decodes” an integer as a path.
The other direction “encodes” a path as its winding number, and can be defined by transporting along the universal cover:
    encode' : base ≃ base → Int
    encode' p = subst C p Zero
The intuition here is that the action on paths of C takes loop to the successor function, and, because it’s functorial, therefore takes loop^ n to n successor functions composed together. Applying this composite to Zero therefore computes n, giving us the inverse of loop^.
Important Point 1: However, for what’s coming below, it’s useful to observe that encode can be given a more general type:
    encode : {a : S¹} → base ≃ a → C a
    encode p = subst C p Zero
For any point a on the circle and path p from base to a, encode p is an element of the cover of a. There is no reason to restrict attention to loops at base. This generality is going to be important below.
The easy composite
It’s easy to see that if you start with a number, decode it as a path, and then reencode it, you get back where you started: because you’re starting with a number, you can do induction! Here are the cases for 0 and positives (negatives are a symmetric 6 lines of code):
    encode-loop^ : (n : Int) → encode (loop^ n) ≃ n
    encode-loop^ Zero    = Refl
    encode-loop^ (Pos Z) = app≃ subst-C-loop
    encode-loop^ (Pos (S y)) =
      subst C (loop ∘ loop^ (Pos y)) Zero          ≃〈 app≃ (subst-∘ C loop (loop^ (Pos y))) 〉
      subst C loop (subst C (loop^ (Pos y)) Zero)  ≃〈 app≃ subst-C-loop 〉
      succ (subst C (loop^ (Pos y)) Zero)          ≃〈 resp succ (encode-loop^ (Pos y)) 〉
      succ (Pos y) ∎
In the 0 case it’s true by computation. In the 1 case it’s the lemma we proved above. For other positives, you need to apply functoriality, use the lemma above for the first subst C loop, and then use the IH for the rest.
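The functoriality lemma invoked in the first step should have a type along these lines (my guess at the statement; the version in Paths.agda may differ):

```agda
subst-∘ : {A : Set} (C : A → Set) {x y z : A} (q : y ≃ z) (p : x ≃ y)
        → subst C (q ∘ p) ≃ (λ c → subst C q (subst C p c))
```

Applying app≃ to it at Zero yields exactly the pointwise equation used in the chain above.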
The hard composite
The other direction is harder, because you start with a loop p:
    stuck : {p : base ≃ base} → p ≃ loop^ (encode p)
    stuck = ?
and there is no obvious elimination rule for p : base ≃ base (this is where UIP/Streicher’s K would come in, in “normal” dependent type theory, but those clearly aren’t the right thing here). Indeed, this whole proof can be seen as constructing an induction principle for p : base ≃ base.
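To make that reading concrete: the pieces constructed below together yield an induction principle of roughly the following shape (my formulation; the name loop-induction and this packaging are not in the code):

```agda
loop-induction : (P : base ≃ base → Set)
               → ((n : Int) → P (loop^ n))
               → (p : base ≃ base) → P p
```

where loop-induction P h p would be obtained from h (encode p) by transporting along a proof that loop^ (encode p) ≃ p, i.e. exactly the “stuck” statement above.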
The trick is to generalize the problem to consider not just loops, but arbitrary paths base ≃ x, so that we can use J. To set this up, we first show that loop^ extends to a function
    decode : {a : S¹} → C a → base ≃ a
    decode {a} =
      S¹-elim {λ x → C x → base ≃ x}
        loop^
        (subst (λ x' → C x' → base ≃ x') loop loop^
           ≃〈 subst→ C (Id base) loop loop^ 〉
         (λ y → subst (Id base) loop (loop^ (subst C (! loop) y)))
           ≃〈 λ≃ (λ y → subst-Id-post loop (loop^ (subst C (! loop) y))) 〉
         (λ y → loop ∘ (loop^ (subst C (! loop) y)))
           ≃〈 λ≃ (λ y → resp (λ x' → loop ∘ loop^ x') (app≃ subst-C-!loop)) 〉
         (λ y → loop ∘ (loop^ (pred y)))
           ≃〈 λ≃ shift 〉
         (λ y → loop^ y) ∎)
        a
The proof is an S¹-elim: When a is base, we use loop^. To show that this respects the loop, we apply the functorial action of (λ x' → C x' → base ≃ x'), then reduce subst C (! loop) to the predecessor function (analogous to subst C loop ≃ succ), and then apply shift:
    shift : (n : Int) → (loop ∘ (loop^ (pred n))) ≃ loop^ n
The proof of shift is a simple induction on n (12 lines of code).
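The case analysis behind shift might look like this (a sketch, with the equational justifications left as holes; it assumes pred computes in the evident way on each constructor, and the holes are filled by general groupoid laws about ∘, !, and Refl):

```agda
shift : (n : Int) → (loop ∘ (loop^ (pred n))) ≃ loop^ n
shift Zero        = {!!}  -- loop ∘ ! loop ≃ Refl, by the inverse law
shift (Pos Z)     = {!!}  -- loop ∘ Refl ≃ loop, by the unit law
shift (Pos (S n)) = {!!}  -- loop ∘ loop^ (Pos n) is loop^ (Pos (S n)) definitionally
shift (Neg Z)     = {!!}  -- loop ∘ (! loop ∘ ! loop) ≃ ! loop, by cancellation
shift (Neg (S n)) = {!!}  -- cancel loop against the outer ! loop, as in the Neg Z case
```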
Important Point 2: At this point, we’re basically done! The reason is that, once we have
    encode : {a : S¹} → base ≃ a → C a
    decode : {a : S¹} → C a → base ≃ a
we can generalize the theorem to
    decode-encode : ∀ {a} → (p : base ≃ a) → decode (encode p) ≃ p
polymorphically in a. So, unlike above, where p : base ≃ base, here p has a free endpoint. Thus, we can use J to contract it to reflexivity:
    decode-encode {a} p = jay1 (λ a' (p' : base ≃ a') → decode (encode p') ≃ p') p Refl
Here jay1 is the Paulin-Mohring J rule: it suffices to consider the case where a' is base and p' is Refl, in which case decode (encode Refl) reduces to Refl, and we’re done.
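For reference, the type I am assuming for jay1, reconstructed from its use above (a sketch; see Paths.agda for the actual statement):

```agda
jay1 : {A : Set} {a : A} (C : (x : A) → a ≃ x → Set)
     → {x : A} (p : a ≃ x)
     → C a Refl
     → C x p
```

That is, the left endpoint a is fixed, and to prove C at an arbitrary path p it suffices to prove C at Refl.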
Analysis
Let me try an explanation for what’s going on here: The way it’s scattered around the code is a little funny, but I think the ingredients of an induction on the length of the path are hiding in decode-encode. decode-encode is tantamount to saying that every loop p : Id base base arises as loop^ n for some n (in particular, encode p). In decode-encode, we check the base case, when p is reflexivity. To define decode, the main lemma is shift, which is essentially saying that if p can be expressed as loop^ (pred n), then loop ∘ p can be expressed as loop^ n. This feels like an inductive step, where we show that the property is closed under going around the loop once. Somehow the general facts about inverses mean you don’t need to consider ! loop explicitly. So: S¹-elim tells you that the only inductive case you need to consider is going around the loop; J tells you that the only base case you need to consider is reflexivity. Cover these two cases, and you cover all paths!
Finally, it’s important to note that Mike’s proof proves slightly more than just that Id base base is equivalent to Int. It also proves that Id base x is equivalent to C x for all x. I’ve done half of that above; the other half would be to show that encode-loop^ generalizes to
    encode-decode : ∀ {a} → (n : C a) → encode (decode n) ≃ n
I think you should be able to do this using an S¹-elim, but I haven’t tried.
Overall, I wanted to write up this simplification to see if anyone has thoughts about what’s going on here, and because it might make it easier to formalize other results in algebraic topology in HoTT.
Agda code is here.
On the one hand, this sort of calculation is nice in showing how having enough classifying spaces to work with gives a rich topological language; on the other hand, it vaguely warns me off of wanting to assume univalence itself, because I’m not sure I always want it provable that the fundamental group of the circle is the integers — particularly not a copy of the integers on which I can program an equality decider. Sometimes I might want to localize at, say, the prime 5, or just invert 2. So it makes more sense to take the fundamental group of the circle as being the de facto ring — that it *is* essentially a ring I’m confident of being able to encode, with or without deciding what the ring is.
That’s an excellent point! More generally, the proper question to ask about an axiom is not “should I assume it?” but “when should I assume it?” Homotopy type theory with HITs and the univalence axiom is an internal language of (∞,1)-toposes, so if you want a language which applies more generally than that, then you should assume fewer axioms.
For instance, homotopy type theory with dependent sums, products, and identity types, but no HITs, universes, or univalence, is an internal language for locally cartesian closed (∞,1)-categories. I don’t know exactly what adding HITs without univalence corresponds to, though probably local presentability would suffice.
On the other hand, as far as I know, the (∞,1)-category of spaces localized or completed at some set of primes is not even locally cartesian closed (is it?). An (∞,1)-category with only finite limits still has a sort of homotopy type theory as its internal language, but that type theory has only dependent sums and identity types, no dependent products. This severely restricts the amount of homotopy theory that we can develop; for instance, none of the definitions like isContr or isEquiv can even be written down without using dependent products.
I am inclined to think that a more fruitful way to study localized and complete spaces with type theory will be to interpret type theory (with dependent products, HITs, and univalence) in the unlocalized category, and identify the local or complete types internally, such as I suggested here. But I haven’t actually tried to push that any further yet.
This is very nice!! I think this is an excellent example of how the type-theoretic notion of “path induction” gives a new perspective on old homotopy-theoretic facts. My original proof was of course very strongly motivated by a classical proof in algebraic topology; this proof takes more seriously the idea of induction as a way to prove things about paths.
(I do feel that it’s not entirely fair to compare the lengths of proofs in Agda versus in Coq, especially when you are using definitionally-computing-on-points HITs in Agda, which are not available in Coq (and, apparently, are at least a little questionable even in Agda). I bet that if you had to keep track of propositional computation rules for base, then your 100 lines would grow somewhat (though it would probably still be significantly less than 380). And, unfortunately, Agda proof terms are still pretty incomprehensible to me, compared to stepping through a Coq proof script. All of which makes me want to try implementing your proof in Coq.)
I agree that comparing lines of code in different programming languages makes little to no sense; for example, an average Agda line is much denser than a line in your Coq proof script. Any such statement automatically comes with a “take this with a grain of salt” disclaimer in my mind; sorry I didn’t put that in the text.
So it would be great to do it in Coq! I don’t think it would be hard: my encode is your cover_to_pathcirc, my decode is just transport with circle_cover. So you should be able to prove the “hard” composite using the definitions you have in essentially one path_induction. For the “easy” composite, my subst-C-loop is your fiber_loop_action, and other than that it’s just an induction using functoriality of transport, which I think is part of the tactics anyway?
There are things in yours that I didn’t need, like circle_cover_dfib and fiber_wind_action and the lemmas after line 230 or so. I also didn’t prove exactly wind_succ (and it looks like wind_pred is unused already?); shift is similar, but it uses pred on the left instead of succ on the right—I think our proofs in the second part of cover_to_pathcirc might be a little different.
Did you try simplifying your proof with Andrej’s new tactics? hott_simpl …