Lean is a new player in the field of proof assistants for Homotopy Type Theory. It is being developed by Leonardo de Moura at Microsoft Research, and it will remain under active development for the foreseeable future. The code is open source and available on GitHub.
You can install it on Windows, OS X or Linux. It comes with a useful Emacs mode, with syntax highlighting, on-the-fly syntax checking, autocompletion and many other features. There is also an online version of Lean which you can try in your browser. The online version is quite a bit slower than the native version and takes a little while to load, but it is still useful for trying out small code snippets. You are invited to test the code snippets in this post in the online version. You can run code by pressing shift+enter.
In this post I'll first say more about the Lean proof assistant, and then discuss the considerations behind the HoTT library of Lean (Lean has two libraries: the standard library and the HoTT library). I will also cover our approach to higher inductive types. Since Lean is not mature yet, the things mentioned below may change in the future.
Update January 2017: the newest version of Lean currently doesn’t support HoTT, but there is a frozen version which does support HoTT. The newest version is available here, and the frozen version is available here. To use the frozen version, you will have to compile it from the source code yourself.
Lean
Examples
First let's go through some examples. Don't worry if you don't fully understand these examples for now; I'll go over the features in more detail below. You can use the commands check, print and eval to ask Lean for information. I will give the output as a comment, which is a double dash in Lean. check just gives the type of an expression.
    check Σ(x y : ℕ), x + 5 = y -- Σ (x y : ℕ), x + 5 = y : Type₀
This states that this sigma-type lives in the lowest universe in Lean (since ℕ and eq live in the lowest universe). Unicode can be input (in both the browser version and the Emacs version) using a backslash. For example Σ is input by \Sigma or \S. In the browser version you have to hit space to convert it to Unicode.
check is useful to see the type of a theorem (the curly braces indicate that those arguments are implicit).
    check @nat.le_trans -- nat.le_trans : Π {n m k : ℕ}, n ≤ m → m ≤ k → n ≤ k
print can show definitions.
    open eq
    print inverse
    -- definition eq.inverse : Π {A : Type} {x y : A}, x = y → y = x :=
    -- λ (A : Type) (x : A), eq.rec (refl x)
It prints which notation uses a particular symbol.
    print × -- _ `×`:35 _:34 := prod #1 #0
And it prints inductive definitions.
    print bool
    -- inductive bool : Type₀
    -- constructors:
    -- bool.ff : bool
    -- bool.tt : bool
eval evaluates an expression.

    eval (5 + 3 : ℕ) -- 8
The Kernel
Lean is a proof assistant whose logic is dependent type theory with inductive types and universes, just like Coq and Agda. It has a small kernel, which implements only the following components:
- Dependent lambda calculus
- Universe polymorphism in a hierarchy of countably many universe levels
- Inductive types and inductive families of types, generating only the recursor for an inductive type.
In particular it does not contain
- A termination checker
- Fixpoint operators
- Pattern matching
- Module management and overloading
Let's discuss the above features in more detail. Function types are exactly as in the book, with judgmental beta and eta rules. There are many different notations for functions and function types, for example:
    open nat
    variables (A B : Type) (P : A → Type) (Q : A → B → Type) (f : A → A)
    check A -> B
    check ℕ → ℕ
    check Π (a : A), P a
    check Pi a, P a
    check ∀a b, Q a b
    check λ(n : ℕ), 2 * n + 3
    check fun a, f (f a)
Lean supports universe polymorphism. So if you define the identity function as follows
    definition my_id {A : Type} (a : A) : A := a
you actually get a function my_id.{u} for every universe u. You rarely have to write universes explicitly, but you can always give them explicitly if you want. For example:
    definition my_id.{u} {A : Type.{u}} (a : A) : A := a

    set_option pp.universes true
    check sum.{7 5} -- sum.{7 5} : Type.{7} → Type.{5} → Type.{7}
The universes in Lean are not cumulative. However, with universe polymorphism, universe cumulativity is rarely needed. In the cases where you would use universe cumulativity, you can use the lift map explicitly. There are currently only a handful of definitions or theorems (fewer than 10) in the HoTT library where lifts are needed (other than those proving something about lifts itself). Some examples:
- In the proof that univalence implies function extensionality;
- In the proof that Type.{u} is not a set, given that Type.{0} is not a set;
- In the Yoneda Lemma;
- In the characterization of equality in sum types (see below for the pattern matching notation):
    open sum
    definition sum.code.{u v} {A : Type.{u}} {B : Type.{v}} : A + B → A + B → Type.{max u v}
    | sum.code (inl a) (inl a') := lift (a = a')
    | sum.code (inr b) (inr b') := lift (b = b')
    | sum.code _       _        := lift empty
The first two examples only need lifts because we also decided that inductive types without parameters (empty, unit, bool, nat, etc.) should live in the lowest universe, instead of being universe polymorphic.
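As a quick illustration, here is a hedged sketch of using lift explicitly (the exact universe signature of lift, and the names up and down, are assumptions based on the HoTT library):

    open lift
    set_option pp.universes true
    check lift ℕ     -- lift ℕ lives in a higher (universe-polymorphic) universe
    check up (3 : ℕ) -- up : A → lift A; the projection down : lift A → A undoes it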
Lastly, the kernel contains inductive types. The syntax to declare them is very similar to Coq’s. For example, you can write
    inductive my_nat :=
    | zero : my_nat
    | succ : my_nat → my_nat
which defines the natural numbers. This gives the type my_nat with two constructors my_nat.zero and my_nat.succ, a dependent recursor my_nat.rec and two computation rules.
Lean also automatically defines a couple of other useful definitions, like the injectivity of the constructors, but this is done outside the kernel (for a full list you can execute print prefix my_nat if you've installed Lean). Lean supports inductive families and mutually defined inductive types, but not induction-induction or induction-recursion.
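To illustrate inductive families, here is a hedged sketch of one (my_le is a hypothetical name; the library's definition of ≤ is similar), where the second ℕ is an index rather than a parameter:

    open nat
    -- my_le n m is inhabited exactly when n ≤ m
    inductive my_le (n : ℕ) : ℕ → Type :=
    | refl : my_le n n
    | step : Π {m}, my_le n m → my_le n (succ m)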
There is special support for structures – inductive types with only one constructor. For these the projections are automatically generated, and we can extend structures with additional fields. This is also implemented outside the kernel. In the following example we extend the structure of groups to the structure of abelian groups.
    import algebra.group
    open algebra
    structure my_abelian_group (A : Type) extends group A :=
    (mul_comm : ∀a b, mul a b = mul b a)

    print my_abelian_group
    check @my_abelian_group.to_group
    -- my_abelian_group.to_group : Π {A : Type}, my_abelian_group A → group A
The result is a structure with all the fields of group, but with one additional field. Also, the coercion from my_abelian_group to group is automatically defined.
The Lean kernel can be instantiated in multiple ways. There is a standard mode where the lowest universe Prop is impredicative and has proof irrelevance. The standard mode also has built-in quotient types. Secondly, there is the HoTT mode, without impredicative or proof-irrelevant universes. The HoTT mode also has some support for HITs; see below.
Elaboration
On top of the kernel there is a powerful elaboration engine which:
- Infers implicit universe variables
- Infers implicit arguments, using higher-order unification
- Supports overloaded notation and declarations
- Inserts coercions
- Infers implicit arguments using type classes
- Converts readable proofs to proof terms
- Constructs terms using tactics
It does most of these things simultaneously, for example it can use the term constructed by type classes to find out implicit arguments for functions.
Let’s run through the above list in more detail.
1. As said before, universe variables rarely need to be given explicitly, usually only in cases where omitting them would be ambiguous (for example sum.code as defined above could also live in Type.{(max u v)+3} if the universe levels were not given explicitly).
2. As in Coq and Agda, we can use binders with curly braces {...} to indicate which arguments are left implicit. For example, with the identity function above, if we write my_id (3 + 4) this will be interpreted as @my_id _ (3 + 4). Then the placeholder _ will be filled in by the elaborator to get @my_id ℕ (3 + 4). Lean also supports higher-order unification. This allows us, for example, to leave the type family of a transport implicit. For example (▸ denotes transport):
    open eq
    variables (A : Type) (R : A → A → Type)
    variables (a b c : A) (f : A → A → A)
    example (r : R (f a a) (f a a)) (p : a = b) : R (f a b) (f b a) :=
    p ▸ r
Or the following lemma about transport in sigma-types:
    open sigma sigma.ops eq
    variables {A : Type} {B : A → Type} {C : Πa, B a → Type} {a₁ a₂ : A}

    definition my_sigma_transport (p : a₁ = a₂) (x : Σ(b : B a₁), C a₁ b) :
      p ▸ x = ⟨p ▸ x.1, p ▸D x.2⟩ :=
    eq.rec (sigma.rec (λb c, idp) x) p

    set_option pp.notation false
    check @my_sigma_transport
    -- my_sigma_transport :
    -- Π {A : Type} {B : A → Type} {C : Π (a : A), B a → Type} {a₁ a₂ : A}
    --   (p : eq a₁ a₂) (x : sigma (C a₁)),
    --   eq (transport (λ (a : A), sigma (C a)) p x)
    --      (dpair (transport B p (pr₁ x)) (transportD C p (pr₁ x) (pr₂ x)))
Here the first transport is along the type family λa, Σ(b : B a), C a b. The second transport is along the type family B, and p ▸D is a dependent transport, which is a map C a₁ b → C a₂ (p ▸ b). The check command shows these type families. Also, in the proof we don't have to give the type families of the inductions eq.rec and sigma.rec explicitly. However, for nested inductions higher-order unification becomes expensive quickly, and it is better to use pattern matching or the induction tactic, discussed below.
3 & 4. Lean supports overloading and coercions. Some examples:
    open eq is_equiv equiv equiv.ops
    example {A B : Type} (f : A ≃ B) {a : A} {b : B} (p : f⁻¹ b = a) : f a = b :=
    (eq_of_inv_eq p)⁻¹
In this example f is in the type of equivalences between A and B and is coerced into the function type. Also, ⁻¹ is overloaded to mean both function inverse and path inverse.
5. Lean has type classes, similar to Coq's and Agda's, but in Lean they are very tightly integrated into the elaboration process. We use type class inference for inverses of functions: we can write f⁻¹ for an equivalence f : A → B, and then Lean will automatically try to find an instance of the type is_equiv f.
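A minimal sketch of this (here the instance is simply supplied as a hypothesis, so class inference finds it immediately):

    open eq is_equiv
    -- elaborating f⁻¹ triggers a class search for is_equiv f, which finds H
    example {A B : Type} (f : A → B) [H : is_equiv f] (b : B) : A := f⁻¹ b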
Here’s an example, where we prove that the type of natural transformations between two functors is a set.
    import algebra.category.nat_trans
    open category functor is_trunc
    example {C D : Precategory} (F G : C ⇒ D) : is_hset (nat_trans F G) :=
    is_trunc_equiv_closed 0 !nat_trans.sigma_char
In the proof we only have to show that the type of natural transformations is equivalent to a sigma-type (nat_trans.sigma_char) and that truncatedness respects equivalences (is_trunc_equiv_closed). The fact that the sigma-type is a set is then inferred by type class resolution.
The brackets [...] specify that an argument should be inferred by type class inference.
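For example, a definition can take a class argument in brackets (a minimal sketch; has_add is the core arithmetic type class):

    -- H is not passed explicitly at call sites; class inference finds it
    definition double {A : Type} [H : has_add A] (a : A) : A := a + a

A call like double (3 : ℕ) then elaborates by class inference finding the has_add ℕ instance.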
6. There are multiple ways to write readable proofs in Lean, which are then converted to proof terms by the elaborator. One thing you can do is define functions using pattern matching. We already saw an example above with sum.code. In contrast to Coq and Agda, these expressions are not part of the kernel. Instead, they are 'compiled down' to basic recursors, keeping Lean's kernel simple, safe, and, well, 'lean.' Here is a simple example. The print command shows how my_inv is defined internally.
    open eq
    definition my_inv {A : Type} : Π{a b : A}, a = b → b = a
    | my_inv (idpath a) := idpath a

    print my_inv
    -- definition my_inv : Π {A : Type} {a b : A}, a = b → b = a :=
    -- λ (A : Type) {a b : A} (a_1 : a = b), eq.cases_on a_1 (idpath a)
Here are some neat examples with vectors.
    open nat
    inductive vector (A : Type) : ℕ → Type :=
    | nil {} : vector A zero
    | cons   : Π {n}, A → vector A n → vector A (succ n)

    open vector
    -- some notation. The second line allows us to use
    -- [1, 3, 10] as notation for vectors
    notation a :: b := cons a b
    notation `[` l:(foldr `,` (h t, cons h t) nil `]`) := l

    variables {A B : Type}

    definition map (f : A → B) : Π {n : ℕ}, vector A n → vector B n
    | map []     := []
    | map (a::v) := f a :: map v

    definition tail : Π {n : ℕ}, vector A (succ n) → vector A n
    | n (a::v) := v
    -- no need to specify "tail nil", because that case is
    -- excluded because of the type of tail

    definition diag : Π{n : ℕ}, vector (vector A n) n → vector A n
    | diag nil             := nil
    | diag ((a :: v) :: M) := a :: diag (map tail M)
    -- no need to specify "diag (nil :: M)"

    eval diag [[(1 : ℕ), 2, 3], [4, 5, 6], [7, 8, 9]]
    -- we need to specify that these are natural numbers
    -- [1,5,9]
You can use calculations in proofs, for example in the following construction of the composition of two natural transformations:
    import algebra.category
    open category functor nat_trans
    variables {C D : Precategory} {F G H : C ⇒ D}

    definition nt.compose (η : G ⟹ H) (θ : F ⟹ G) : F ⟹ H :=
    nat_trans.mk
      (λ a, η a ∘ θ a)
      (λ a b f, calc
        H f ∘ (η a ∘ θ a) = (H f ∘ η a) ∘ θ a : by rewrite assoc
                      ... = (η b ∘ G f) ∘ θ a : by rewrite naturality
                      ... = η b ∘ (G f ∘ θ a) : by rewrite assoc
                      ... = η b ∘ (θ b ∘ F f) : by rewrite naturality
                      ... = (η b ∘ θ b) ∘ F f : by rewrite assoc)
There are various notations for forward reasoning. For example, the following proof is from the standard library. In the proof below we use the notation `p > 0`, which is interpreted as the inhabitant of the type p > 0 in the current context. We also use the keyword this, which names the term constructed by the previous unnamed have expression.
    definition infinite_primes (n : nat) : {p | p ≥ n ∧ prime p} :=
    let m := fact (n + 1) in
    have m ≥ 1,     from le_of_lt_succ (succ_lt_succ (fact_pos _)),
    have m + 1 ≥ 2, from succ_le_succ this,
    obtain p `prime p` `p ∣ m + 1`, from sub_prime_and_dvd this,
    have p ≥ 2, from ge_two_of_prime `prime p`,
    have p > 0, from lt_of_succ_lt (lt_of_succ_le `p ≥ 2`),
    have p ≥ n, from by_contradiction
      (suppose ¬ p ≥ n,
        have p < n,     from lt_of_not_ge this,
        have p ≤ n + 1, from le_of_lt (lt.step this),
        have p ∣ m,     from dvd_fact `p > 0` this,
        have p ∣ 1,     from dvd_of_dvd_add_right (!add.comm ▸ `p ∣ m + 1`) this,
        have p ≤ 1,     from le_of_dvd zero_lt_one this,
        absurd (le.trans `2 ≤ p` `p ≤ 1`) dec_trivial),
    subtype.tag p (and.intro this `prime p`)
7. Lean has a tactic language, just as Coq does. You can write begin ...tactics... end anywhere in a term to synthesize that subterm with tactics. You can also use by *tactic* to apply a single tactic (which we used in the above calculation proof). The type of the subterm you want to synthesize is the goal, and you can use tactics to solve the goal or to apply backwards reasoning on it. For example, if the goal is f x = f y you can use the tactic apply ap f to reduce the goal to x = y. Another tactic is the exact tactic, which means you give the term explicitly.
The proof language offers various mechanisms to pass back and forth between the two modes. You can begin using tactics anywhere a proof term is expected, and, conversely, you can enter structured proof terms while in tactic mode using the exact tactic.
A very simple tactic proof is
    open eq
    variables {A B C : Type} {a a' : A}
    example (g : B → C) (f : A → B) (p : a = a') : g (f a) = g (f a') :=
    begin
      apply ap g,
      apply ap f,
      exact p
    end
which produces the proof term ap g (ap f p). In the Emacs mode of Lean you can see the intermediate goals by moving your cursor to the desired location and pressing Ctrl-C Ctrl-G.
Lean has too many tactics to discuss here, although not as many as Coq. Here are some neat examples of tactics in Lean.
The cases tactic can be used to destruct a term of an inductive type. In the following examples, it destructs the path p to reflexivity. In the second example it uses that succ is injective, since it is a constructor of nat, so that it can still destruct p. It also doesn't matter whether the free variable is on the left-hand side or the right-hand side of the equality. In the last example it uses that different constructors of an inductive type cannot be equal.
    open nat eq
    example {A : Type} {x y : A} (p : x = y) : idp ⬝ p = p :=
    begin cases p, reflexivity end

    example (n m l : ℕ) (p : succ n = succ (m + l)) : n = m + l :=
    begin cases p; reflexivity end

    example (n : ℕ) (p : succ n = 0) : empty :=
    by cases p
The rewrite tactic is useful for applying many rewrite rules in sequence. It is modeled after the rewrite tactic in SSReflect. For example, in the following proof of Eckmann-Hilton we rewrite the goal 4 times with theorems to solve the goal. The notation ▸* simplifies the goal, similar to Coq's simpl, and -H means to rewrite using H⁻¹.
    open eq
    theorem eckmann_hilton {A : Type} {a : A} (p q : idp = idp :> a = a) : p ⬝ q = q ⬝ p :=
    begin
      rewrite [-whisker_right_idp p, -whisker_left_idp q, ▸*, idp_con,
               whisker_right_con_whisker_left p q]
    end
The induction tactic performs induction. For example, if the goal is P n for a natural number n (which need not be a variable), then you can use induction n to obtain the two goals P 0 and P (k + 1) assuming P k. What's really neat is that it supports user-defined recursors. So you can define the recursor of a HIT, tag it with the [recursor] attribute, and then it can be used by the induction tactic. Even more: you can arrange it in such a way that the tactic uses the nondependent recursion principle whenever possible, and otherwise the dependent recursion principle.
    import homotopy.circle types.int
    open circle equiv int eq pi

    definition circle_code (x : S¹) : Type₀ :=
    begin
      induction x, -- uses the nondependent recursor
      { exact ℤ},
      { apply ua, exact equiv_succ} -- equiv_succ : ℤ ≃ ℤ with underlying function succ
    end

    definition transport_circle_code_loop (a : ℤ) :
      transport circle_code loop a = succ a :=
    ap10 !elim_type_loop a
    -- ! is the "dual" of @, i.e. it inserts
    -- placeholders for explicit arguments

    definition circle_decode {x : S¹} : circle_code x → base = x :=
    begin
      induction x, -- uses the dependent recursor
      { exact power loop},
      { apply arrow_pathover_left, intro b, apply concato_eq,
        apply pathover_eq_r,
        rewrite [power_con, transport_circle_code_loop]}
    end
In the future there will be more tactics, providing powerful automation such as a simplifier and a tableaux theorem prover.
The HoTT Library
I've been working extensively this year on the HoTT library of Lean, with the help of Jakob von Raumer and Ulrik Buchholtz. The HoTT library is coming along nicely. We have most things from the first 7 chapters of the HoTT book, and some category theory. In this file there is a summary of what is in the HoTT library, sorted by where it appears in the HoTT book.
I've developed the library partly by porting it from the Coq-HoTT library, since Coq's syntax is pretty close to Lean's. Other things I've proven by following the proofs in the book, or just by proving them myself.
In the library we make heavy use of the cubical ideas Dan Licata wrote about earlier this year. For example, we have the type of pathovers, which are heterogeneous paths lying over another path. These can be written as b =[p] b' (or pathover B b p b' if you want to give the fibration explicitly). They are used for the recursion principle of HITs
    definition circle.rec {P : circle → Type} (Pbase : P base)
      (Ploop : Pbase =[loop] Pbase) (x : circle) : P x
and for the characterization of equality in sigma-types
    variables {A : Type} {B : A → Type}
    definition sigma_eq_equiv (u v : Σa, B a) :
      (u = v) ≃ (Σ(p : u.1 = v.1), u.2 =[p] v.2)
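One direction of this equivalence is (roughly) the map sigma_eq from the library; a hedged usage sketch:

    open sigma sigma.ops eq
    -- build an equality of pairs from an equality of first components
    -- and a pathover between the second components
    example {A : Type} {B : A → Type} {a a' : A} {b : B a} {b' : B a'}
      (p : a = a') (q : b =[p] b') : ⟨a, b⟩ = ⟨a', b'⟩ :=
    sigma_eq p q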
There are also squares and squareovers, which are presentations of higher equalities. One neat example which shows the strength of these additional structures is the following. I had a term given by the concatenation of the top sides of four squares in a diagram, I could fill each of the four squares, and I wanted to rewrite this term to the concatenation of the bottom sides.
If every square is formulated as a path between compositions of paths, then this rewriting takes a lot of work. You have to rewrite any individual square (assuming it is formulated as “top equals composition of the other three sides”), then you have to perform a lot of associativity steps to be able to cancel the sides which you get in the middle of the diagram.
Or you can formulate and prove a dedicated lemma for this rewrite step, which also takes quite some work.
However, because I had formulated these two-dimensional paths as squares, I could first horizontally compose the four squares. This gives a single square whose top is the original term and whose bottom is the desired term. Then you can apply the theorem that the top of a square equals the composite of the other three sides, which gives exactly the desired rewrite rule. Similar simplifications from the use of pathovers and squares occur all over the library.
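To make this concrete, here is a self-contained sketch (using our own my_square rather than the library's definitions) of "the top of a square equals the composite of the other three sides":

    open eq
    -- a square with top t, bottom btm, left l, right r, as an inductive family
    inductive my_square {A : Type} {a : A} :
      Π{b c d : A}, a = b → c = d → a = c → b = d → Type :=
    | ids : my_square idp idp idp idp

    -- the top equals left, then bottom, then the inverse of right
    definition top_eq {A : Type} {a b c d : A} {t : a = b} {btm : c = d}
      {l : a = c} {r : b = d} (s : my_square t btm l r) : t = l ⬝ btm ⬝ r⁻¹ :=
    begin induction s, reflexivity end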
HITs
Lastly, I want to talk about our way of handling HITs in Lean. In Coq and Agda, the way to define a HIT is to make a type with the point constructors and then (inconsistently) assume that this type contains the right path constructors and the correct induction principle and computation rules. Then we forget that we assumed something inconsistent. This is often called Dan Licata's trick. It works well, but it's not so clean, since it assumes an inconsistency.
In Lean, we add two specific HITs as a kernel extension. These HITs are the quotient and the n-truncation. The formation rule, the constructors and the recursion principle are added as constants, and the computation rule on the points is added as a definitional equality to the kernel (the computation rule for paths is an axiom).
The quotient (not to be confused with a set-quotient) is the following HIT (this is not Lean syntax):
    HIT quotient {A : Type} (R : A → A → Type) :=
    | i : A → quotient R
    | p : Π{a a' : A}, R a a' → i a = i a'
So in a quotient we specify the type of points, and we can add any type of paths to the quotient.
In Lean we have the following constants for quotients. The computation rule for points, rec_class_of, is just defined as reflexivity, to illustrate that the reduction rule is added to the Lean kernel.
    open quotient
    print quotient
    print class_of
    print eq_of_rel
    print quotient.rec
    print rec_class_of
    print rec_eq_of_rel
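As a small illustration, here is a hedged sketch of defining the interval as a quotient (my_interval and R are hypothetical names; class_of and eq_of_rel are the quotient constructors above):

    open quotient bool
    -- relate ff to tt; the quotient then glues in a path between them
    inductive R : bool → bool → Type :=
    | mk : R ff tt

    definition my_interval : Type := quotient R
    definition interval_zero : my_interval := class_of R ff
    definition interval_one  : my_interval := class_of R tt
    definition interval_seg  : interval_zero = interval_one := eq_of_rel R R.mk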
Using these HITs, we can define all HITs in Chapters 1-7 of the book. For example, we can define the sequential colimit of a type family A : ℕ → Type with functions f : Π{n}, A n → A (n + 1) by taking the quotient of the type Σn, A n with the relation R defined as the following inductive family:
    inductive R : (Σn, A n) → (Σn, A n) → Type :=
    | Rmk : Π{n : ℕ} (a : A n), R ⟨n+1, f a⟩ ⟨n, a⟩
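The colimit itself is then just the quotient by this relation (a minimal sketch continuing the assumptions above; my_seq_colim is a hypothetical name):

    open quotient
    -- A : ℕ → Type and f : Π{n}, A n → A (n + 1) are assumed as above
    definition my_seq_colim : Type := quotient R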
In similar ways we can define pushouts, suspensions, spheres and so on. But you can define more HITs with quotients. In my previous blog post I wrote how to construct the propositional truncation using just quotients. In fact, Egbert Rijke and I are working on a generalization of this construction to construct all n-truncations, which means we can even drop the n-truncations as a primitive HIT.
Quotients can even be used to construct HITs with 2-path constructors. I have formalized a way to construct quite general HITs with 2-constructors, as long as they are nonrecursive. The HIT I constructed has three constructors. The point constructor i and path constructor p are the same as for the quotient, but there is also a 2-path constructor, which can equate
- 1-path constructors;
- reflexivity;
- ap i p, where i is the point constructor and p is a path in A;
- concatenations and/or inverses of such paths.
Formally, the definition of the HIT is as follows. Given a type A and a binary (type-valued) relation R on A, let T be the "formal equivalence closure" of R, i.e. the following inductive family:
    inductive T : A → A → Type :=
    | of_rel  : Π{a a'}, R a a' → T a a'
    | of_path : Π{a a'}, a = a' → T a a'
    | symm    : Π{a a'}, T a a' → T a' a
    | trans   : Π{a a' a''}, T a a' → T a' a'' → T a a''
Note that if we're given a map p : Π{a a' : A}, R a a' → i a = i a', we can extend it to a map p* : Π{a a' : A}, T a a' → i a = i a' by interpreting, e.g., p* (trans t t') as p* t ⬝ p* t'. Now if you're also given a "relation on T", i.e. a family Q : Π{a a' : A}, T a a' → T a a' → Type, then we can construct the following type just using quotients:
    HIT two_quotient A R Q :=
    | i : A → two_quotient A R Q
    | p : Π{a a' : A}, R a a' → i a = i a'
    | r : Π{a a'} (r s : T a a'), Q r s → p* r = p* s
This two_quotient allows us to construct at least the following HITs:
- The torus (the formulation with two loops and a 2-constructor)
- The reduced suspension
- The groupoid quotient
So in conclusion, from quotients you can define the most commonly used HITs. You can (probably) not define every possible HIT this way, though, so for the other HITs you still have to use Dan Licata's trick. I should note here that there is currently no private modifier in Lean yet, so we cannot do Dan Licata's trick very well in Lean right now.
These are all the topics I wanted to cover in this blog post. I tried to cover as much of Lean's functionality as possible, but there are still features I haven't talked about. For more information, you can look at the Lean tutorial. Keep in mind that the tutorial is written for the standard library, not the HoTT library, and you might want to skip the first couple of chapters explaining dependent type theory. The Quick Reference chapter at the end of the tutorial (or in pdf format) is also very useful. Also feel free to ask any questions in the comments, in the lean-user google group or in an issue on Github. Finally, if you're willing to help with the HoTT library, that would be very much appreciated!
Thanks Floris — this is really terrific!
Thanks so much for this intro! I’m still making my way through the tutorial, but I have one brief question: does Lean have, or will it one day have, a tactic language that lets you write your own tactics, like Coq does?
You can already define your own custom tactics. However, this feature is still limited, since there is no way yet to do something like “match goal” as in Coq. This is something which is planned, though.
Here are some tactics defined by default:
https://github.com/leanprover/lean/blob/master/hott/init/tactic.hlean#L145-L151
Good to hear, thanks. It looks from your link as though the “tactic language” of lean is reflected into its ordinary type theory, so that defining a tactic is done by simply defining a term? This is a nice approach, one reason being that I hope it will lead to a more strongly typed tactic language; one of the most confusing things (to me) about Coq’s Ltac is its untyped nature and how it mixes terms, variables, tactics, and so on.
Could you explain how adding an abstract type of tactics preserves all the nice meta properties?
Have you seen Mtac?
http://plv.mpi-sws.org/mtac/
I haven’t tried it, but it sounds like something you might like.
No, I hadn’t. Thank you!
@ Mike: Exactly, defining a tactic is just defining a term of type tactic. Then if you apply that tactic, Lean will reduce your definition to a basic tactic and apply that (or raise an error if your definition does not reduce to a basic tactic). For example, do 3 (apply and.intro) reduces to apply and.intro; apply and.intro; apply and.intro, and that results in some C++ code being executed to apply apply and.intro three times.

I should note here that the semicolon has a different meaning in Lean than in Coq. t1; t2 in Lean just applies tactic t1 and then applies tactic t2. Since most tactics only act on the first goal, this is not the semantics of Coq. The meaning of t1; t2 in Coq is the same as t1: t2 in Lean, so with a colon instead of a semicolon.

@ Bas: The type tactic is just an inductive type with a single inhabitant, builtin. Every tactic is defined to be builtin; this is where the story ends in the type theory of Lean. This is all done in the file I linked to in my previous comment. On top of that, there is some code associated with each of the tactics t defined in that file, which is executed if you write something like "by t".

How does one use Licata's trick for HITs in Lean? Is there a "private" modifier that can prevent the use of the forbidden induction principle outside the defining module?
There is currently no such “private” modifier, so you cannot use Licata’s trick very well in Lean yet. I’ve added this comment also to my blog post.
However, I hope I convinced you that for all basic HITs we don’t need Licata’s trick. But it’s too bad we cannot experiment with other HITs (unless you first give a construction using quotients).
What HITs are used in the HoTT libraries of Coq or Agda which do not fall in the list of HITs we can make in Lean already? Can you give some examples of HITs which
* are recursive, other than the n-truncations, or
* have 3-path constructors (or higher)?
3-path constructors would be a nightmare; I don't think we've used them for anything yet. The main example of a recursive HIT other than truncations that we've used so far is localization at a family of maps. There's also spectrification, but I hope that that can also be constructed as a sequential colimit.
(Well, to be precise, the construction of localization that’s actually used in the HoTT/Coq library does use n-path constructors for all n, as described here. But that’s just to avoid funext; with funext it can be done using only (recursive) 1-path constructors.)
Good to hear that 3-path constructors are not used. I completely agree that they would be a nightmare to work with.
As you know, Egbert has a construction of (certain) localizations as a sequential colimit, and Egbert and I are trying to formalize it in Lean (this is still in progress). At first sight, spectrification looks quite similar to localization, in the sense that you want to make certain maps into equivalences. It's good to hear that most HITs can (probably) be implemented in Lean using quotients.
Egbert’s localizations as a sequential colimit only work for localizations at omega-compact types, so I don’t really regard it as solving the problem.
Of course, there are also all the recursive HITs that appear in chapters 10-11 of the book: the cumulative hierarchy, the Cauchy reals, and the surreal numbers. I didn’t mention those at first because you said “used in the Coq and Agda libraries”, and while the Coq library contains the cumulative hierarchy and the surreals, they aren’t really “used” for anything else. However, I do think they’re all important too, and I expect more examples will be found in the future.
What would you say are Lean’s “selling points”? That is, suppose I’m choosing a proof assistant to use for a new project; why would I choose Lean?
For example, one reason to choose Coq is its powerful tactic language; another is its module system. One reason to choose Agda is its support for induction-induction and induction-recursion. (Of course, there are other reasons too in both cases; these are just examples.) What does Lean have that makes it stand out?
Some selling points:
* Lean has a very conservative kernel, which makes it less susceptible to inconsistency proofs like those which have been discovered in Coq and Agda. So far no inconsistencies have been found in Lean (granted, this is also explained by the fact that it has been used much less).
* Lean has good support for readable proofs (at least compared to Coq), such as the have ..., from ..., and calc constructs I've shown in my post.
* Currently Leo de Moura (the Lean developer) and Daniel Selsam (a student at Stanford) are working together on a generic tableau theorem prover for Lean, similar to the "blast" tactic in Isabelle. Once they have implemented the blast tactic, this should be a huge selling point, taking care of (at least) the huge amount of tedious but basic proofs. Much more automation is planned for Lean in the future.
Thanks! I look forward to the automation. Do you know how HoTT-compatible it will be? (I’m still kind of sad that everything in Lean’s standard library has to be manually ported to its HoTT library.)
It’s worth noting that one can implement a certain amount of “readable proof” syntax in Coq and Agda in userspace just using mixfix syntax. The HoTT/Agda library, for instance, uses a syntax for chaining equalities very much like the Lean “calc”. We haven’t felt the need for anything like that in Coq (because rewrite is generally easier), but we could probably implement it about as well there.
The automation should be fully HoTT-compatible. But there is a difference between “every feature it has in the standard library works in the HoTT library” and “every feature useful for HoTT is implemented.”
But I'm not exactly sure how the automation would be used in different ways in HoTT and the standard library. Of course, using the proof-irrelevant Prop, you can sometimes do more in the standard library. But there is a feature planned to identify all elements in a mere proposition, as if they were definitionally equal (that is, Lean will automatically cast along the equalities between terms in a mere proposition). This is also useful in the standard library if we have a type which has at most one element. I do not know the exact details, though.
Does the mixfix notation in Agda work only for equality, or also for other relations? In Lean, you can do it for any relation, which you have previously specified to be transitive (using the [trans] attribute). So you can also do it for equivalences, homotopies, functors (composing them), natural transformations, and so on.
Example:
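For instance, a calc block can chain equivalences instead of equalities (a minimal sketch, assuming equiv.trans carries the [trans] attribute as in the HoTT library):

    open equiv
    example {A B C : Type} (f : A ≃ B) (g : B ≃ C) : A ≃ C :=
    calc A ≃ B : f
         ... ≃ C : g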
Identifying all elements in a proposition definitionally implies UIP, so it's not HoTT-compatible. For any p : a = a, we have ⟨a, p⟩ and ⟨a, refl a⟩ living in Σ(x : A), a = x, which is a mere proposition (being contractible); but if ⟨a, p⟩ and ⟨a, refl a⟩ are definitionally equal then so are p and refl a. But you said "as if" they were definitionally equal, so maybe what's planned is a way to work around that?
The mixfix notation in the HoTT/Agda library as defined works only for equality, but one could of course define similar notations for other relations. It would be more work than simply declaring them as transitive, but it ought to work.
I didn't mean that we would definitionally identify all elements. Maybe I shouldn't have expanded on that point, because I don't know exactly how it is going to work. What I imagine is this. Suppose x y : A and A is a mere proposition. If the current goal contains x and some automation procedure has a rewrite rule involving y, then this will match with the current goal, and to apply the rule it will transport along the equality x = y.
To respond to your comment about porting the libraries: it would indeed be great if this could be done automatically, but this is not very realistic. First of all, originally we planned to have the HoTT library and the standard library be one single library. But even in this case, the theorems about natural numbers would be very different for both components. For example, in the standard library you have a Prop-valued theorem, while in the HoTT library this would be the type-valued variant. Moreover, in the standard library ≤ is a map into Prop, while in the HoTT library ≤ is a map into Type which is provably a mere proposition. Also, in the HoTT library, you might want a definition of (say) commutativity of addition which computes in a good way, while in the standard library this is not important. All these small differences make an automatic port hard.
Ah, I see what you mean about automation of prop-transport. That makes sense; thanks for explaining!
I still don't understand the difficulty of an automatic port. True, "Prop" means something different in the two libraries, but it seems that for most practical purposes, the two "Prop"s behave very similarly, to the point where the proofs of most theorems ought to be almost the same, if corresponding lemmas have the same names. A theorem stated with ∃ makes sense in both cases (in HoTT, ∃ means the (-1)-truncation of Σ), and the different true theorem stated with Σ also makes sense in both cases (at least, it does in Coq; Lean's Σ might need a universe lift or something). Typeclasses ought to carry through the prop-ness of props automatically for the most part, without the need to modify the proofs much.
I'm not saying it would be impossible to write a program porting the files automatically. But I do think that doing a manual port of *all* files five times will take less time than writing a program to do it automatically. There are just many small differences between the standard library and the HoTT library making this tricky. For example, in the HoTT library we use + as the sum of two types, which we use much less often in the standard library. Hence in the HoTT library this notation is always available, but in the standard library you have to open a namespace for it. Similarly, the theorem inv_inv in the HoTT library means a statement about paths, which in the standard library means a statement about elements in a group.

Well, those are choices of conventions that could be made differently. Do you have any reason to think that there's an upper bound (like five) on the number of times you'll want to port files? I would think that as the standard library grows, you would want to keep incorporating its new features into the HoTT library — or at least you would if it were easy to do so.
That’s true. We could do things differently to make the port easier. But that might require compromises which would be inconvenient for other reasons. I don’t have a good reason to believe we only need 5 ports. But I do think that every individual file will slowly settle down, so that re-porting it after some point is not necessary anymore.
Even though there is no automatic script to port, we have a simple script which uses regular expressions to translate most notations and declarations from the standard library to the HoTT library. Nothing sophisticated, but using that it only takes a couple of minutes to port a file (usually less).
But I do think you're underestimating how much time it takes to write a fully-automatic script, and whether updating the script might take more time than just porting manually. Another example of a difference between the libraries: sometimes we just want to give a different proof in the HoTT library than in the standard library. Compare the standard library proof that 0 + n = n (which is more readable) with the HoTT library proof (which has better computational content):
https://github.com/leanprover/lean/blob/master/library/data/nat/basic.lean#L111
https://github.com/leanprover/lean/blob/master/hott/types/nat/basic.hlean#L112
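For a flavor of the difference, here is a minimal sketch in the style of the HoTT-library proof (my_zero_add is a hypothetical name); the successor case reduces judgmentally to ap succ of the recursive call:

    open nat eq
    definition my_zero_add : Π (n : ℕ), 0 + n = n
    | my_zero_add zero     := idp
    | my_zero_add (succ n) := ap succ (my_zero_add n)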
That said, I don’t find this discussion particularly interesting. Personally, I’m incapable of writing a fully automatic script, and I think that the Lean developers have better ways to spend their time. However, I am willing to port files from the standard library to the HoTT library, especially if the standard library does something which is also useful for HoTT.
Note that I do not disagree that it would be great to have a fully-automatic script.
Ok, fair enough. (-:
(It ought also to be possible to declare a single mixfix notation that would work for any transitive relation, but it wouldn’t look as pretty, since you’d have to do something like choose a particular symbol that would stand in for the “=” regardless of what the relation was, and then pass the relation in question as a parameter somewhere else.)
By the way, you can also combine different relations in Lean. The following is a proof of 0 < 8. The notation dec_trivial runs the decision procedure for a decidable proposition (or type).

Nice!!
In addition to these, here are some of the things I like:
* Lean’s elaborator is quite powerful. In particular, the handling of type classes is well-integrated with the elaboration process. This makes it possible to work with algebraic structures and the algebraic hierarchy in a natural way.
* I find Lean’s Emacs mode very pleasant and enjoyable to use.
* When proofs and definitions get complicated enough, there are performance problems in all systems. It is hard to find head-to-head comparisons, but I am sure Lean stacks up very well in this regard. For example, parallelism is fundamentally built-in to the system.
* Lean’s syntax is very nice.
* The features are generally well designed: namespaces, sectioning notions, annotations, etc.
* The function definition package is very nice.
* The mechanisms for defining and extending structures are very nice.
* The rewrite tactic is very nice (though not as strong as SSReflect’s).
* The standard library is designed to support both constructive and classical reasoning, as well as interactions between the two, and it does this in a nice way. (We should also be able to take advantage of this in HoTT, i.e. support classical reasoning where appropriate.)
The system is quite new, and it is exciting to see it evolve. We have high hopes for powerful automation, code extraction, a fast evaluator, and a powerful library and so on. Only time will tell how well these hopes pan out, but the current system provides a solid basis for growth.
Thanks for that list! I can’t help noticing a lot of entries on it of the form “X is very nice”. Can you be more specific about any of them? For instance, is there something particular that you can do with type classes in Lean but not in Coq?
Mike, I’m sorry to be slow to reply! I didn’t notice your response.
Regarding type classes, I can’t be definitive about the advantages over Coq, because I have never used type classes in Coq. I have used axiomatic type classes in Isabelle and canonical structures in SSReflect, though. Isabelle’s type classes manage a complex algebraic hierarchy very efficiently, but the problem there is simpler, since structures are not first class objects; you can use them to manage concrete groups, rings, and fields, but they do not work for abstract algebra. Also, the things you can do are very restricted; e.g. a type class cannot depend on more than one type parameter. SSReflect gives you “real” abstract algebra, but canonical structures are hard to work with, and I don’t think Coq’s mechanisms are as efficient. At least, in the SSReflect library, the algebraic hierarchy is not as elaborate as in Isabelle.
What makes type classes in Lean powerful is that class resolution is an integral part of the elaboration algorithm. (The algorithm is described here: http://arxiv.org/abs/1505.04324, but we are working on an updated and expanded version of that paper.) You don’t want class resolution to fire too early, since the constraints you are trying to solve may have metavariables (holes arising from other implicit arguments) that need to be solved first. On the other hand, you don’t want to solve them too late, because the solutions provide information that can be used to solve other constraints. I have been told that Coq’s algorithm separates the elaboration and class resolution phase, i.e. first it elaborates, then sends all the remaining class constraints to a separate class inference algorithm. I have been told that that often causes problems. Lean’s elaboration algorithm does not separate the two, and solves class resolution constraints when appropriate, as part of the same process.
There were “performance” issues that Leo had to address. There was a problem with slowdowns with long chains of coercions, and Leo managed to solve that. He also caches results, so that once a problem is solved, the same problem doesn’t have to be solved again and again.
The net effect is that it simply works. Much of our development of the algebraic hierarchy was modeled after Isabelle’s; we have everything from semigroups and additive structures and various kinds of orders all the way up to ordered rings and ordered fields. Setting it up and declaring instances was easy and natural. Everything works the way you expect it to, and it is a pleasure to use. I will have to leave it to others, however, to weigh in as to whether one can say the same about Coq.
As far as the other features I like, the best I can do is refer you to the tutorial:
https://leanprover.github.io/tutorial
The function definition package is described in Chapter 7, mechanisms for defining structures are in Chapter 10, the rewrite tactic is described in Chapter 11.6. Various aspects of sections, namespaces, etc. are described in Chapter 5 and 8, and the Emacs mode is described in Chapter 5.
Even though it is a really minor thing, I find tab completion in Emacs mode to be extremely helpful. We have adopted naming conventions that makes it easy to guess the prefix of a theorem you want to apply, and then hitting tab gives you a list of options. It is surprisingly effective; I rarely have to look around to find what I need.
I hope that helps… Let me know if I can expand on anything.
Popping out some comment nesting levels, I'm curious what is "more computational" about the hott-library proof that 0 + n = n — and why we care, given that ℕ is a set, so that this proof inhabits a proposition? Some propositions certainly have computational importance, like isequiv, but an equality in ℕ doesn't have "ingredients" that can be extracted like the inverse of an equivalence. My only guess is that maybe transporting along this proof can be reduced to simpler transports somehow and thus is easier to reason about?
In the HoTT-library proof, the proof zero_add (succ n) reduces to ap succ (zero_add n). This is not the case in the standard library proof, where transports are used. This indeed matters if you transport over this equality, or take dependent paths over this equality.

Here is an example where this matters, for the proof succ_add. Suppose we have a sequence of types A : ℕ → Type and maps f : Π{n}, A n → A (n + 1). Then we might want to define the repeated composite A n → A (n + m) and then prove that it interacts correctly with f. Except this needs to be a dependent path, since the LHS lives in A ((n + 1) + m) and the right hand side in A ((n + m) + 1). So we have to transport over succ_add n m : (n + 1) + m = (n + m) + 1, and the proof becomes simpler if succ_add has nice computational properties (similar to zero_add). This is what happens here in the formalization project of Egbert and me:
). This is what happens here in the formalization project of Egbert and me:https://github.com/EgbertRijke/sequential_colimits/blob/master/sequence.hlean#L106-L114
Of course, we can also define rep_f without the computational content of succ_add, for example by transporting along the fact that equalities in ℕ are mere propositions. However, later we prove properties about rep_f, which will become more complicated if we would do this.

I see.
I bet that that could be fixed, though. For one thing, I don't find !nat.add_zero ▸ !nat.add_zero to be any more "readable" than reflexivity. Wouldn't it be even more readable to write the base case of the standard library proof like this?

In any case, though, there shouldn't be a problem with the base case, since the concatenation of two reflexivities is again reflexivity, and transport along reflexivity is the identity, both judgmentally.

For the induction step, I see two problems: concatenations with reflexivity (add_zero and add_succ are just names for reflexivity), and the occurrence of transports rather than aps. It seems to me that the latter could be solved by defining ap in terms of transport. For the former, wouldn't it be possible to modify calc so that if one step is reflexivity then it doesn't insert any unnecessary concatenations?
Yes, I agree that all these things are possible to change. (Most likely, at least. I haven't seen the code for calc.)

I'm not sure if your last feature is desirable, though. Suppose I write the following; then I might not want to normalize the first step to reflexivity. (Of course it is possible to add some check to see whether it is too expensive to normalize the term.)
My thought was that the user would give some indication of which equalities should be omitted as reflexivity, like writing ≡ instead of =.
That is a good idea!
Update. You can now run the HoTT library of Lean in your browser:
https://leanprover.github.io/tutorial/?live&hott
Is HoTT consistent with the following assumption?
Proof extensionality: forall P Q: Prop, (P Q) -> (P = Q).
What do you mean by “(P Q)”?
If the (P Q) was supposed to be P ↔ Q, then yes, it is definitely consistent in HoTT. Moreover, it is provable in HoTT, since it's a special case of univalence (because two propositions which imply each other are equivalent as types).
Yes. (P Q) means (P Q). I don’t know why it does not print out properly.
Well, I think I just have a difficulty of typing that symbol.
To see how to type math symbols, go here.
I noticed many of the links are broken, now that you are using a fork of Lean.
Thanks for the comment. I’ve fixed the links.
Good news for HoTT in Lean 3! There’s work[1] to port the Lean 2 library by addressing[2] the problem of singleton elimination, which means more verbose proofs but doesn’t require changes to Lean.
[1] https://github.com/leanprover/lean/wiki/HoTT-Library
[2] https://github.com/gebner/hott3#design