A Formalized Interpreter

I’d like to announce a result that might be interesting: an interpreter for a very small dependent type system in Coq, assuming uniqueness of identity proofs (UIP). Because it assumes UIP, it’s not immediately compatible with HoTT, but it seems straightforward to port it to HoTT by explicitly restricting the semantics to h-sets.

The Problem:
In his earlier blog post, Mike Shulman pointed out that a good foundational system for math should be able to handle its own metatheory. In particular, a foundational type theory should be able to interpret itself, modulo size issues. This would be the analogue of set theory defining its own standard model. Mike also pointed out that if HoTT can interpret even a small fragment of itself, then we have a definition of semisimplicial types, by interpreting a syntax representation of the type of n-truncated semisimplicial types, for all n. The same approach seems able to solve many other coherence problems that arise with non-truncated objects.

Unfortunately, but perhaps unsurprisingly, defining an interpreter also seems to run into coherence problems. Mike had them, and he discussed them in his post. I also ran into coherence problems, which is why I resorted to UIP. More on that below. But before you even get to the issue of coherence, there’s a tricky aspect to defining an interpreter that I should discuss, since it’s the problem that I do seem to have solved.

Dependent Types Systems are in a Knot:
An annoying thing about type theory is that everything you’re allowed to say has to make sense. I find it annoying because when I’m trying to figure out something new, I am typically unable to make sense at first; I have to play around with some plausible-seeming nonsense until I understand the details well enough that it technically makes sense. Type theory does not provide a setting for this early development to take place. Luckily there’s paper and mental imagery for that.

When writing an interpreter, the annoyance of everything needing to make sense becomes an actual challenge. From a naïve programming point of view, interpreters should be easy: you analyze the syntax, and in each case you build the appropriate form of object. Done. But in a type system, you’re not building anything unless the expression to build it is well-typed. Interpreted objects will have various types, so to interpret a term as an object of the appropriate type, we need to first interpret that type. And to interpret open terms and types, we need an environment to interpret the variables. But an environment is an element of a context, considered as a type. So we need to interpret contexts first, so that we can interpret terms and types as maps out of the appropriate context. But types have terms in them, and contexts have types in them, so there’s no right order in which to separately define interpreters for contexts, types, and terms, one after another. And the usual style of mutually recursive definition can’t handle one mutually defined function being used in the type of another one. Dependent type theory is tied up in this crazy knot.

It seems that we haven’t yet figured out how to untie this knot, because apparently no one has yet written an interpreter for dependent type theory within dependent type theory. Not if the input is truly just plain old syntax and typing derivations (“raw syntax”).

The Language:
The language to interpret was designed to be the simplest possible language that still has n-truncated semisimplicial types. There is one universe, and pi types. The universe is Tarski-style, in that there’s an El operator to lift universe elements to types. But there are no constructors for type codes. There is application, but no lambda. Also, function terms can only be applied to variables. Thus, a term is just the application of a head variable to a list of argument variables.
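
To fix ideas, here is an illustrative sketch of what such raw syntax could look like. The constructor names are hypothetical (they are not necessarily the ones in the actual development); variables are de Bruijn indices:

```coq
(* Hypothetical sketch of the raw syntax. Variables are de Bruijn
   indices (nat); a term is a head variable applied to a list of
   argument variables. *)
Inductive treeFam : Set :=
| Uv : treeFam                          (* the universe *)
| El : nat -> list nat -> treeFam       (* lift a universe-valued term to a type *)
| Pi : treeFam -> treeFam -> treeFam.   (* dependent function type *)

(* Contexts are lists of types; a term is a head plus argument variables. *)
Definition rawCtx : Set := list treeFam.
Definition rawTerm : Set := (nat * list nat)%type.
```

Note that nothing here is intrinsically well-typed; well-typedness is a separate judgment on this plain syntax.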

The language has no redexes, so there is no need for reduction rules or equality rules. Capture-avoiding substitution is nonetheless defined as usual, and is used in the typing rules. Well actually, substitution is a bit different from usual, since we only need to substitute variables for variables.

The language is interpreted, feature-for-feature, into Coq. In other words, we use a fragment of Coq’s type theory as the model of the object language. No setoids or logical relations or anything. Contexts are interpreted as left-nested sigmas, judgments (with subject omitted) are interpreted as (dependent) function types, and so a derivation is interpreted as a function. This seems to be the sensible notion of “standard interpretation” for a Church-style dependent type system. It’s analogous to the tagless interpreters already known for simply-typed languages.
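
As a minimal sketch of the contexts-as-left-nested-sigmas idea (making no assumptions about the actual development’s definitions):

```coq
(* Sketch: the empty context is interpreted as unit, and extending a
   context G by a type family over G forms a sigma. An environment for
   a context is just an element of its interpretation. *)
Definition Ctx := Type.
Definition Typ := Type.

Definition ctxEmpty : Ctx := unit.
Definition ctxExt (G : Ctx) (T : G -> Typ) : Ctx := {g : G & T g}.

(* An environment for the two-entry context (nat, bool): *)
Check (existT _ (existT _ tt 3) true
        : ctxExt (ctxExt ctxEmpty (fun _ => nat)) (fun _ => bool)).
```

Terms and types then become dependent functions out of such contexts, which is why contexts must be interpreted first.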

Status:
The development of the interpreter itself is done, but it could use prettying up, and I haven’t done anything with it yet. The code is available online. I would welcome any constructive criticism (Classical criticism isn’t good enough. 😉 ), ideas for simple improvements, discussion of related work, etc. I would especially welcome discussions about what’s going wrong that’s preventing us from working with coherence of non-truncated types, and what to do about it.

How it Works:
My interpreter is based on a strongly-typed representation of dependently-typed terms and types; I’ll explain the background. Recall that type theory is tied in a knot. Most of the successful attempts to partially untie the knot use inductive-inductive definitions of the language to interpret. Mike spent most of his post talking about trying to use this approach for HoTT and semisimplicial types. The elimination rule for inductive-inductive types basically gives you (almost?) exactly what you want in order to interpret contexts, types, and terms mutually. But inductive-inductive types are not regular syntax types. They’re kind of like syntax, but terms in this style are intrinsically well-typed, rather than being proven well-typed with a separate derivation. The unfortunate result is that constructing an inductive-inductive representation of an object is about as tricky as just directly constructing the object. And because of that, it seems hard just to interpret from raw syntax to inductive-inductive representations. And of course, it’s hard to define inductive-inductive representations of semisimplicial types.

My interpreter doesn’t use that kind of strongly-typed representation. A major nuisance with the inductive-inductive approach is the need for explicit conversions between represented types that are judgmentally equal in the object language. In fact, for HoTT, this seems to be more than just a nuisance because now we have to worry about judgmental coherences to keep these explicit “judgmental transports” from getting in the way. And yes, you get an infinite regress where you seem to need ever higher coherences.

A Totally Outrageous Solution:
A different kind of strongly-typed representation was developed in Conor McBride’s paper Outrageous but Meaningful Coincidences. The “Outrageous” approach is essentially to use the metalanguage’s types to classify the object language’s terms. Amazingly, it works: you get another way to represent intrinsically well-typed terms that doesn’t use judgmental equality in the object language, effectively using the metalanguage’s judgmental equality instead. This does away entirely with any issue of judgmental coherence. And this appropriation of the metalanguage equality just so happens to be exactly what we want for a standard interpretation. Conor’s paper used inductive-recursive types to develop the representation, but there’s a trick for imitating certain inductive-recursive types with inductive families. Inductive families are available in Coq, so I decided to try writing an interpreter in Coq, based on Conor’s outrageous structures. Indeed, the interpreter I came up with uses many inductive families based on Conor’s type definitions.

We still have a knot to untie, since we’re starting from raw syntax. How do inductive families help us? Well, we need to figure out types for the interpretation functions such that we can actually try to define them by recursion. Using the type interpreter in the type of the term interpreter is too circular because types have terms in them. What if we try to just interpret terms as pairs? Some type with some element of that type? The induction hypothesis would be too weak; things wouldn’t fit together.

In set theory, a function is just a relation that’s functional. What I did was first define interpretation relations, then prove that for all inputs, there’s at most one output (determinism), then prove that for all inputs, there’s at least one output (totality). So basically I defined some functional relations. The actual type theoretic functions are hiding in the constructive totality proofs. This may seem cumbersome and unhelpful, but in fact it’s merely cumbersome. The relations give us the strengthened induction hypotheses we need to make things fit together. Almost. It didn’t solve coherence.
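
The general pattern, in miniature (with schematic names, not the development’s actual definitions):

```coq
(* A relation R, proved functional: at most one output (determinism),
   at least one output (totality, constructively). The function is
   hiding in the totality proof. *)
Section FunctionalRelation.
  Variables (A B : Type) (R : A -> B -> Prop).

  Hypothesis deterministic :
    forall a b1 b2, R a b1 -> R a b2 -> b1 = b2.
  Hypothesis total : forall a, {b : B | R a b}.

  (* Extract the function from the constructive totality proof. *)
  Definition interp (a : A) : B := proj1_sig (total a).

  Lemma interp_correct : forall a, R a (interp a).
  Proof. intro a. exact (proj2_sig (total a)). Qed.
End FunctionalRelation.
```

Determinism isn’t needed to extract the function in this miniature; as described below, it is needed while proving totality, in the cases that compare two interpretations of the same type.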

Inductive-recursive types, and especially their inductive family imitations, are kind of like inductively-defined subsets. The outrageous representation is inductively defining the subset of the metalanguage’s objects that are expressible in the object language. The represented terms are the proofs that the corresponding object is expressible. We can forget the proof, just keeping the object; that’s how outrageous terms are “interpreted”. To get from outrageous representations to the interpretation relations we need, we simply add an index for the corresponding raw syntax. That is, our outrageous terms are now proofs that some raw term is interpreted as a certain object of the metalanguage. We do not need to deal with type derivations here; the relation only deals with well-typed raw syntax. In fact, the definition of the interpretation relations looks an awful lot like the typing rules. Only when proving totality do we need to use derivations.

So in summary, the interpreter is totality proofs for a relation-ized outrageous representation of the language.

Odds and Ends:
In fact, why do we actually need to prove determinism? What we really want is the constructive totality proofs. Well, we can get through many of the cases of totality without needing determinism. The induction hypothesis gives us what we need. But the cases that correspond to typing rules which involve a type equality check get stuck. For my subset, that’s just the application rule, in order to check the argument type against the function domain. We are stuck because we have two hypotheses telling us what the same syntactic type interprets to, and we need to know that those interpretations are the same, which is exactly what determinism would tell us.
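
Schematically, the stuck application case looks like this: two hypotheses interpret the same syntactic type, and only determinism lets us identify their outputs (all names here are illustrative):

```coq
Section ApplicationCase.
  Variables (Syn Sem : Type) (Itrp : Syn -> Sem -> Prop).

  Hypothesis deterministic :
    forall s d1 d2, Itrp s d1 -> Itrp s d2 -> d1 = d2.

  (* The function's domain and the argument's type both arise from
     interpreting the same raw type s; determinism lets us transport
     anything stated about one to the other. *)
  Lemma align (s : Syn) (dom argT : Sem) (P : Sem -> Type) :
    Itrp s dom -> Itrp s argT -> P dom -> P argT.
  Proof.
    intros Hd Ha p.
    rewrite <- (deterministic s dom argT Hd Ha).
    exact p.
  Qed.
End ApplicationCase.
```

Without the determinism hypothesis, there is simply no way to connect `dom` and `argT`, and the application case of totality gets stuck.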

Typing variables requires weakening a type from the context, and typing application requires substitution. Handling this requires a semantic notion of substitutions, and proofs that interpretation respects substitutions. In other words, interpreting a syntactically substituted raw term is the same as interpreting the original term, then semantically substituting. (This would be a commutative square, but the interpreter is still just a relation.) These proofs take the form of an implementation of substitutions for the outrageous representation. Although I eventually got this working fine, it’s actually the most complicated part of the development.
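
Since the interpreter is still only a relation at this point, the would-be commutative square collapses into an implication. Schematically (illustrative names, and substitutions simplified to unary functions):

```coq
(* Schematic statement of the substitution coherence we need:
   interpreting the syntactically substituted term yields the
   semantic substitution of the original interpretation. *)
Section SubstCoherence.
  Variables (Syn Sem : Type) (Itrp : Syn -> Sem -> Prop).
  Variable synSubst : Syn -> Syn.   (* syntactic substitution *)
  Variable semSubst : Sem -> Sem.   (* semantic substitution *)

  Definition respects_subst : Prop :=
    forall s d, Itrp s d -> Itrp (synSubst s) (semSubst d).
End SubstCoherence.
```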

For me, the least intuitive aspect of the outrageous representation is that it wraps up the metalanguage types in a universe construction. Why would you need a universe construction only to then wrap the types up in another, more specific layer of structure for expressibility? In fact there are several simplifications you could make to the outrageous representation if you were thinking of it merely as intrinsically valid syntax. But it turns out you need all that structure for various lemmas ultimately used by the totality proofs. (I don’t know how Conor anticipated the need for it all.) In particular, the universe construction is vital for getting anywhere with the determinism proofs. Without it, the determinism proofs’ induction hypotheses are too weak, because you can’t do enough with equalities on constructed types. For example, given equal function types, we would like equal domains and codomains (injectivity). And given equal (type * term) pairs, when the types are the universe, we would like the terms to be equal as types (disjointness).
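
A non-dependent toy illustrates why an inductive universe construction helps: for inductive codes, injectivity of the type formers is just inversion, whereas for raw metalanguage Types it is unprovable (the names here are hypothetical):

```coq
Inductive TypCode : Type :=
| cUv : TypCode
| cPi : TypCode -> TypCode -> TypCode.

(* Injectivity of the Pi code, by inversion on the inductive type. *)
Lemma cPi_inj : forall A B A' B',
  cPi A B = cPi A' B' -> A = A' /\ B = B'.
Proof.
  intros A B A' B' H.
  injection H.
  intros HB HA.
  subst.
  split; reflexivity.
Qed.
```

In the real development the codomain of a pi type depends on the domain, and the dependent analogue of this inversion is exactly where the determinism proof runs into coherence trouble, as described next.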

The Coherence Problem:
Unfortunately, the application case causes trouble again in the determinism proof, and this time I couldn’t think of any trick. The problem is once again that the domain and argument type need to agree. But this time it’s a coherence problem. We actually have two reasons why those types are connected, and various other paths we need to contract lie over one or the other. But we don’t have the coherence between the type paths that would allow us to contract everything in order to solve the goal with reflexivity. This is where I assumed UIP to finish the proof.

The universe construction is not enough to tell us about an arbitrary type because of type variables. Semisimplicial types are full of type variables, and the problem is to allow arbitrary types for those variables. To avoid demanding any additional structure on abstract types, the El constructor just lifts a plain Type into the universe construction. We have no way of analyzing a path in an abstract type; we need any paths we deal with to be coherent “by construction”.

But the annoying thing is: they are coherent by construction! All the interpreter started with was raw syntax; we constructed all the paths ourselves. And since we didn’t use any HoTT axioms, we’re consistent with UIP, so we couldn’t possibly have constructed disconnected parallel paths. Sadly, intensional type theory (ITT) lacks the ability to express strict equality, so in our induction hypotheses, we’ve forgotten that we built everything to fit together on the nose.

All type families in ITT respect weak equality, so with induction predicates we can only characterize things up to weak equality. Judgmental equality, on the other hand, is strict. In ITT, we only have judgmental equalities expressing computation by unfolding of definitions. But this definitional equality ought to be enough in some sense. We’re defining an interpreter, and it builds things that fit together on the nose by definition. The problem seems to be that we need to know this while we’re still defining it, to handle the tough cases. ITT doesn’t seem to provide any way to use the defining equations of a recursive function while defining it, to help with the recursive calls. So it seems to me like ITT currently has no way to define a standard interpretation function. But I have no rigorous argument.

Possible Solutions:
For solving the coherence problem, we don’t have to wait until we understand what’s going wrong in ITT. We can extend the system to ensure that the desired constructions can be carried out. Here are some ideas I know of:

HTS:
If I understand correctly, one of the motivations for Vladimir Voevodsky’s proposed HTS is that it would overcome the kind of coherence problems that come up with semisimplicial types. Hopefully it would allow non-truncated interpreters to be defined as well. The reason why HTS is expected to solve the problem was discussed on the HoTT Google group. HTS provides a universe of non-fibrant types, where type families need not respect weak equality. In this universe is a strict equality type. This type can be used in an induction predicate to remember that a construction fits together on the nose while you build it. But the finishing move is to somehow get a fibrant function out of the induction even though strict equality is not fibrant. HTS will allow this because many inductive types, including the type of natural numbers, are fibrant and also allowed to eliminate into non-fibrant types. Once the induction is done, the strict equalities can be forgotten, yielding a fibrant function.

Adding a whole ‘nother universe is a pretty heavy solution just to get some new constructions, important as they may be. But the more serious problem is that, if Mike’s concerns are correct, non-fibrant elimination for nats is not justified by enough models of HoTT. The ideal solution to the coherence problem will allow a construction of semisimplicial types (and other things) that works uniformly for as many models as possible. It’s not yet clear that non-fibrant elimination accomplishes that.

Limits of Towers:
On the Google group, subject “Infinitary type theory”, Mike has asked about infinitary type theory, proposing infinitary rules that axiomatize limits of externally-defined towers. Because the definitions are external, they wouldn’t run into the usual coherence problems of certain internal towers. However, Mike doesn’t have a specific proposal yet for finitely presenting the infinitary terms, AFAIK. The details could turn out to be complex. If you want to know more about this idea, ask Mike!

Primitive Interpreter:
Assuming the existence of a sufficiently complete standard interpretation function could do the trick. It would be kind of a nuisance though, since you’d need to define your coherent structures as syntax, and prove them well-typed to interpret them. Also, internal interpretation functions would be awfully complicated things for all models to have to provide. Jason Gross has pointed out a paper describing a primitive interpreter for Agda. It’s not ready-made for HoTT, but that’s the kind of thing I’m thinking of.

Credits:
It was Mike’s observation that the problems of interpreting dependent type systems and defining semisimplicial types (both in HoTT) are related, and both unsolved. If it weren’t for his post, I wouldn’t be here thinking about HoTT.

The cleverest parts of my interpreter are adapted from Conor McBride’s paper (Outrageous but Meaningful Coincidences). You should read it! I have not discussed the outrageous representation in much detail. Also, many of the ingredients of the outrageous representation have their own story, and Conor provides the citations.

I’ve been talking with others about the interpreter problem on and off for several months, mainly on the comments area of Mike’s post. Mike and Jason Gross have been especially active in this discussion. Thanks to this I had a lot of fun, and learned a few things.

Future directions:
I’m wondering what I should do next, in the collective/aggregate opinion of the HoTT community. Although the main reason I wrote an interpreter was not for HoTT, I am interested in HoTT, and I would like to help if it isn’t too big a detour, or if someone would hire me.

Even without solving the coherence problems, interpretation functions would be useful because they are necessary to fully formalize models of HoTT, working in HoTT or any other dependent type system. They are also useful for dependently typed generic programming, and for justifying powerful forms of reflective proof systems, which is what got me interested in the problem.

Here are some ideas for what I might do:

  • Spend some time fixing up style issues with the current code, in case people have many such issues.
  • Prove things (what things?) about the interpreter.
  • Define functions (what functions?) using the interpreter.
  • Port the interpreter to HoTT Coq, using explicit h-sets.
  • Expand the subset of type theory handled by the interpreter, sticking to the standard semantics.
  • Try to generalize the interpreter to use other models for the semantics.
  • Try an alternative approach to interpreters based on taking a limit of restricted interpreters.

Request for Advice:
I am hoping to publish a paper about this kind of interpreter, but I’ve never published papers before. I would really appreciate opinions and advice on how to go about this, like:

  • What journals or conferences would be good for this formalized interpreter written in Coq? How do I pick one?
  • Is this interpreter a significant enough result to be published?
  • How do I go about tracking down the papers I should cite?
  • What nasty surprises do I need to deal with to publish something?
This entry was posted in Code, Foundations, Programming.

77 Responses to A Formalized Interpreter

  1. Jason Gross says:

    In terms of next steps: You could try porting it to Andromeda/tt (https://github.com/andrejbauer/andromeda), which is an implementation of HTS (approximately). I’m not sure how active it is at the moment, though. (I suspect, at the very least, trying to port the interpreter to that would be a good stress test for Andromeda.) I might try my hand at it at some point, if you don’t want to.

    I can’t seem to get your code to compile with Coq 8.4pl3. I get errors like:
    File “Chained.v”, characters 9348-9348: Syntax error
    File “./TreeSyntax.v”, line 481, characters 7-19:
    Error: The reference MinMax.min_l was not found in the current environment.
    File “./Chained.v”, line 93, characters 47-52:
    Error: The reference unzip was not found in the current environment.

    Am I missing something?

    • Matt Oliveri says:

      I haven’t been keeping up with the HTS implementation(s). Is Andromeda a new name for Brazil? Oh yeah, actually I think I saw that discussion, where even Mars was too close for Martin.

      Porting to that would be interesting too, and it might even avoid UIP, if our understanding of HTS is right. I’ll look at Andromeda and decide if I want to do it. Of course, you’re welcome to try it yourself either way.

      I can’t seem to get your code to compile with Coq 8.4pl3.

      Yeah, sorry. The code repository is still kind of a mess. I’m not surprised that Chained.v didn’t work; it hasn’t been updated since I rearranged some things. I intend to put a readme on the repository explaining what should work at least.

      TreeSyntax.v should compile, but I’m using Coq 8.3pl1, so I guess they changed the MinMax library. You should be able to compile up to TreeItrp.v if you can figure out the library and syntax incompatibilities.

  2. My PhD thesis has been looking into this sort of interpreter to be used for computational reflection, and I’ve made a lot of progress building fairly large reflective tactics using a much simpler version of an interpreter that does not handle dependent types but does handle simple types, some polymorphism, and type functions. My current code related to this project is on github (https://github.com/gmalecha/mirror-core).

    I have yet to dig into your code in depth, but from reading your previous posts I think that it could be interesting to apply your interpreter in this setting; essentially using it as a base for building scalable automation in a system like Coq (it also works in other theories).

    • Matt Oliveri says:

      Well for now, the kind of interpreter described in this post wouldn’t work for computational reflection, without some kind of extension to Coq. It won’t actually reduce because it uses a UIP axiom. Even if it weren’t for that, all the hoops I jumped through to “untie the knot” will probably make it run horrendously slow.

      I’ve been thinking about what kind of extension to type theory would allow this kind of interpreter to be written in an algorithmically-acceptable way, but I’m not sure yet.

  3. Andrea Vezzosi says:

    It would make things clearer to add a summary of the main datatypes you are using and the type of the totality and determinism proofs.

    Other than avoiding induction-recursion, is the extra raw term index the main difference with McBride’s work?

    • Matt Oliveri says:

      It would make things clearer to add a summary of the main datatypes you are using and the type of the totality and determinism proofs.

      I was wondering about that. But since there’s quite a few of them, it would’ve made the post quite a bit longer. Here are the types of the determinism (*Uniq) and totality (*Total) proofs, in case it helps:

      spItrpUniq
           : forall (G : Ctx) (F : G -> Typ) (la : list nat) 
               (T1 T2 : G -> Typ) (la1 : forall g : G, F g -> T1 g)
               (la2 : forall g : G, F g -> T2 g),
             SimpParamItrp G F la T1 la1 ->
             SimpParamItrp G F la T2 la2 ->
             existT (fun T : G -> Typ => forall g : G, F g -> T g) T1 la1 =
             existT (fun T : G -> Typ => forall g : G, F g -> T g) T2 la2
      
      tfItrpUniq
           : forall (F : treeFam) (G : Ctx) (F1 F2 : G -> Typ),
             TreeFamItrp G F F1 -> TreeFamItrp G F F2 -> F1 = F2
      
      spItrpTotal
           : forall (G : list treeFam) (F : treeFam) (la : list nat) (T : treeFam),
             SimpParamGood G F la T ->
             forall (G' : Ctx) (F' : G' -> Typ),
             TreeCtxItrp G G' -> TreeFamItrp G' F F' -> ExSimpParamItrp G' F' la T
      
      tfItrpTotal
           : forall (G : list treeFam) (F : treeFam) (G' : Ctx) (C : Type),
             TreeFamGoodPG G F ->
             TreeCtxItrp G G' ->
             (forall F' : G' -> Typ, TreeFamItrp G' F F' -> C) -> C
      

      Other than avoiding induction-recursion, is the extra raw term index the main difference with McBride’s work?

      Yes, for the outrageous representation part. And also that the represented language is different. But the main difference is that I have a formalized raw term interpreter.

      • saizan says:

        They do help, thanks!

        So if I understand correctly, tf* are for types and sp* for terms of a given type.
        In spItrpTotal the term being interpreted is T, while G/G’ and F/F’ are the context and type in raw and semantic form.

        What is “la” though? And is the “SimpParamGood G F la T” premise a proof that T is well-typed that only makes use of the raw syntax?

        • Matt Oliveri says:

          So if I understand correctly, tf* are for types and sp* for terms of a given type.

          Yes, essentially. I did something weird though, which was to type an argument list separately from the head of the application. These sp* proofs are actually about argument lists. “la” stands for “List of Arguments”.

          You are right about X/X’ meaning syntax/semantics in the *Total types. In spItrpTotal, T is the type of the application and F is the type of the head.

          “SimpParamGood G F la T” is “In context G, applying a head of type F to the arguments la will have type T.” Yes, that’s a derivation for raw syntax.

  4. Jason Gross says:

    I decided to take a stab at trying to remove UIP, and I think the essential lemma to prove is that @typPi A B = @typPi A' B' and { H : A = A' & tr (fun A : Typ => A -> Typ) H B = B' } are equivalent types. The functions in each direction are relatively straightforward to define, but proving the necessary inverse law sucked me in for the last 5 hours or so. I managed to get it down to (in the HoTT library notation) a goal of the form

      A : Typ
      B : A -> Typ
      A' : Typ
      B' : A' -> Typ
      H : typPi A B = typPi A' B'
      ============================
       transportD (fun A0 : Typ => A0 -> Typ)
         (fun (A'0 : Typ) (t : A'0 -> Typ) => typPi A B = typPi A'0 t)
         (ap typDom H) B 1 =
       H @
       ap (typPi A')
         ((apD typCdm H) ^ @
          transport_compose (fun T : Typ => T -> Typ) typDom H B)
    
    

    But I can’t seem to get anywhere productive from here.

    • Jason Gross says:

      If I admit that bit, then, using functional extensionality, I can get to a point where it seems I need UIP on transporting (ctxProj a1 g) across typDom applied to the induction hypothesis. But TypR isn’t an hSet even if you replace all the Types with hSets, right? You have El and Uv, and there’s no way those are going to satisfy UIP, I think. So I’m unconvinced you’ll be able to translate your interpreter to HoTT.

      • Matt Oliveri says:

        Hmm, I think you’re right, because the universe of h-sets is only a groupoid. (h-groupoid?) Would anything go wrong if I set-truncated the universe of h-sets?

        Alternatively, for this language, I could limit function domains to be small types, which wouldn’t involve the universe, so they’d be h-sets without needing truncation.

        But yeah, it looks like porting to HoTT will be trickier than I thought.

        • Jason Gross says:

          Do you mean, if you assumed it as an axiom, or if you truncated TypR? The first contradicts univalence (bool is an hSet), and I wouldn’t know about the latter. You or Mike would probably know better than I would. I don’t think restricting yourself to small types would be enough, though; @typPiS bool (fun _ => bool) = @typPiS bool (fun _ => bool) would still contain at least 4 inhabitants, I think.

          • Jason Gross says:

            Sorry, I’m confusing Typ and TypeS. But I’m still unconvinced you can avoid having too many paths floating around. (And, if I understand correctly, it’s not the paths inside of the domain that are bothersome, but the automorphisms of the domain itself.)

            • Matt Oliveri says:

              And, if I understand correctly, it’s not the paths inside of the domain that are bothersome, but the automorphisms of the domain itself.

              Ah, you’re right. Silly me. Lacking strict equality is so annoying.

          • Matt Oliveri says:

            I’m saying we’d change the universe construction so that it’s adding structure to h-sets, not arbitrary types. To do this, we’d use the set-truncation of the universe of h-sets. (We would not assume that that universe was already an h-set.) I think this would be enough to make the whole model and interpreter set-level, but if not then we truncate some more. This way we don’t have to worry about what paths (isomorphisms) we get, because we truncated away the difference. Does this make sense?

        • Mike Shulman says:

          Generally I think that when we get past propositions and sets, it’s better to use numerical indices: the universe of (h-)sets (a.k.a. “0-types”) is a 1-type.

      • Matt Oliveri says:

        (This is a reply to Mike.)

        Come to think of it, how would you bind a variable of a “type” from a truncated universe? It seems like you’d need to change everything to continuation passing style because of all the truncation elims. At that point we’ve crossed the line and it isn’t anything like a standard interpretation anymore. Is this the problem you were envisioning?

        I had assumed there was some easy-to-use fragment of HoTT where all work is done at set level, but now it seems like there isn’t. Is that true? Is that bad?

        • Jason Gross says:

          I had assumed there was some easy-to-use fragment of HoTT where all work is done at set level, but now it seems like there isn’t. Is that true? Is that bad?

          What do you mean? In the HoTT library, we explicitly pass in univalence (and funext) as hypotheses to theorems that need them. So any theorem that doesn’t have a univalence hypothesis is valid in the interpretation of Coq into Sets.

          • Matt Oliveri says:

            The interpretation I was thinking of would be assuming a set truncation type constructor, and the associated rules. It’s not enough to be consistent with the sets interpretation.

            I probably shouldn’t have asked for a “fragment”. What I’m asking for is an encoding/embedding of ITT + UIP into HoTT.

          • Matt Oliveri says:

            The interpretation I was thinking of…

            That was ambiguous. I mean:

            The modification of my interpreter I was thinking of…

        • Mike Shulman says:

          Yes, that’s basically the problem I had in mind. It’s correct that you can’t interpret ITT+UIP (which sometimes goes by the name ETT) with universes into HoTT by taking the universes to be the actual ones (since those are not sets) or their set-truncations (at least as far as I know, due to the problem you mention). The idea that there might be universes that are themselves sets is questionable from a HoTT point of view. Is that bad? Well, that depends on your point of view; it certainly makes some things more difficult.

          It’s possible one could construct such an interpretation by interpreting the universes of ETT as inductive-recursive universes (or perhaps an inductive-family replacement thereof) in HoTT. You couldn’t inject the entire real universe into the IR universe and have the latter stay a set (since then all the non-set-ness of the real universe would come along with it), but if you were content to have a universe that’s closed under the type forming operations, it might work.
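          One way such an inductive-family replacement might look (this is a guess at the construction being alluded to, not code from the repository) is to index the codes by the type they would decode to, so that no recursively-defined decoding function is needed:

          ```coq
          (* Hypothetical inductive-family stand-in for an IR universe:
             "U A" means "a code whose decoding is A".  The decoding
             function of induction-recursion becomes the index. *)
          Inductive U : Type -> Type :=
          | uNat : U nat
          | uPi : forall (A : Type) (B : A -> Type),
              U A -> (forall a : A, U (B a)) -> U (forall a : A, B a).
          ```

          Note that nothing here injects the whole ambient universe into U; a type only gets a code by being built from the closure operations, which is what might leave room for the coded universe to be a set.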

          • Matt Oliveri says:

            ITT+UIP (which sometimes goes by the name ETT)

            They’re not the same though. But I can see why the difference probably doesn’t matter in this context.

            at least as far as I know, due to the problem you mention

            I thought about it some more, and I think the real problem is that the truncation eliminator will give you an arbitrary type, regardless of what you eliminated. So it’s not just a matter of plumbing. And the truncation constructor isn’t injective, so there don’t seem to be inversion-like workarounds.

            The idea that there might be universes that are themselves sets is questionable from a HoTT point of view.

            This is not intuitive for me. I only want the universe of sets as a set. Are you saying that it can’t be defined or that you think it’s inconsistent?

            but if you were content to have a universe that’s closed under the type forming operations, it might work.

            Well, no, I wouldn’t be content with that. Now that Jason set me straight about which paths need to be unique, I’m not expecting to be content with any HoTT-consistent interpreter based on mine, unless it can use strict equality.

            I read about Hugo Herbelin’s development of semisimplicial types, and I seem to remember that he claimed to have a (very complicated) way to handle finite higher dimensions. Maybe that could be adapted for an interpreter. But I definitely don’t want to do that.

            Maybe we should edit the post to point out that porting to HoTT isn’t so easy after all.

          • Mike Shulman says:

            the truncation eliminator will give you an arbitrary type, regardless of what you eliminated

            What are you talking about?

            I only want the universe of sets as a set. Are you saying that it can’t be defined or that you think it’s inconsistent?

            It depends on what you mean by “the universe of sets”. The universe of sets in the obvious sense is just not a set; it’s a 1-type. And its 0-truncation isn’t really a universe of sets; I think it’s better thought of as something like “the set of cardinalities”.

            It seems to me that what you would need in order to treat some set as “a universe of sets” is that it maps to “the” universe of sets, so that it’s a sort of “set of names for sets”. Such a map can’t be split surjective, i.e. there can’t be a way to assign a name to every set, because then “the” universe of sets would be a retract of a set and hence itself a set. It’s consistent for such a map to be surjective, in the propositionally-truncated sense, since such a surjective map does exist in the simplicial model. But I don’t know any way to define such a surjective map, and I wouldn’t be surprised if there were a model in which no such map existed. (There are models that contain 1-types which do not admit any surjective map from a set, but I don’t know whether the universe can be one of them.)
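            (As a side note, the retract step used here — a retract of a set is itself a set — can be checked in plain Coq with the usual hand-written definitions; the lemma names below are ad hoc, and nothing is taken from a HoTT library:

            ```coq
            Definition IsHSet (A : Type) := forall (x y : A) (p q : x = y), p = q.

            (* By path induction. *)
            Lemma trans_sym_l {A : Type} {a b : A} (e : a = b) :
              eq_trans (eq_sym e) (eq_trans eq_refl e) = eq_refl.
            Proof. destruct e. reflexivity. Qed.

            (* A path in the retract B is recovered from its image in A. *)
            Lemma retract_path {A B : Type} (s : B -> A) (r : A -> B)
              (h : forall b, r (s b) = b) {x y : B} (p : x = y) :
              p = eq_trans (eq_sym (h x)) (eq_trans (f_equal r (f_equal s p)) (h y)).
            Proof. destruct p. symmetry. apply (trans_sym_l (h x)). Qed.

            (* Hence if B is a retract of a set A, then B is a set. *)
            Lemma retract_hset {A B : Type} (s : B -> A) (r : A -> B)
              (h : forall b, r (s b) = b) : IsHSet A -> IsHSet B.
            Proof.
              intros HA x y p q.
              rewrite (retract_path s r h p), (retract_path s r h q).
              rewrite (HA _ _ (f_equal s p) (f_equal s q)).
              reflexivity.
            Qed.
            ```

            Applied with s a section of the retraction r from a "set of names" onto the universe of sets, this yields exactly the contradiction described above.)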

            • Matt Oliveri says:

              What are you talking about?

              Don’t worry about it. Describing the set-truncation of the universe of sets as “the set of cardinalities” is a better intuition for why it won’t work.

          • Matt Oliveri says:

            Such a map can’t be split surjective, i.e. there can’t be a way to assign a name to every set, because then “the” universe of sets would be a retract of a set and hence itself a set.

            You mean an internal function that assigns a name to every set? Is this what you would need to actually talk (in HoTT) about sets as elements of a set? (Guess: yes)

            It’s consistent for such a map to be surjective, in the propositionally-truncated sense, since such a surjective map does exist in the simplicial model.

            What about in the constructive but set-theoretic sense? In other words, in the set-theoretic model, can you assign names to all representations of h-sets? It seems like the answer is obviously yes, by using those representations as the names. So the question is whether that set of names can be a HoTT type. (Guess: no)

            • Mike Shulman says:

              You mean an internal function that assigns a name to every set?

              Yes.

              Is this what you would need to actually talk (in HoTT) about sets as elements of a set?

              Well, it might depend on what you mean by “talk about sets as elements of a set”. But yes, for some things you might want to do, at least, I think you’d need this.

              What about in the constructive but set-theoretic sense?

              I don’t understand. What do you mean by “the set-theoretic model” and “names” and “representations”?

            • Matt Oliveri says:

              “in the set-theoretic model”: working externally
              “names”: Well what did you mean? I thought we meant set elements that designate sets.
              “representations”: the set-theoretic objects that the model uses for HoTT objects

            • Mike Shulman says:

              By “names” I meant elements belonging to an h-set that surjects onto the universe of h-sets. They’d be completely internal to HoTT.

              I still don’t know what you mean by “the set-theoretic model”. “Working externally” sounds like you’re just talking about HoTT in the metatheory, but later you referred to “the set-theoretic objects that the model uses for HoTT objects” which seems to imply that you actually have some model of it in mind. Are you thinking of the simplicial-set model that assigns a simplicial set to every HoTT-type?

            • Matt Oliveri says:

              Are you thinking of the simplicial-set model that assigns a simplicial set to every HoTT-type?

              No. But it looks like what I’m thinking of wouldn’t help.

    • Matt Oliveri says:

      I think that type equivalence would contradict univalence. A path between typPi objects essentially consists of three paths: the paths for the domain and codomain Typs, and also the path for the plain pi types. The definition of the universe construction guarantees that the plain type path has the right endpoints, but that’s it, I think.

      So consider a path of type typPi (typEl unit) (fun _=>typEl bool) = typPi (typEl unit) (fun _=>typEl bool). Make the domain and codomain paths reflexivity, but use univalence to have the plain type path (of type unit->bool = unit->bool) negate the boolean.

      If the directions of this hypothetical equivalence are what I think, then going from the typPi path to the pair of paths and back will throw out the negating path and replace it with reflexivity. But these cannot be equal.

      • Mike Shulman says:

        I don’t think it’s that simple: since Typ is a Sigma-type, the paths between the domain and codomain Typs will involve a transport along the path for the underlying plain Pi types.

        My first thought would be to try an encode-decode. Since TypS is an ordinary inductive type, it should be possible to characterize its path-spaces recursively with encode-decode, and this should be the clause for typPiS.

        • Matt Oliveri says:

          Shoot, you’re right; I imagined the wrong situation. I don’t know what to think about Jason’s equivalence for now.
          How about you work with Jason on this? It looks like you guys are better at it than me.

          • Jason Gross says:

            Nooooo, I was hoping you’d pick it up, so that it wouldn’t eat all of my time. 🙂 I tend to get way too addicted to problems like these, and lose a day here and there trying to classify the path space of a type, or trying to formulate rules for commuting match statements.

          • Matt Oliveri says:

            Sorry, I need to make a readme, and try my own harebrained idea to eliminate UIP, and decide whether to try Andromeda, and…

          • Mike Shulman says:

            Unfortunately, I don’t have the time at the moment to do more than holler advice from the sidelines, and won’t for quite a while. But I do think encode-decode is worth pursuing, if anyone ever feels like it.

  5. Matt Oliveri says:

    I added a readme. You can see the typing rules and get an overview of the code organization there.

  6. Alexander Kuklev says:

    Have you read the paper about Positive Induction-Recursion by N. Ghani, L. Malatesta and F. Nordvall Forsberg?
    To generalize the interpreter and representation to HoTT, you would morally need some kind of higher induction-recursion. The notion of positive induction-recursion seems to have nice categorical properties and might turn out to be extensible to ∞-groupoids. In particular, you cannot define a universe closed under Pi-types by means of positive induction-recursion, unless you perform a trick that doesn’t work well in ITT + UIP, but might work well in HoTT, leading to a full-fledged univalent universe (not just an h-set truncation thereof); see pp. 8-9 of https://personal.cis.strath.ac.uk/fredrik.nordvall-forsberg/papers/positive_calco2013.pdf

    • Matt Oliveri says:

      Thanks for the reference. I hope Mike has the time to take a look, if he hasn’t already. Which type theory features make sense in homotopy models is his department.

      But I’m not sure why you think higher induction-recursion would be needed; the syntax is always plain “lower” inductive types, and the (lower) inductive family adaptation of induction-recursion still seems to apply when the ambient universe is univalent. Would higher induction-recursion work around the coherence problem somehow?

    • Mike Shulman says:

      My impression is that positive IR would be most relevant to directed (homotopy) type theory, not to HoTT as it stands now. Maybe I’m wrong (I haven’t puzzled through their formal definitions), but positive IR over groupoids (which is what they end up using for Pi-types) doesn’t seem to me to be much different from ordinary IR, especially in the presence of univalence.

      Higher (ordinary) IR does allow us to define a univalent universe, as here. However, like Matt, I’m not sure why you would expect to need higher IR for an interpreter, since the syntax is automatically an h-set.

  7. Matt Oliveri says:

    Hey guys, here’s what I’m trying to do to eliminate UIP. I’m not that hopeful, but I’d like to hear your thoughts.

    I’m trying to prove a slightly strengthened induction predicate for the spItrpUniq proof. The idea is to try to exploit the naturality of the induction hypothesis. Here’s the new lemma:

    Inductive ExSimpParamItrp G F la : Type :=
    	exSimpParamItrp T' la' : SimpParamItrp G F la T' la'->ExSimpParamItrp G F la.
    Implicit Arguments exSimpParamItrp [G F la].
    
    Lemma spItrpProp G F la : forall Tla1 Tla2:ExSimpParamItrp G F la,Tla1 = Tla2.
    

    Aside from some modified plumbing, it’s practically the same as the old lemma:

    Lemma spItrpUniq G F la : forall T1 T2 la1 la2,SimpParamItrp G F la T1 la1->SimpParamItrp G F la T2 la2->
    (existT (fun T:G->Typ=>forall g,F g->T g) T1 la1 = existT _ T2 la2).
    

    The difference is that now we have a path for the SimpParamItrp proofs too. The analogous addition didn’t make atCtxUniq harder, so I’m hoping spItrpProp won’t be harder than spItrpUniq.

    But more importantly, I’m hoping it might be easier, because spItrpProp is saying that the interpretability of an argument list is a mere proposition. This gives me some standard tricks to try with naturality, like that props are sets, etc. I’m hoping that this somehow lets me get the coherence between paths that I’m missing in the application case (of the proof of spItrpProp). But it’s not obvious to me how to do that.

    How ’bout you guys? Any clever tricks? Any intuition for what coherence this is buying me, if any? Any intuition for why interpretability cannot possibly be a mere proposition?
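    In case it helps anyone who wants to try: the “props are sets” trick can be stated self-containedly in Coq, without a HoTT library (the definitions below are the standard ones written out by hand, and the lemma names are ad hoc; none of this is from the repository):

    ```coq
    Definition IsHProp (A : Type) := forall x y : A, x = y.
    Definition IsHSet (A : Type) := forall (x y : A) (p q : x = y), p = q.

    (* Left cancellation, by path induction. *)
    Lemma trans_inv_l {A : Type} {a b c : A} (e : a = b) (p : b = c) :
      eq_trans (eq_sym e) (eq_trans e p) = p.
    Proof. destruct e. destruct p. reflexivity. Qed.

    (* Naturality of the contraction h x: composing it with any path
       lands back on the contraction. *)
    Lemma contr_natural {A : Type} (h : IsHProp A) (x : A) {y z : A} (p : y = z) :
      eq_trans (h x y) p = h x z.
    Proof. destruct p. reflexivity. Qed.

    (* Props are sets. *)
    Lemma hprop_hset (A : Type) (h : IsHProp A) : IsHSet A.
    Proof.
      intros x y p q.
      rewrite <- (trans_inv_l (h x x) p), <- (trans_inv_l (h x x) q).
      rewrite (contr_natural h x p), (contr_natural h x q).
      reflexivity.
    Qed.
    ```

    The naturality lemma is the same shape of argument that spItrpProp would need, which is why the hProp formulation might open up these tricks.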

  8. Matt Oliveri says:

    Major update to the repository, FYI. Still uses UIP though… although I did get it down to exactly the one place where it’s really needed. (Before, it was used twice.)

    • Gregory Malecha says:

      I downloaded the code and tried to build it, but it seems that what is in the Google Code repository is incomplete. For example, the file Chained.v ends in “Inductive Chain : nat->Type->Type := (end of file)”.

  9. Alexander Ruliov says:

    Hello. I don’t understand what you are talking about here, but I currently want to create my own language with dependent types, and maybe I need some help.

    Currently I have absolutely nothing except a feeling for what it should be. I introduced a notion of classes, which are like typeclasses in Haskell but work with univalence (I say that all implementations of a concrete class must be isomorphic to one another). For example:

    ```
    class Nat:
    zero : Nat
    succ : Nat -> Nat

    … here also axioms like succIsNotZero : x -> Eq (succ x) zero -> False and etc …

    induction : P:(Nat -> U) -> P zero -> (a:Nat -> P a -> P (succ a)) -> x:Nat -> P x
    inductionOnZero : P -> pz -> ps -> Eq (induction P pz ps zero) pz
    inductionOnSucc : P -> pz -> ps -> x -> Eq (induction P pz ps (succ x)) (ps x (induction P pz ps x))
    ```

    And this class Nat can be “implemented” by a concrete type N = 1 + N and a number of functions that have the types of zero, succ, and induction, with Nat substituted by the type N = 1 + N. Here + (sum) and 1 (unit type) are classes too, but they have no implementation in the theory itself and are provided just as axioms.

    And now I’m working on describing a class that represents Type itself. I want to describe this class, then understand what classes themselves are (it looks like they can be described by a class parametrized by the set of required implementations, which works like a monad that runs all computations in the context of the class and adds requirements to the parameters; to evaluate it and get a pure value, you must provide all the requirements). And if I can describe Type in terms of my Type, and in this described Type describe Type… I will write an interpreter for it (and will work on equivalence-preserving translations of programs, to build a compiler and optimizer).

    I have a problem now. I can build this theory if I throw away recursive types. Without them everything is almost trivial. I can maybe even build this theory if I allow recursive types that point to themselves but don’t use themselves in dependent types (like N = 1 + N or G = G -> 0).

    But problems begin when I want to provide the ability to declare chained types, or types that can use themselves in dependencies. Like:

    ```
    A = Nat -> B -> Nat
    B = Nat -> A -> Nat
    ```

    Or:

    ```
    A = 1 + (a:A -> induction (_ => U) Nat (f => Unit) a)
    ```

    And chained:

    ```
    A = B -> U
    B = 1 + (a:A -> induction (_ => U) Nat (f => Unit) a)
    ```

    And… I don’t know how to cope with them.

    Currently I realized that my previous thought, that Types and Values are from different worlds, was wrong, and only Values exist. Types can be distinguished from Values by introducing a class IsType that has constructors only for values that are types, like:

    ```
    class Value Nat
    void : Value one
    unit : Value one
    sum : n -> a:Value n -> IsType a -> b:Value n -> IsType b -> Value n

    class IsType n:Nat (Value n)
    voidIsType : IsType one void
    unitIsType : IsType one unit
    sumIsType : …

    ```

    I also see that chained types can be described if I add to Value a parameter that holds an array and an index representing “this type”, and add a special type “hole” that has an index into this array too.

    … but what to do with recursive types and their use in dependent types, I don’t know. And I don’t know anybody who is interested in this too and can help with it. I see that I only need time to solve these problems, but it will take a year or more, and I think a solution to cope with this already exists.

    Thanks. Sorry for my English; it’s not my native language. I hope it’s not as bad as I feel it is. Thanks again.

    • Matt Oliveri says:

      For some reason, I’m having trouble making a full reply to your comment…

    • Matt Oliveri says:

      Hello, Alexander Ruliov. I can’t say I completely follow where you’re coming from, but I want to help anyone who’s trying to understand possible uses of dependent types in programming, so I’ll try.

      What do you mean that classes should be like Haskell typeclasses, but “work on univalence”? Is there some problem, as it is, with the combination of typeclasses and univalence? I haven’t actually used typeclasses before, so I don’t know the details, but my understanding is that they’re vaguely like module types, with syntactic tricks for ad-hoc polymorphism on top. And modules are essentially dependent records, which work great with univalence. (Let me know if I need to brush up on typeclasses a little to help here.)

      Your “Nat” class seems a lot like the universal characterization of natural numbers as an inductive type. In 1-categories these kinds of universal constructions already characterize a type (object of the category) up to isomorphism. In HoTT the details are different, but I gather that it still basically does what it should. I seem to remember the HoTT book has sections proving the uniqueness, up to equivalence, of types with some universal properties.

      In general though, instances of the same typeclass need not be equivalent. For example, you wouldn’t expect all monads to be equivalent.

      With a class (type?) of types, you seem to be getting at something like a universe construction. In fact, universe constructions are universal constructions, which is an unfortunate almost-pun. The easy way to do (many) universe constructions is with inductive-recursive definitions, which you can try in Agda. There have got to be plenty of explanations online. There’s also a post on this blog about an interaction between universe constructions and univalence.

      But you’re trying to throw general recursive types into the mix. Most type theorists study type systems that are normalizing, and these systems will not have general recursive types. You should think carefully before throwing in general recursion. It totally changes the semantics of the type system, but in a subtle way, since the syntax is so similar to the restricted fixed points allowed by inductive types. The big example of what changes is that now types do not make for very good propositions, since all types are inhabited, so all “propositions” would be “provable”, including ones that are intuitively just false. Even without props as types, you get, for example, mysterious, non-terminating nats which break the induction principle. And that’s all before you even get into the types themselves being recursively-defined!

      • Jason Gross says:

        Regarding type classes: a type class in Coq is just a record type, together with the ability to tell the type inference engine “when you can’t figure out an inhabitant of this record type by unification, run this arbitrary tactic”.

      • rulexec says:

        > What do you mean that classes should be like Haskell typeclasses, but “work on univalence”? Is there some problem, as it is, with the combination of typeclasses and univalence?

        No, I have no problems with them at the current stage.

        > Your “Nat” class seems a lot like the universal characterization of natural numbers as an inductive type. In 1-categories these kinds of universal constructions already characterize a type (object of the category) up to isomorphism.

        It characterizes, yes, but what will happen if I create such a class:

        class NotGood
        false : False

        Where False is the 0-type. Will it characterize a type, one of whose rules returns false? I use these classes to describe types, yes, by giving only the types of functions, but not the functions themselves. And to check whether some class is inhabited, I require writing an “implementation”, which takes some “implemented” classes, substitutes the class name with the chosen “data carrier type”, and provides all the functions of the class.

        Anyway, never mind about this; I have no problem here, and this is only my way to describe types without an actual definition.

        > In general though, instances of the same typeclass need not be equivalent. For example, you wouldn’t expect all monads to be equivalent.

        No, my instances are equivalent. For example, look at Monoid. Monoid is a tuple (A:U, f:(A -> A -> A), IsAssociative f, a:A, (x:A -> Eq (f x a) x)) (some type A, a binary operation f, a proof of associativity, some element of A, and a proof that this element is neutral for f).

        And I’m not saying “class Monoid U”; I’m saying “class Monoid A:U f:(A -> A -> A) (IsAssociative f)”. Because it’s evident that for a type A there can be many associative binary operations that have a neutral element and form a monoid, but a type and a concrete associative binary operation can form only one monoid. So all instances of “Monoid A:U f:(A -> A -> A) (IsAssociative f)” will be equivalent. And monads can be specialized to satisfy this requirement too.

        > You should think carefully before throwing in general recursion. It totally changes the semantics of the type system, but in a subtle way, since the syntax is so similar to the restricted fixed points allowed by inductive types. The big example of what changes is that now types do not make for very good propositions, since all types are inhabited, so all “propositions” would be “provable”, including ones that are intuitively just false. Even without props as types, you get, for example, mysterious, non-terminating nats which break the induction principle. And that’s all before you even get into the types themselves being recursively-defined!

        I will reply to this in the next comment.

        • Matt Oliveri says:

          class NotGood
          false : False

          Where False is the 0-type. Will it characterize a type, one of whose rules returns false? I use these classes to describe types, yes, by giving only the types of functions, but not the functions themselves. And to check whether some class is inhabited, I require writing an “implementation”, which takes some “implemented” classes, substitutes the class name with the chosen “data carrier type”, and provides all the functions of the class.

          Right, so the class NotGood wouldn’t have any instances. No harm done.

          And I’m not saying “class Monoid U”; I’m saying “class Monoid A:U f:(A -> A -> A) (IsAssociative f)”. Because it’s evident that for a type A there can be many associative binary operations that have a neutral element and form a monoid, but a type and a concrete associative binary operation can form only one monoid.

          I don’t understand what you mean here.

          So, all instances of “Monoid A:U f:(A -> A -> A) (IsAssociative f)” will be equivalent.

          I’m pretty sure that’s wrong. What about the monoid of nats with addition, versus the one-element monoid?

    • Matt Oliveri says:

      While non-termination and Turing completeness are important for real programming languages, throwing general recursion into a dependent type system is fraught with peril. Nuprl did it one way, Trellys is doing it a slightly different way… Both fall short of handling the example recursive type definitions you gave. Cayenne did something quite different, and could maybe do what you want. (I forget.) But it’s useless as a logic, and I don’t think anyone still thinks that’s a good approach to a dependently-typed programming language.

      I don’t want to discourage you; there’s got to be something like dependent types that can reason about all sorts of general recursive programs. It’s just that there isn’t yet a consensus about how much general recursion is really important (Need general recursive types? Fixed point induction?) or the best way to combine it with the benefits of dependent types.

      In fact, it seems like there isn’t even a consensus on what the benefits of dependent typing are, for programmers. To me, while it seems like a cool, productive way to do program verification, it doesn’t seem illuminating as an implementation technique. Almost the opposite: implementing dependent types seems like a can of worms that we’re still sorting through. But probably most people thinking about dependent types think the benefits are more far-reaching than I do. I like refinement types. We basically already know what programs are; how can types help us reason about them?

      I would be happy to talk more about the goals of your language, and what ideas might be helpful for it.

      • Alexander Ruliov says:

        > While non-termination and Turing completeness are important for real programming languages, throwing general recursion into a dependent type system is fraught with peril.

        No-no-no, I don’t want general recursion. … More precisely, I want general recursion, but only in types, not in “values”. I want to be able to define any recursive type, but not to be able to define any recursive value/function. Even more, I want to retain a finite induction principle on both types and values, so that it always terminates.

        And it looks like it is possible to introduce such types (small example at the end of this comment).

        > In fact, it seems like there isn’t even a consensus on what the benefits of dependent typing is, for programmers. To me, while it seems like a cool, productive way to do program verification, it doesn’t seem illuminating as an implementation technique. Almost the opposite: implementing dependent types seems like a can of worms that we’re still sorting through. But probably most people thinking about dependent types think the benefits are more far-reaching than I do. I like refinement types. We basically already know what programs are; how can types help us reason about them?

        I don’t know about other programmers, but I just want to be able to write formal specifications of programs and check that a program follows its specification. I also believe that besides bug-free software, it will give more effective optimizations, since the compiler will know absolutely everything about the code: what it does, what it should do, and what it should not do, and can make equivalence-preserving transformations more freely than at present. I think that even if it is very hard to write programs in dependent types, it will be worth it anyway: first, the result will be final, because it will have no bugs (if the specification has no problems itself); and second, it will be possible to give a coding task to anybody who wants it, because he cannot even compile wrong code. He can either produce a result, or nothing at all.

        Or maybe all my small work is useless. But I don’t know what else to do, except return to “normal programming” and grow “horizontally”, learning different concrete libraries/frameworks. I want to grasp different concepts and paradigms and challenge unsolved problems, not just read docs and solve monotonous, endless tasks.

        ————————————————–

        ```
        class Type
        unit : Type
        sum : Type -> Type -> Type
        hole : Type

        induction : P:(Type -> U) -> P unit -> (a:Type -> P a -> b:Type -> P b -> P (sum a b)) -> P hole -> x:Type -> P x

        class Value Type
        instance : T:Type -> induction (_ => (Type -> U -> U))
          (_ W => W)
          (a pa b pb => Q _ => ((pa Q (Value (sum a b))), (pb Q (Value (sum a b))))
          (Q W => Value Q -> W)
          T T (Value T)

        induction : … some much crazier stuff …
        ```

        Example:

        ```
        nat = sum unit hole

        zero = first (instance nat)
        one = (second (instance nat)) zero
        two = (second (instance nat)) one
        ```

        What here happens:

        sum unit hole — trivial; constructs some element of Type.

        instance nat — resolves to the function instance of class Value, whose type is determined by induction on the passed argument.

        And by induction on Type it has type (Value (sum unit hole), Value nat -> Value (sum unit hole)).
        first and second are functions to extract values from pairs.

        So first (instance nat) just returns an element of Value unit. But second (instance nat) returns a function from Value nat to Value (sum unit hole). The only Value nat that we have is zero (which is a Value (sum unit hole), but nat = sum unit hole by definition).

        … and it looks like this can work, and defines a recursive type whose elements we can even construct (I haven’t written induction on values, but it seems that it’s possible too).

        • Alexander Ruliov says:

          Oh, how do I insert blockquotes?

          • Matt Oliveri says:

            WordPress markup is HTML-ish. Not everything works in comments, but <blockquote> does, and a lot of other basic formatting elements.

        • Matt Oliveri says:

          No-no-no, I don’t want general recursion. … More precisely, I want general recursion, but only in types, not in “values”. I want to be able to define any recursive type, but not to be able to define any recursive value/function.

          If you have general recursive types, then the Y combinator becomes well-typed, so you effectively have general recursive values.

          f : T -> T
          G = G -> T
          Y_f = (\x:G->f (x x)) (\x:G->f (x x)) : T
          Y_f = f Y_f

          See, we can apply functions to themselves, which is bad news for termination. In particular, for T = False and f = \x->x, Y_f “proves” False.
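          In Coq, the same derivation goes through if we hypothesize the recursive type as a pair of coercions (roll and unroll below are assumptions standing in for G = G -> False; no such G is definable in plain Coq, which is the point):

          ```coq
          Section RecursiveTypesBreakLogic.
            Variable G : Type.
            Variable roll : (G -> False) -> G.
            Variable unroll : G -> (G -> False).

            (* Matt's Y_f with T = False and f the identity:
               self-application inhabits False. *)
            Definition omega : False :=
              (fun x : G => unroll x x) (roll (fun x : G => unroll x x)).
          End RecursiveTypesBreakLogic.
          ```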

          But I’m confused; why would you want recursive types but not recursive values? I associate general recursion with programming, where general recursive functions are important.

          I just want to be able to write formal specifications of programs and check that a program follows its specification.

          There are a lot of approaches to that. Dependent types are a relative newcomer.

          I also believe that besides bug-free software, it will give more effective optimizations, since the compiler will know absolutely everything about the code: what it does, what it should do, and what it should not do, and can make equivalence-preserving transformations more freely than at present.

          I used to think specifications would be important for optimizers, but I’m not so sure anymore. It’s not the case that all information about a program is automatically useful for optimizations. Nor is it easy for the optimizer to pick out the useful parts from such an expressive language as formal specs. I figure being able to check code against its spec will let people do more daring optimizations, but that’s an indirect benefit. Anyway you’re probably commenting on the wrong blog to talk to experts about code optimization. 🙂

          I think that even if it is very hard to write programs in dependent types, it will be worth it anyway: first, the result will be final, because it will have no bugs (if the specification has no problems itself); and second, it will be possible to give a coding task to anybody who wants it, because he cannot even compile wrong code. He can either produce a result, or nothing at all.

          Now there’s an advantage of verification I wish I saw more often! Software companies spend so much time vetting programmers, when it should be proof checkers vetting programs.

          And it looks like it is possible to introduce such types (small example at the end of this comment).

          I’ll take a look at this later.

          • Alexander Ruliov says:

            > If you have general recursive types, then the Y combinator becomes well-typed, so you effectively have general recursive values.

            OK, I see; this is definitely not what I want. So I need some rules that describe a subset of general recursive types that does not lead to general recursive values.

            • Matt Oliveri says:

              It’s not that you can’t combine general recursion and logic. It’s just that to do so, you need to make a hard distinction between types of programs and types of mathematical values. You can look at Nuprl’s “bar types” and/or Trellys for examples of this in dependent type systems. And any approach to program verification where the logic is a separate language distinguishes math vs. program types by default.

              For the math side (without general recursion), you might want to take a look at the paper “The Gentle Art of Levitation” for a generic (like your Type and Value classes) description of inductive families, which are an expressive form of data type which are logically consistent.
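[Editor's note: for a concrete flavor of what "inductive families" buy you, here is an editor's sketch in Haskell using GADTs; it is not the levitation encoding from the paper, just the standard length-indexed vector example.]

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Vec is an inductive family: a type of lists indexed by length.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The index rules out bad calls statically: vhead accepts only
-- nonempty vectors, so no missing-case error is possible.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```

Unlike general recursive types, families like this are built from strictly positive constructors, so they stay logically consistent.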

        • Matt Oliveri says:

          class Type
          unit : Type
          sum : Type -> Type -> Type
          hole : Type

          [snip]

          … and it looks like this can work. It defines a recursive type whose elements we can even construct (I haven't written induction on values, but it seems that's possible too).

          Interesting. I’m not clear on why you’re using a class for Type. It looks like it should be an inductive type. In fact, I’m assuming you want the inductive computation rules, which you left out, otherwise the example won’t type check. (And there might be a problem for univalence with assuming judgmental equalities.) But it looks like with Value, you have a generic signature for simple recursive types. (If it works; I didn’t check very carefully.) But this does not actually give you any new types; you’ve just assumed them. So I’m still not clear on what you’re trying to do with all this.

          • Alexander Ruliov says:

            Why does it not give new types? This universe is very weak, but it has infinitely many types, like:

            1, 1 + 1, 1 + 1 + 1, 1 + 1 + 1 + …, A = A + 1, A = 1 + A, A = 1 + 1 + A, A = …

            It's easy to introduce false, product, maybe even arrow, and to somehow provide the ability to “close recursion”, to be able to get a finished type (N = 1 + N) and then construct N + N (in the current universe, if you try this, you will actually get N = (1 + N) + (1 + N)).

            OK, thanks for the help. Now I see that I don't want general recursive types, and I must find a boundary that is as close as possible to general recursive types without letting them go beyond finite inductive types.

            • Matt Oliveri says:

              Why does it not give new types? This universe is very weak, but it has infinitely many types, like:

              1, 1 + 1, 1 + 1 + 1, 1 + 1 + 1 + …, A = A + 1, A = 1 + A, A = 1 + 1 + A, A = …

              The problem isn’t the universe you described, it’s the fact that Value is a typeclass, so you need an instance of it before it’s useful. To provide an instance of Value (including implementing the “instance” function) is precisely to implement all those types you can name with the universe.

              …Unless you mean for the Value class to have an instance by assumption. But that seems like the hard way to add type constructors to your language. (And of course if you assume recursive types, then you’re inconsistent.)
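[Editor's note: to make the point about instances concrete, here is an editor's illustration in Haskell of defining the universe inductively rather than as a class, so that decoding is implemented once and for all instead of assumed. It covers only unit and sums; the `hole` constructor from the original sketch is omitted, since its intended role was elided above.]

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Codes for the universe: unit and binary sums.
data Ty = Unit | Sum Ty Ty

-- El decodes a code to its type of elements. Writing this
-- inductive definition is exactly what it means to implement
-- all the types the universe can name.
data El (t :: Ty) where
  TT  :: El 'Unit
  Inl :: El a -> El ('Sum a b)
  Inr :: El b -> El ('Sum a b)

-- Booleans as 1 + 1 inside the universe:
type Bool' = 'Sum 'Unit 'Unit

true', false' :: El Bool'
true'  = Inl TT
false' = Inr TT
```

With this inductive presentation nothing is assumed: every inhabitant of `El t` is built from the constructors, so the universe stays consistent, at the cost of not admitting genuinely recursive codes.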

    • Matt Oliveri says:

      If you cannot be dissuaded from adding general recursive types, there is a paper (which I’m mysteriously unable to link to) about Type:Type by Erik Palmgren which shows how you can define general recursive universe constructions, which can handle pretty much any type constructions one could imagine. But at what cost? Inconsistency, undecidable type checking (in case that bothers you), weird, domain-theoretic semantics, and an interaction with univalence which is anyone’s guess.
