Jean-Louis Krivine

LAMBDA-CALCULUS, TYPES AND MODELS

Translated from French by René Cori

To my daughter
Contents

Introduction 5

1 Substitution and beta-conversion 7
  Simple substitution 8
  Alpha-equivalence and substitution 12
  Beta-conversion 18
  Eta-conversion 24

2 Representation of recursive functions 29
  Head normal forms 29
  Representable functions 31
  Fixed point combinators 34
  The second fixed point theorem 37

3 Intersection type systems 41
  System DΩ 41
  System D 50
  Typings for normal terms 54

4 Normalization and standardization 61
  Typings for normalizable terms 61
  Strong normalization 68
  βI-reduction 70
  The λI-calculus 72
  βη-reduction 74
  The finite developments theorem 77
  The standardization theorem 81

5 The Böhm theorem 87

6 Combinatory logic 95
  Combinatory algebras 95
  Extensionality axioms 98
  Curry's equations 101
  Translation of λ-calculus 105

7 Models of lambda-calculus 111
  Functional models 111
  Spaces of continuous increasing functions 116
  Spaces of initial segments 117
  Applications 125
  Retractions 130
  Qualitative domains and stable functions 134

8 System F 145
  Definition of system F types 145
  Typing rules for system F 146
  The strong normalization theorem 150
  Data types in system F 153
  Positive second order quantifiers 159

9 Second order functional arithmetic 165
  Second order predicate calculus 165
  System FA2 172
  Realizability 179
  Data types 182
  Programming in FA2 185

10 Representable functions in system F 193
  Gödel's ¬-translation 196
  Undecidability of strong normalization 199

Bibliography 203
INTRODUCTION

The lambda-calculus was invented in the early 1930s by A. Church, and has been considerably developed since then. This book is an introduction to some aspects of the theory today: pure lambda-calculus, combinatory logic, semantics (models) of lambda-calculus, type systems. All these areas will be dealt with, only partially, of course, but in such a way, I think, as to illustrate their interdependence, and the essential unity of the subject.

No specific knowledge is required from the reader, but some familiarity with mathematical logic is expected; in chapter 2, the concept of recursive function is used; parts of chapters 6 and 7, as well as chapter 9, involve elementary topics in predicate calculus and model theory.

For about fifteen years, the typed lambda-calculus has provoked a great deal of interest, because of its close connections with programming languages, and of the link that it establishes between the concept of program and that of intuitionistic proof: this is known as the "Curry-Howard correspondence". After the first type system, which was Curry's, many others appeared: for example, de Bruijn's Automath system, Girard's system F, Martin-Löf's theory of intuitionistic types, Coquand-Huet's theory of constructions, Constable's Nuprl system...

This book will first introduce Coppo and Dezani's intersection type system. Here it will be called "system DΩ", and will be used to prove some fundamental theorems of pure lambda-calculus. It is also connected with denotational semantics: in Engeler and Scott's models, the interpretation of a term is essentially the set of its types.

Next, Girard's system F of second order types will be considered, together with a simple extension, denoted by FA2 (second order functional arithmetic). These types have a very transparent logical structure, and a great expressive power.
They allow the Curry-Howard correspondence to be seen clearly, as well as the possibilities, and the difficulties, of using these systems as programming languages.

A programming language is a tool for writing a program in machine language (which is called the object code), in such a way as to keep control, as far as possible, on what will be done during its execution. To do so, the primitive method would be to write directly, in one column, machine language, and, alongside, comments indicating what the corresponding instructions are supposed to do. The result of this is called a "source program". Here, the aim of the "compilation", which transforms the source program into an object code, will be to get rid of the comments. Such a language is said to be primitive, or "low level", because the computer does not deal with the comments at all; they are entirely intended for the programmer. In a higher level language, part of these comments would be checked by the computer, and the remainder left for the programmer; the "mechanized" part of the comments is then called a "typing". A language is considered high level if the type system is rich.

In such a case, the aim of the compilation would be, first of all, to check the types, then, as before, to get rid of them, along with the rest of the comments. The typed lambda-calculus can be used as a mathematical model for this situation; the role of the machine language is played by the pure lambda-calculus. The type systems that are then considered are, in general, much richer than those of the actual programming languages; in fact, the types could almost be complete specifications of the programs, while the type checking (compilation) would be a "program proof".

These remarks are sufficient to explain the great interest there would be in constructing a programming language based on typed lambda-calculus; but the problems, theoretical and practical, of such an enterprise are far from being fully resolved.

This book is the product of a D.E.A. (postgraduate) course at the University of Paris 7. I would like to thank the students and researchers of the "Equipe de Logique" of Paris 7, for their comments and their contributions to the early versions of the manuscript, in particular Marouan Ajlani, René Cori, Jean-Yves Girard and Michel Parigot.
Finally, it gives me much pleasure to dedicate this book to my daughter Sonia.

Paris, 1990

I also want to thank Darij Grinberg and Robert Solovay, who have corrected errors in the proofs of corollary 1.3 and theorem 7.16.

Paris, 2011
Chapter 1

Substitution and beta-conversion

The terms of the λ-calculus (also called λ-terms) are finite sequences formed with the following symbols: variables x, y, ... (the set of variables is assumed to be countable), left and right parentheses, and the letter λ. They are obtained by applying, a finite number of times, the following rules:
• any variable x is a λ-term;
• whenever t and u are λ-terms, then so is (t)u;
• whenever t is a λ-term and x is a variable, then λx t is a λ-term.
The set of all terms of the λ-calculus will be denoted by L.
The term (t)u should be thought of as "t applied to u"; it will also be denoted by tu if there is no ambiguity; the term (...(((t)u1)u2)...)uk will also be written (t)u1u2...uk or tu1u2...uk. Thus, for example, (t)uv, (tu)v and tuv denote the same term. By convention, when k = 0, (t)u1u2...uk will denote the term t.
The free occurrences of a variable x in a term t are defined, by induction, as follows: if t is the variable x, then the occurrence of x in t is free; if t = (u)v, then the free occurrences of x in t are those of x in u and v; if t = λy u, the free occurrences of x in t are those of x in u, except if x = y; in that case, no occurrence of x in t is free.
A free variable in t is a variable which has at least one free occurrence in t. A term which has no free variable is called a closed term. A bound variable in t is a variable which occurs in t just after the symbol λ.
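The grammar above is easy to mirror in code. A minimal sketch in Python (the tuple encoding and function names are my own, not the book's): λ-terms as nested tuples ('var', x), ('app', t, u), ('lam', x, t), with the definitions of free variables and closed terms transcribed directly.

```python
# λ-terms as nested tuples (an assumed encoding, not the book's notation):
#   ('var', x)       a variable x
#   ('app', t, u)    the application (t)u
#   ('lam', x, t)    the abstraction λx t

def free_vars(t):
    """Variables with at least one free occurrence in t."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return free_vars(t[1]) | free_vars(t[2])
    # λx u: the occurrences of x in u are no longer free
    return free_vars(t[2]) - {t[1]}

def is_closed(t):
    """A closed term has no free variable."""
    return not free_vars(t)

# λx (x)y : here x is bound and y is free
t = ('lam', 'x', ('app', ('var', 'x'), ('var', 'y')))
```

On this example, free_vars(t) is {'y'}, in accordance with the definition: the λ binds the occurrence of x but not that of y.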
1. Simple substitution

Let t, t1, ..., tk be terms and x1, ..., xk distinct variables; we define the term t<t1/x1, ..., tk/xk> as the result of the replacement of every free occurrence of xi in t by ti (1 ≤ i ≤ k). The definition is by induction on t, as follows:
if t = xi (1 ≤ i ≤ k), then t<t1/x1, ..., tk/xk> = ti;
if t is a variable ≠ x1, ..., xk, then t<t1/x1, ..., tk/xk> = t;
if t = (u)v, then t<t1/x1, ..., tk/xk> = (u<t1/x1, ..., tk/xk>)v<t1/x1, ..., tk/xk>;
if t = λxi u (1 ≤ i ≤ k), then t<t1/x1, ..., tk/xk> = λxi u<t1/x1, ..., ti−1/xi−1, ti+1/xi+1, ..., tk/xk>;
if t = λx u, with x ≠ x1, ..., xk, then t<t1/x1, ..., tk/xk> = λx u<t1/x1, ..., tk/xk>.
Such a substitution will be called a simple one, in order to distinguish it from the substitution defined further on, which needs a change of bound variables. Simple substitution corresponds, in computer science, to the notion of macro-instruction. It is also called substitution with capture of variables.
With the notation t<t1/x1, ..., tk/xk>, it is understood that x1, ..., xk are distinct variables. Moreover, their order does not matter; in other words:
t<t1/x1, ..., tk/xk> = t<tσ(1)/xσ(1), ..., tσ(k)/xσ(k)> for any permutation σ of {1, ..., k}.
The proof is immediate, by induction on the length of t; also immediate is the following:
If t1, ..., tk are variables, then the term t<t1/x1, ..., tk/xk> has the same length as t.

Lemma 1.1. If the variable x1 is not free in the term t of L, then:
t<t1/x1, ..., tk/xk> = t<t2/x2, ..., tk/xk>.

Proof by induction on t. The result is clear when t is either a variable or a term of the form (u)v. Now suppose t = λx u; then:
if x = x1, then: t<t1/x1, ..., tk/xk> = λx1 u<t2/x2, ..., tk/xk> = t<t2/x2, ..., tk/xk>;
if x = xi with i ≠ 1, say x = xk, then: t<t1/x1, ..., tk/xk> = λxk u<t1/x1, ..., tk−1/xk−1> = λxk u<t2/x2, ..., tk−1/xk−1> (by induction hypothesis, since x1 is not free in u) = t<t2/x2, ..., tk/xk>;
if x ≠ x1, ..., xk, then: t<t1/x1, ..., tk/xk> = λx u<t1/x1, ..., tk/xk> = λx u<t2/x2, ..., tk/xk> (by induction hypothesis, since x1 is not free in u) = t<t2/x2, ..., tk/xk>.
Q.E.D.
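The five clauses of this definition can be transcribed almost verbatim. A sketch, assuming λ-terms encoded as nested tuples ('var', x), ('app', t, u), ('lam', x, t), with the substitution given as a dictionary from the distinct variables xi to the terms ti (encoding and names are mine):

```python
def simple_subst(t, s):
    """t<t1/x1, ..., tk/xk>: replace every free occurrence of each xi by ti,
    with possible capture of variables. s maps variable names to terms."""
    if t[0] == 'var':
        return s.get(t[1], t)          # clauses 1 and 2
    if t[0] == 'app':                  # clause 3
        return ('app', simple_subst(t[1], s), simple_subst(t[2], s))
    x, u = t[1], t[2]
    # clauses 4 and 5: under λx, the entry for x itself (if any) is dropped
    s2 = {y: v for y, v in s.items() if y != x}
    return ('lam', x, simple_subst(u, s2))

# capture at work: (λy x)<y/x> = λy y
t = ('lam', 'y', ('var', 'x'))
assert simple_subst(t, {'x': ('var', 'y')}) == ('lam', 'y', ('var', 'y'))
```

The final assertion exhibits the capture: the y substituted for the free x of λy x falls under the λy.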
Remark. Usually, in textbooks on λ-calculus (for example in [Bar84]), the simple substitution is considered for only one variable. In a substitution such as t<u/x>, the term t is then called a context or a term with holes; the free occurrences of the variable x in t are called holes and denoted by [ ]. The term t<u/x> is then denoted as t[u] and is called the result of the "substitution of the term u in the holes of the context t".
The major problem about simple substitution is that it is not stable under composition; if you consider two substitutions <t1/x1, ..., tm/xm> and <u1/y1, ..., un/yn>, then the application t ↦ t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> is not, in general, given by a substitution. For instance, we have:
y<y/x><x/y> = x and z<y/x><x/y> = z for every variable z ≠ y.
Thus, if the operation <y/x><x/y> was a substitution, it would be <x/y>. But this is false, because λy x<y/x><x/y> = λy y and λy x<x/y> = λy x.
In the following lemma, we give a partial answer to this problem. The definitive answer is given in the next section, with a new kind of substitution, which is stable by composition.

Lemma 1.2. Let {x1, ..., xm}, {y1, ..., yn} be two finite sets of variables, and suppose that their common elements are x1 = y1, ..., xk = yk. Let t, t1, ..., tm, u1, ..., un be terms of L, and assume that no free variable of t1, ..., tm is bound in t. Then:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = t<t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn>
where t′i = ti<u1/y1, ..., un/yn>.

Proof by induction on the length of t:
i) t is a variable: the possible cases are t = xi (1 ≤ i ≤ m), t = yj (k+1 ≤ j ≤ n), or t is another variable. In each of them, the result is immediate.
ii) t = (u)v; the result is obvious, by applying the induction hypothesis to u and v.
iii) t = λx u; we first observe that the result follows immediately from the induction hypothesis for u, if x ≠ x1, ..., xm, y1, ..., yn.
If x = xi (1 ≤ i ≤ k), say x = x1, then: t<t1/x1, ..., tm/xm> = λx1 u<t2/x2, ..., tm/xm>. Since x1 = y1, we have:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = λx1 u<t2/x2, ..., tm/xm><u2/y2, ..., un/yn>.
By the induction hypothesis for u, we get:
u<t2/x2, ..., tm/xm><u2/y2, ..., un/yn> = u<t″2/x2, ..., t″m/xm, uk+1/yk+1, ..., un/yn> with t″i = ti<u2/y2, ..., un/yn>.
But, since x1 = y1 is bound in t, by hypothesis, it is not a free variable of ti. From lemma 1.1, it follows that t″i = ti<u1/y1, ..., un/yn> = t′i. Therefore:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = λx1 u<t′2/x2, ..., t′m/xm, uk+1/yk+1, ..., un/yn> = t<t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn>.
If x = xi (k+1 ≤ i ≤ m), say x = xm, then: t<t1/x1, ..., tm/xm> = λxm u<t1/x1, ..., tm−1/xm−1>, and since xm ≠ y1, ..., yn, we get:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = λxm u<t1/x1, ..., tm−1/xm−1><u1/y1, ..., un/yn>.
By the induction hypothesis for u, we get:
u<t1/x1, ..., tm−1/xm−1><u1/y1, ..., un/yn> = u<t′1/x1, ..., t′m−1/xm−1, uk+1/yk+1, ..., un/yn>.
Therefore t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = λxm u<t′1/x1, ..., t′m−1/xm−1, uk+1/yk+1, ..., un/yn> = t<t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn>.
If x = yj (k+1 ≤ j ≤ n), say x = yn, then: t<t1/x1, ..., tm/xm> = λyn u<t1/x1, ..., tm/xm>, since yn ≠ x1, ..., xm. Therefore:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = λyn u<t1/x1, ..., tm/xm><u1/y1, ..., un−1/yn−1>.
By the induction hypothesis for u, we get:
u<t1/x1, ..., tm/xm><u1/y1, ..., un−1/yn−1> = u<t″1/x1, ..., t″m/xm, uk+1/yk+1, ..., un−1/yn−1> with t″i = ti<u1/y1, ..., un−1/yn−1>.
But, since yn is bound in t, by hypothesis, it is not a free variable of ti. From lemma 1.1, it follows that t″i = ti<u1/y1, ..., un/yn> = t′i. Therefore:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = λyn u<t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un−1/yn−1> = t<t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn>.
Q.E.D.
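The failure of composition discussed before lemma 1.2 can be checked mechanically. A small self-contained sketch (λ-terms as nested tuples ('var', x), ('app', t, u), ('lam', x, t), an encoding of mine): the composite <y/x><x/y> agrees with the single substitution <x/y> on every variable, yet the two differ at λy x.

```python
def simple_subst(t, s):
    """Simple (capture-permitting) substitution; s maps variable names to terms."""
    if t[0] == 'var':
        return s.get(t[1], t)
    if t[0] == 'app':
        return ('app', simple_subst(t[1], s), simple_subst(t[2], s))
    s2 = {y: v for y, v in s.items() if y != t[1]}
    return ('lam', t[1], simple_subst(t[2], s2))

comp = lambda t: simple_subst(simple_subst(t, {'x': ('var', 'y')}),   # <y/x>
                              {'y': ('var', 'x')})                    # then <x/y>
single = lambda t: simple_subst(t, {'y': ('var', 'x')})               # <x/y>

# the two operations agree on every variable ...
for v in ('x', 'y', 'z'):
    assert comp(('var', v)) == single(('var', v))
# ... but not on λy x, because of the capture in the first step
t = ('lam', 'y', ('var', 'x'))
assert comp(t) == ('lam', 'y', ('var', 'y'))      # λy x<y/x><x/y> = λy y
assert single(t) == ('lam', 'y', ('var', 'x'))    # λy x<x/y> = λy x
```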
Corollary 1.3. Let t, t1, ..., tm be λ-terms, and {x1, ..., xm}, {y1, ..., ym} two sets of variables such that none of the yi's occur in t. Then:
t<t1/x1, ..., tm/xm> = t<y1/x1, ..., ym/xm><t1/y1, ..., tm/ym>.

Suppose that x1, ..., xk ∉ {y1, ..., ym} and xk+1, ..., xm ∈ {y1, ..., ym}. Then xk+1, ..., xm are not free in t (since none of the yi's occur in t) and therefore, by lemma 1.1, we have:
t<y1/x1, ..., ym/xm> = t<y1/x1, ..., yk/xk>.
The two sets {x1, ..., xk} and {y1, ..., ym} are disjoint, and the variables y1, ..., ym are not bound in t. Therefore, by lemma 1.2, we have:
t<y1/x1, ..., yk/xk><t1/y1, ..., tm/ym> = t<t1/x1, ..., tk/xk, t1/y1, ..., tm/ym> (each yi<t1/y1, ..., tm/ym> being equal to ti). But y1, ..., ym are not free in t, and therefore, by lemma 1.1:
t<t1/x1, ..., tk/xk, t1/y1, ..., tm/ym> = t<t1/x1, ..., tk/xk>.
Now xk+1, ..., xm are not free in t; thus, again by lemma 1.1:
t<t1/x1, ..., tk/xk> = t<t1/x1, ..., tm/xm>.
Q.E.D.
Let R be a binary relation on L; we will say that R is λ-compatible if it is reflexive and satisfies:
t R t′ ⇒ λx t R λx t′;
t R t′, u R u′ ⇒ (t)u R (t′)u′.

Remark. A binary relation R is λ-compatible if and only if:
x R x for each variable x;
t R t′ ⇒ λx t R λx t′;
t R t′, u R u′ ⇒ (t)u R (t′)u′ for all terms t, u, t′, u′.
Indeed, t R t is easily proved, by induction on the length of t.
Lemma 1.4. If R is λ-compatible and t1 R t′1, ..., tk R t′k, then:
t<t1/x1, ..., tk/xk> R t<t′1/x1, ..., t′k/xk>.
Immediate proof by induction on the length of t.
Q.E.D.
Proposition 1.5. Let R be a binary relation on L. Then, the least λ-compatible binary relation ρ containing R is defined by the following condition:
(1) t ρ t′ ⇔ there exist terms T, t1, ..., tk, t′1, ..., t′k and distinct variables x1, ..., xk such that ti R t′i (1 ≤ i ≤ k) and t = T<t1/x1, ..., tk/xk>, t′ = T<t′1/x1, ..., t′k/xk>.

Let ρ′ be the least λ-compatible binary relation containing R, and ρ the relation defined by condition (1) above. It follows from the previous lemma that ρ′ ⊃ ρ. It is easy to see that ρ ⊃ R (take T = x1). It thus remains to prove that ρ is λ-compatible.
By taking k = 0 in condition (1), we see that ρ is reflexive.
Suppose t = T<t1/x1, ..., tk/xk>, t′ = T<t′1/x1, ..., t′k/xk>. Let y1, ..., yk be distinct variables not occurring in T. Let V = T<y1/x1, ..., yk/xk>. Then, it follows from corollary 1.3 that t = V<t1/y1, ..., tk/yk> and t′ = V<t′1/y1, ..., t′k/yk>. Thus the distinct variables x1, ..., xk in condition (1) can be arbitrarily chosen, except in some finite set.
Now suppose t ρ t′ and u ρ u′; then:
t = T<t1/x1, ..., tk/xk>, t′ = T<t′1/x1, ..., t′k/xk> with ti R t′i;
u = U<u1/y1, ..., ul/yl>, u′ = U<u′1/y1, ..., u′l/yl> with uj R u′j.
By the previous remark, we can assume that x1, ..., xk, y1, ..., yl are distinct, different from x, and also that none of the xi's occur in U, and none of the yj's occur in T. Therefore:
λx t = (λx T)<t1/x1, ..., tk/xk>, λx t′ = (λx T)<t′1/x1, ..., t′k/xk>, which proves that λx t ρ λx t′.
Also, by lemma 1.1:
t = T<t1/x1, ..., tk/xk, u1/y1, ..., ul/yl>, t′ = T<t′1/x1, ..., t′k/xk, u′1/y1, ..., u′l/yl> (since none of the yj's occur in T); and similarly:
u = U<t1/x1, ..., tk/xk, u1/y1, ..., ul/yl>, u′ = U<t′1/x1, ..., t′k/xk, u′1/y1, ..., u′l/yl> (since none of the xi's occur in U).
Let V = (T)U; then (t)u = V<t1/x1, ..., tk/xk, u1/y1, ..., ul/yl>, (t′)u′ = V<t′1/x1, ..., t′k/xk, u′1/y1, ..., u′l/yl>, and thus (t)u ρ (t′)u′.
Q.E.D.
2. Alpha-equivalence and substitution

We will now define an equivalence relation on the set L of all λ-terms. It is called α-equivalence, and denoted by ≡. Intuitively, t ≡ t′ means that t′ is obtained from t by renaming the bound variables in t; more precisely, t ≡ t′ if and only if t and t′ have the same sequence of symbols (when all variables are considered equal), the same free occurrences of the same variables, and if each λ binds the same occurrences of variables in t and in t′.
We define t ≡ t′, on L, by induction on the length of t, by the following clauses:
if t is a variable, then t ≡ t′ if and only if t = t′;
if t = (u)v, then t ≡ t′ if and only if t′ = (u′)v′, with u ≡ u′ and v ≡ v′;
if t = λx u, then t ≡ t′ if and only if t′ = λx′ u′, with u<y/x> ≡ u′<y/x′> for all variables y except a finite number.
(Note that u<y/x> has the same length as u, thus is shorter than t, which guarantees the correctness of the inductive definition.)

Proposition 1.6. If t ≡ t′, then t and t′ have the same length and the same free variables.

The proof is done by induction on the length of t. The cases when t is a variable, or t = (u)v, are trivial. Suppose now that t = λx u and therefore t′ = λx′ u′. Thus, we have:
u<y/x> ≡ u′<y/x′> for every variable y except a finite number.
We choose a variable y ≠ x, x′ which, moreover, does not appear (free or bound) in u, u′. Let U (resp. U′) be the set of free variables of u (resp. u′). The set V of free variables of u<y/x> is U if x ∉ U, and (U \ {x}) ∪ {y} if x ∈ U. Also, the set V′ of free variables of u′<y/x′> is U′ if x′ ∉ U′, and (U′ \ {x′}) ∪ {y} if x′ ∈ U′. Now, we have V = V′, by the induction hypothesis.
If x ∉ U, we have y ∉ V, thus y ∉ V′ and x′ ∉ U′. Thus U = V = V′ = U′ and λx u, λx′ u′ have the same set of free variables, which is U.
If x ∈ U, then y ∈ V, thus y ∈ V′ and therefore x′ ∈ U′. The set of free variables of λx u (resp. λx′ u′) is U \ {x} = V \ {y} (resp. U′ \ {x′} = V′ \ {y}). Since V = V′, it is, once again, the same set.
Q.E.D.
The relation ≡ is an equivalence relation on L. Indeed, the proof of the three following properties is trivial, by induction on t:
t ≡ t; t ≡ t′ ⇒ t′ ≡ t; t ≡ t′, t′ ≡ t″ ⇒ t ≡ t″.

Proposition 1.7. Let t, t′, t1, t′1, ..., tk, t′k be λ-terms, and x1, ..., xk distinct variables. If t ≡ t′, t1 ≡ t′1, ..., tk ≡ t′k and if no free variable in t1, ..., tk is bound in t, t′, then t<t1/x1, ..., tk/xk> ≡ t′<t′1/x1, ..., t′k/xk>.

Note that, since t ≡ t′, t and t′ have the same free variables. Thus it can be assumed that x1, ..., xk are free in t and t′; indeed, if x1, ..., xl are those xi variables which are free in t and t′, then, by lemma 1.1:
t<t1/x1, ..., tk/xk> = t<t1/x1, ..., tl/xl> and t′<t′1/x1, ..., t′k/xk> = t′<t′1/x1, ..., t′l/xl>.
Also, since ti ≡ t′i, ti and t′i have the same free variables. Therefore, no free variable in t1, t′1, ..., tk, t′k is bound in t, t′.
The proof of the proposition proceeds by induction on t. The result is immediate if t is a variable, or t = (u)v.
Suppose t = λx u. Then t′ = λx′ u′ and u<y/x> ≡ u′<y/x′> for all variables y except a finite number. Since x1, ..., xk are free in t and t′, x and x′ are different from x1, ..., xk. Thus t<t1/x1, ..., tk/xk> = λx u<t1/x1, ..., tk/xk> and t′<t′1/x1, ..., t′k/xk> = λx′ u′<t′1/x1, ..., t′k/xk>. Hence it is sufficient to show that:
u<t1/x1, ..., tk/xk><y/x> ≡ u′<t′1/x1, ..., t′k/xk><y/x′> for all variables y except a finite number.
Therefore, we may assume that y ≠ x1, ..., xk. Since x, x′ are respectively bound in t, t′, they are not free in t1, ..., tk, t′1, ..., t′k; thus, it follows from lemma 1.2 that:
u<t1/x1, ..., tk/xk><y/x> = u<t1/x1, ..., tk/xk, y/x> and u′<t′1/x1, ..., t′k/xk><y/x′> = u′<t′1/x1, ..., t′k/xk, y/x′>.
Since y ≠ x1, ..., xk, we get, applying again lemma 1.2:
u<y/x><t1/x1, ..., tk/xk> = u<t1/x1, ..., tk/xk, y/x> and u′<y/x′><t′1/x1, ..., t′k/xk> = u′<t′1/x1, ..., t′k/xk, y/x′>,
and therefore:
u<t1/x1, ..., tk/xk><y/x> = u<y/x><t1/x1, ..., tk/xk> and u′<t′1/x1, ..., t′k/xk><y/x′> = u′<y/x′><t′1/x1, ..., t′k/xk>.
Now, since u<y/x> ≡ u′<y/x′> for all variables y except a finite number, and u<y/x> is shorter than t, the induction hypothesis gives:
u<y/x><t1/x1, ..., tk/xk> ≡ u′<y/x′><t′1/x1, ..., t′k/xk>, thus:
u<t1/x1, ..., tk/xk><y/x> ≡ u′<t′1/x1, ..., t′k/xk><y/x′> for all variables y except a finite number.
Q.E.D.
Corollary 1.8. The relation ≡ is λ-compatible.

Suppose t ≡ t′. We need to prove that λx t ≡ λx t′, that is to say:
t<y/x> ≡ t′<y/x> for all variables y except a finite number.
But this follows from proposition 1.7, provided that y is not a bound variable in t or in t′.
Q.E.D.
Corollary 1.9. If t, t1, ..., tk, t′1, ..., t′k are terms, and x1, ..., xk are distinct variables, then:
t1 ≡ t′1, ..., tk ≡ t′k ⇒ t<t1/x1, ..., tk/xk> ≡ t<t′1/x1, ..., t′k/xk>.
This follows from corollary 1.8 and lemma 1.4.
Q.E.D.
However, note that it is not true that u ≡ u′ ⇒ u<t/x> ≡ u′<t/x>. For example, λy x ≡ λz x, while λy x<y/x> = λy y ≢ λz x<y/x> = λz y.

Lemma 1.10. λx t ≡ λy t<y/x> whenever y is a variable which does not occur in t.

By corollary 1.3, t<z/x> = t<y/x><z/y> for any variable z, since y does not occur in t. Hence the result follows from the definition of ≡.
Q.E.D.
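The inductive definition of ≡ is executable: for the abstraction clause it suffices to compare at a single variable y occurring in neither term, since all but finitely many y give the same answer. A sketch, assuming λ-terms encoded as nested tuples ('var', x), ('app', t, u), ('lam', x, t); the function names and the reserved '_y' prefix for fresh variables are my own assumptions.

```python
from itertools import count

def simple_subst(t, s):
    """Simple substitution t<t1/x1, ..., tk/xk>; safe here because the
    substituted term is always a fresh variable, so no capture can occur."""
    if t[0] == 'var':
        return s.get(t[1], t)
    if t[0] == 'app':
        return ('app', simple_subst(t[1], s), simple_subst(t[2], s))
    s2 = {y: v for y, v in s.items() if y != t[1]}
    return ('lam', t[1], simple_subst(t[2], s2))

_fresh = count()

def alpha_eq(t, u):
    """t ≡ u, following the three clauses of the definition."""
    if t[0] != u[0]:
        return False
    if t[0] == 'var':
        return t[1] == u[1]
    if t[0] == 'app':
        return alpha_eq(t[1], u[1]) and alpha_eq(t[2], u[2])
    # λx t0 ≡ λx' u0 iff t0<y/x> ≡ u0<y/x'> at a fresh variable y
    y = ('var', '_y%d' % next(_fresh))  # assumes '_y' never names a source variable
    return alpha_eq(simple_subst(t[2], {t[1]: y}),
                    simple_subst(u[2], {u[1]: y}))

# λy x ≡ λz x, as in the counterexample above
assert alpha_eq(('lam', 'y', ('var', 'x')), ('lam', 'z', ('var', 'x')))
```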
Lemma 1.11. Let t be a term, and x1, ..., xk be variables. Then there exists a term t′, t′ ≡ t, such that none of x1, ..., xk are bound in t′.

The proof is by induction on t. The result is immediate if t is a variable, or if t = (u)v. If t = λx u, then, by induction hypothesis, there exists a term u′, u′ ≡ u, in which none of x1, ..., xk are bound. By the previous lemma, t ≡ λx u′ ≡ λy u′<y/x>, where y ≠ x1, ..., xk is chosen not occurring in u′. Thus it is sufficient to take t′ = λy u′<y/x>.
Q.E.D.
From now on, α-equivalent terms will be identified; hence we will deal with the quotient set L/≡; it is denoted by Λ. For each variable x, its equivalence class will still be denoted by x (it is actually {x}). Furthermore, the operations t, u ↦ (t)u and t, x ↦ λx t are compatible with ≡, and are therefore defined in Λ. Moreover, if t ≡ t′, then t and t′ have the same free variables. Hence it is possible to define the free variables of a member of Λ.
Consider terms t, t1, ..., tk ∈ Λ and distinct variables x1, ..., xk. Then the term t[t1/x1, ..., tk/xk] ∈ Λ (being the result of the replacement of every free occurrence of xi in t by ti, for i = 1, ..., k) is defined as follows: choose terms of L, still denoted by t, t1, ..., tk, whose equivalence classes are respectively the given terms. By lemma 1.11, we may assume that no bound variable of t is free in t1, ..., tk. Then t[t1/x1, ..., tk/xk] is defined as the equivalence class of t<t1/x1, ..., tk/xk>. Indeed, by proposition 1.7, this equivalence class does not depend on the choice of t, t1, ..., tk.
So the substitution operation t, t1, ..., tk ↦ t[t1/x1, ..., tk/xk] is well defined in Λ. It corresponds to the replacement of the free occurrences of xi in t by ti (1 ≤ i ≤ k), provided that a representative of t has been chosen such that no free variable in t1, ..., tk is bound in it.
The substitution operation satisfies the following lemmas, already stated for the simple substitution:

Lemma 1.12. If the variable x1 is not free in the term t of Λ, then:
t[t1/x1, ..., tk/xk] = t[t2/x2, ..., tk/xk].
Immediate from lemma 1.1 and the definition of t[t1/x1, ..., tk/xk].
Q.E.D.
The following lemma shows that the substitution behaves much better in Λ than in L (compare with lemma 1.2). In particular, it shows that the composition of two substitutions gives a substitution.

Lemma 1.13. Let {x1, ..., xm}, {y1, ..., yn} be two finite sets of variables, and suppose that their common elements are x1 = y1, ..., xk = yk. Let t, t1, ..., tm, u1, ..., un be terms of Λ. Then:
t[t1/x1, ..., tm/xm][u1/y1, ..., un/yn] = t[t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn]
where t′i = ti[u1/y1, ..., un/yn].

Choose representatives in L of t, t1, ..., tm, u1, ..., un, denoted by the same letters. By lemma 1.11, we may assume that no bound variable of t is free in t1, ..., tm, u1, ..., un, and that no bound variable of t1, ..., tm is free in u1, ..., un.
From lemma 1.2, we get:
t<t1/x1, ..., tm/xm><u1/y1, ..., un/yn> = t<t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn> where t′i = ti<u1/y1, ..., un/yn>.
The first member is a representative of t[t1/x1, ..., tm/xm][u1/y1, ..., un/yn], because t<t1/x1, ..., tm/xm> is a representative of t[t1/x1, ..., tm/xm], and no bound variable of this term is free in u1, ..., un. The second member is a representative of t[t′1/x1, ..., t′m/xm, uk+1/yk+1, ..., un/yn], since no bound variable of t is free in t′1, ..., t′m, uk+1, ..., un.
Q.E.D.
Corollary 1.14. Any free variable of t[t1/x1, ..., tm/xm] is free in t or in t1 or ... or in tm.

Let x be a variable which is not free in t, t1, ..., tm. By lemma 1.13, we have:
t[t1/x1, ..., tm/xm][y/x] = t[t1/x1, ..., tm/xm] for any variable y.
This shows that x is not free in t[t1/x1, ..., tm/xm].
Q.E.D.
Lemma 1.15. Let x, x′ be variables and u, u′ ∈ Λ be such that λx u = λx′ u′. Then u[t/x] = u′[t/x′] for every t ∈ Λ.

Choose representatives in L of u, u′, denoted by the same letters. Then λx u ≡ λx′ u′ and, by definition of the α-equivalence, we have u<y/x> ≡ u′<y/x′> for every variable y but a finite number. If we suppose that y is not bound in u, u′, we see that u[y/x] = u′[y/x′] (as terms of Λ) for every variable y but a finite number; therefore u[y/x][t/y] = u′[y/x′][t/y]. If we suppose, moreover, that y is different from x, x′, then, by lemma 1.13, we get:
u[t/x, t/y] = u′[t/x′, t/y].
Assume now that y is not free in u, u′; then, by lemma 1.12, we obtain u[t/x] = u′[t/x′].
Q.E.D.
Proposition 1.16. Let t ∈ Λ be such that t = λx u. Then, for every variable x′ which is not free in t, there exists a unique u′ ∈ Λ such that t = λx′ u′; it is given by u′ = u[x′/x].

Remark. Clearly, if x′ is a free variable of t, we cannot have t = λx′ u′.
If λx u = λx′ u′, then u[x′/x] = u′[x′/x′] = u′, by lemma 1.15. We prove now that, if u′ = u[x′/x], then λx u = λx′ u′. We may assume that x and x′ are different, the result being trivial otherwise. Choose a representative of u in L, still denoted by u, in which the variable x′ is not bound. Then u<x′/x> is a representative of u′. It is sufficient to show that λx u ≡ λx′ u<x′/x>, that is to say u<y/x> ≡ u<x′/x><y/x′> for every variable y but a finite number. By corollary 1.3, we get:
u<y/x> = u<x′/x><y/x′>, since the variable x′ does not occur in u: indeed, it is not bound in u by hypothesis, and it is not free in u, because it is not free in t = λx u.
Q.E.D.
We can now give the following inductive definition of the operation of substitution [t1/x1, ..., tk/xk], which is useful for inductive reasoning:
xi[t1/x1, ..., tk/xk] = ti for 1 ≤ i ≤ k;
if x is a variable different from x1, ..., xk, then x[t1/x1, ..., tk/xk] = x;
if t = uv, then t[t1/x1, ..., tk/xk] = (u[t1/x1, ..., tk/xk])v[t1/x1, ..., tk/xk];
if t = λx u, we may assume that x is not free in t1, ..., tk and different from x1, ..., xk (proposition 1.16). Then t[t1/x1, ..., tk/xk] = λx(u[t1/x1, ..., tk/xk]).
We need only to prove the last case: choose representatives in L of u, t1, ..., tk, denoted by the same letters, such that no free variable of t1, ..., tk is bound in u. Then t = λx u is a representative of t; and τ = t<t1/x1, ..., tk/xk> is a representative of t[t1/x1, ..., tk/xk], since the bound variables of t are x and the bound variables of u, and x is not free in t1, ..., tk. Now τ = λx u<t1/x1, ..., tk/xk>, since x ≠ x1, ..., xk. The result follows, because u<t1/x1, ..., tk/xk> is a representative of u[t1/x1, ..., tk/xk].

We now define the notion of λ-compatibility on Λ: if R is a binary relation on Λ, we will say that R is λ-compatible if it satisfies:
x R x for each variable x;
t R t′ ⇒ λx t R λx t′;
t R t′, u R u′ ⇒ (t)u R (t′)u′.
A λ-compatible relation is necessarily reflexive. Indeed, we have:

Lemma 1.17. If R is λ-compatible and t1 R t′1, ..., tk R t′k, then:
t[t1/x1, ..., tk/xk] R t[t′1/x1, ..., t′k/xk].
Immediate proof by induction on the length of t.
Q.E.D.
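These inductive clauses translate into the familiar capture-avoiding substitution on representatives: when the bound variable x is free in some ti, it is first renamed to a fresh variable, as lemma 1.11 and proposition 1.16 allow. A sketch, assuming λ-terms encoded as nested tuples ('var', x), ('app', t, u), ('lam', x, t); the names and the reserved '_z' prefix for fresh variables are my own assumptions.

```python
from itertools import count

_fresh = count()

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}

def subst(t, s):
    """t[t1/x1, ..., tk/xk] on α-equivalence classes: bound variables are
    renamed so that no free variable of the ti is captured."""
    if t[0] == 'var':
        return s.get(t[1], t)
    if t[0] == 'app':
        return ('app', subst(t[1], s), subst(t[2], s))
    x, u = t[1], t[2]
    # drop the entry for x itself and entries for variables not free in u
    s = {y: v for y, v in s.items() if y != x and y in free_vars(u)}
    if any(x in free_vars(v) for v in s.values()):
        z = '_z%d' % next(_fresh)        # fresh, by the reserved prefix
        u, x = subst(u, {x: ('var', z)}), z
    return ('lam', x, subst(u, s))

# (λy x)[y/x] = λz y for some fresh z — no capture, unlike simple substitution
r = subst(('lam', 'y', ('var', 'x')), {'x': ('var', 'y')})
assert r[0] == 'lam' and r[1] != 'y' and r[2] == ('var', 'y')
```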
3. Beta-conversion

Let R be a binary relation on an arbitrary set E; the least transitive and reflexive binary relation which contains R is obviously the relation R′ defined as follows:
t R′ u ⇔ there exists a finite sequence t = v0, v1, ..., vn−1, vn = u of elements of E such that vi R vi+1 (0 ≤ i < n).
R′ is called the transitive closure of R.
We say that the binary relation R on E satisfies the Church-Rosser (C.R.) property if and only if: for every t, u, u′ ∈ E such that t R u and t R u′, there exists some v ∈ E such that u R v and u′ R v.

Lemma 1.18. Let R be a binary relation which satisfies the Church-Rosser property. Then the transitive closure of R also satisfies it.

Let R′ be that transitive closure. We will first prove the following property:
t R′ u, t R u′ ⇒ for some v, u R v and u′ R′ v.
t R′ u means that there exists a sequence t = v0, v1, ..., vn−1, vn = u such that vi R vi+1 (0 ≤ i < n). The proof is by induction on n; the case n = 1 is just the hypothesis of the lemma. Now since t R′ vn−1 and t R u′, for some w, vn−1 R w and u′ R′ w. But vn−1 R u, so u R v and w R v for some v (C.R. property for R). Therefore u′ R′ v, which gives the result.
Now we can prove the lemma: the assumption is t R′ u and t R′ u′, so there exists a sequence t = v0, v1, ..., vn−1, vn = u′ such that vi R vi+1 (0 ≤ i < n). The proof is by induction on n: the case n = 1 has just been settled. Since t R′ u and t R′ vn−1, by induction hypothesis, we have u R′ w and vn−1 R′ w for some w. Now vn−1 R u′, so, by the previous property, w R v and u′ R′ v for some v. Thus u R′ v.
Q.E.D.
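For a relation on a finite set, the closure R′ and the Church-Rosser property can be computed by brute force. A small self-contained sketch (the function names are mine):

```python
def closure(pairs, elements):
    """Reflexive-transitive closure R' of a finite relation R."""
    c = {(a, a) for a in elements} | set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(c):
            for (b2, d) in list(c):
                if b == b2 and (a, d) not in c:
                    c.add((a, d))     # compose a R' b and b R' d
                    changed = True
    return c

def church_rosser(r, elements):
    """Does r satisfy C.R.: t r u, t r u' ⇒ some v with u r v and u' r v?"""
    return all(any((u, v) in r and (u2, v) in r for v in elements)
               for (t, u) in r for (t2, u2) in r if t == t2)

r = closure({(1, 2), (2, 3)}, {1, 2, 3})
assert (1, 3) in r and (1, 1) in r
assert church_rosser(r, {1, 2, 3})   # the closure of a chain is confluent
```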
In the following, we consider binary relations on the set Λ of λ-terms.

Proposition 1.19. If t, u, t′, u′ ∈ Λ and (λx u)t = (λx′ u′)t′, then u[t/x] = u′[t′/x′].

This is the same as lemma 1.15, since (λx u)t = (λx′ u′)t′ if and only if t = t′ and λx u = λx′ u′.
Q.E.D.
A term of the form (λx u)t is called a redex, and u[t/x] is called its contractum. Proposition 1.19 shows that this notion is correctly defined on Λ.
A binary relation β0 will now be defined on Λ; t β0 t′ should be read as: "t′ is obtained by contracting a redex (or by a β-reduction) in t". The definition is by induction on t:
if t is a variable, then there is no t′ such that t β0 t′;
if t = (u)v, then t β0 t′ if and only if either t′ = (u)v′ with v β0 v′, or t′ = (u′)v with u β0 u′, or else u = λx w and t′ = w[v/x];
if t = λx u, then t β0 t′ if and only if t′ = λx u′, with u β0 u′.
We must check that, in this last case, the definition of β0 does not depend on the choice of the bound variable x. We show this by induction on the length of t, simultaneously with the following proposition 1.20. We first remark, from the definition of β0 and corollary 1.14, that whenever t β0 t′, any free variable in t′ is also free in t.

Proposition 1.20. If t β0 t′, then t[t1/x1, ..., tk/xk] β0 t′[t1/x1, ..., tk/xk].

For the sake of brevity, we use the notation t̂ for t[t1/x1, ..., tk/xk]. It follows from the definition of β0 that the different possibilities for t, t′ are:
i) t = (u)v and t′ = (u)v′, with v β0 v′. Then, by induction hypothesis, we get v̂ β0 v̂′; hence the result, by definition of β0.
ii) t = (u)v and t′ = (u′)v, with u β0 u′. Same proof.
iii) t = (λx u)v and t′ = u[v/x]. By proposition 1.16, we may assume that x is not free in t1, ..., tk and different from x1, ..., xk.
Then t̂′ = u[v/x][t1/x1, ..., tk/xk] = u[v̂/x, t1/x1, ..., tk/xk] (by lemma 1.13) = u[t1/x1, ..., tk/xk][v̂/x] (by lemma 1.13 and the choice of x) = û[v̂/x].
Now t̂ = (λx û)v̂, and therefore t̂ β0 t̂′.
iv) t = λx u, t′ = λx u′, and u β0 u′. Let us check first that the definition of β0 in this case does not depend on the choice of the bound variable x. Let y be a variable which is not free in t (and thus also not free in t′).
By the induction hypothesis, we have u[y/x] β0 u′[y/x], and therefore λy u[y/x] β0 λy u′[y/x], which is the desired result. Again, we may assume that x is not free in t1, . . . , tk and different from x1, . . . , xk. Then, by induction hypothesis, we get û β0 û′, and therefore λx û β0 λx û′. Finally, by the choice of x, this is the same as: (λx u)[t1/x1, . . . , tk/xk] β0 (λx u′)[t1/x1, . . . , tk/xk]. Q.E.D.
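The inductive definition of β0 translates directly into a program. Below is a minimal Python sketch of ours (not from the book): terms are encoded as tuples ('var', x), ('lam', x, body), ('app', u, v); substitution is naive, assuming as α-equivalence always allows that bound variables are distinct from the free variables of the substituted term; and beta_step deterministically contracts the leftmost redex, although the definition of β0 itself permits contracting any redex.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', u, v)
def subst(t, x, s):
    """Naive t[s/x]; assumes bound variables of t are distinct from x and
    from the free variables of s (always arrangeable up to alpha-equivalence)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def beta_step(t):
    """One beta0-step, contracting the leftmost redex; None if no redex exists."""
    tag = t[0]
    if tag == 'var':
        return None                  # a variable admits no beta0-step
    if tag == 'lam':
        body = beta_step(t[2])
        return None if body is None else ('lam', t[1], body)
    u, v = t[1], t[2]
    if u[0] == 'lam':                # the redex case: (lam x. w)v -> w[v/x]
        return subst(u[2], u[1], v)
    r = beta_step(u)
    if r is not None:
        return ('app', r, v)
    r = beta_step(v)
    return None if r is None else ('app', u, r)

# (lam x.(x)x)y beta0-reduces to (y)y
t = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), ('var', 'y'))
print(beta_step(t))  # ('app', ('var', 'y'), ('var', 'y'))
```

Iterating beta_step until it returns None follows one particular reduction sequence out of the many that β allows.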
The β-conversion is the least binary relation β on Λ which is reflexive, transitive, and contains β0. Thus, we have: t β t′ ⇔ there exists a sequence t = t0, t1, . . . , tn = t′ such that ti β0 ti+1 for 0 ≤ i ≤ n − 1 (n ≥ 0).
Lambda-calculus, types and models
Therefore, whenever t β t′, any free variable in t′ is also free in t. The next two propositions give two simple characterizations of β.

Proposition 1.21. The β-conversion is the least transitive λ-compatible binary relation β such that (λx u)t β u[t/x] for all terms t, u and every variable x.

Clearly, t β0 t′, u β0 u′ ⇒ λx t β0 λx t′ and (u)t β (u′)t′. Hence β is λ-compatible. Conversely, if R is a λ-compatible binary relation such that (λx u)t R u[t/x] for all terms t, u, then it follows immediately from the definition of β0 that R ⊃ β0 (prove t β0 t′ ⇒ t R t′ by induction on t). So, if R is transitive, then R ⊃ β. Q.E.D.
Proposition 1.22. β is the transitive closure of the binary relation ρ defined on Λ by: u ρ u′ ⇔ there exist a term v and redexes t1, . . . , tk with contractums t1′, . . . , tk′ such that u = v[t1/x1, . . . , tk/xk], u′ = v[t1′/x1, . . . , tk′/xk].

Since β is λ-compatible, it follows from lemma 1.17 that β ⊃ ρ, and therefore β contains the transitive closure of ρ. Conversely, the transitive closure of ρ clearly contains β0, and therefore contains β. Q.E.D.
Proposition 1.23. If t β t′, t1 β t1′, . . . , tk β tk′ then: t[t1/x1, . . . , tk/xk] β t′[t1′/x1, . . . , tk′/xk].

Since β is λ-compatible, we have, by lemma 1.17: t[t1/x1, . . . , tk/xk] β t[t1′/x1, . . . , tk′/xk]. Then, we get t[t1′/x1, . . . , tk′/xk] β t′[t1′/x1, . . . , tk′/xk] by proposition 1.20. Q.E.D.
A term t is said to be normal, or to be in normal form, if it contains no redex. So the normal terms are those which are obtained by applying, a finite number of times, the following rules: any variable x is a normal term; whenever t is normal, so is λx t; if t, u are normal and if the first symbol in t is not λ, then (t)u is normal. This definition yields, immediately, the following properties: a term is normal if and only if it is of the form λx1 . . . λxk(x)t1 . . . tn (with k, n ≥ 0), where x is a variable and t1, . . . , tn are normal terms; a term t is normal if and only if there is no term t′ such that t β0 t′. Thus a normal term is "minimal" with respect to β, which means that, whenever t is normal, t β t′ ⇒ t = t′. However the converse is not true: take t = (λx(x)x)λx(x)x; then t β t′ ⇒ t = t′, although t is not normal.
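Encoding terms as Python tuples ('var', x), ('lam', x, body), ('app', u, v) (an illustration of ours, not the book's), the inductive characterization of normal terms becomes a short recursive check:

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', u, v)
def is_normal(t):
    """Normal iff the term contains no redex, following the inductive rules."""
    tag = t[0]
    if tag == 'var':
        return True
    if tag == 'lam':
        return is_normal(t[2])
    # (t)u is normal iff t, u are normal and t does not start with a lambda
    return t[1][0] != 'lam' and is_normal(t[1]) and is_normal(t[2])

id_   = ('lam', 'x', ('var', 'x'))                          # lam x. x
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))   # lam x. (x)x
omega = ('app', delta, delta)                                # (lam x.(x)x) lam x.(x)x

print(is_normal(id_), is_normal(delta), is_normal(omega))   # True True False
```

The last example is the term discussed above: it is its own one-step reduct, hence "minimal" with respect to β, yet not normal.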
A term t is said to be normalizable if t β t′ for some normal term t′. A term t is said to be strongly normalizable if there is no infinite sequence t = t0, t1, . . . , tn, . . . such that ti β0 ti+1 for all i ≥ 0 (such a term is then obviously normalizable). For instance, λx x is a normal term; (λx(x)x)λx x is strongly normalizable; ω = (λx(x)x)λx(x)x is not normalizable at all; and (λx y)ω is normalizable but not strongly normalizable.

For normalizable terms, the problem of the uniqueness of the normal form arises. It is solved by the following theorem:

Theorem 1.24 (Church-Rosser). The β-conversion satisfies the Church-Rosser property.

This yields the uniqueness of the normal form: if t β t1, t β t2, with t1, t2 normal, then, according to the theorem, there exists a term t3 such that t1 β t3 and t2 β t3; thus t1 = t3 = t2. In order to prove that β satisfies the Church-Rosser property, it is sufficient to exhibit a binary relation ρ on Λ which satisfies the Church-Rosser property and has the β-conversion as its transitive closure. One could think of taking ρ to be the "reflexive closure" of β0, defined by x ρ y ⇔ x = y or x β0 y. But this relation ρ does not satisfy the Church-Rosser property: for example, if t = (λx(x)x)r, where r is a redex with contractum r′, u = (r)r and v = (λx(x)x)r′, then t β0 u and t β0 v, while there is no term w such that u β0 w and v β0 w. A suitable definition of ρ is as the least λ-compatible binary relation on Λ such that t ρ t′, u ρ u′ ⇒ (λx u)t ρ u′[t′/x]. To prove that β ⊃ ρ, it is enough to see that t β t′, u β u′ ⇒ (λx u)t β u′[t′/x]; now (λx u)t β (λx u′)t′ (since β is λ-compatible) and (λx u′)t′ β u′[t′/x]; the expected result then follows, by transitivity. Therefore, β contains the transitive closure ρ′ of ρ. But of course ρ ⊃ β0, so ρ′ ⊃ β. Hence β is the transitive closure of ρ. It thus remains to prove that ρ satisfies the Church-Rosser property.
By definition, ρ is the set of all pairs of terms obtained by applying, a finite number of times, the following rules:
1. x ρ x for each variable x;
2. t ρ t′ ⇒ λx t ρ λx t′;
3. t ρ t′ and u ρ u′ ⇒ (t)u ρ (t′)u′;
4. t ρ t′ and u ρ u′ ⇒ (λx t)u ρ t′[u′/x].
Lemma 1.25. i) If x ρ t′, where x is a variable, then t′ = x.
ii) If λx u ρ t′, then t′ = λx u′, with u ρ u′.
iii) If (u)v ρ t′, then either t′ = (u′)v′ with u ρ u′ and v ρ v′, or u = λx w and t′ = w′[v′/x] with v ρ v′ and w ρ w′.

i) x ρ t′ could only be obtained by applying rule 1, hence t′ = x.
ii) Consider the last rule applied to obtain λx u ρ t′; the form of the term on the left shows that it is necessarily rule 2; the result then follows.
iii) Same method: the last rule applied to obtain (u)v ρ t′ is 3 or 4; this yields the conclusion. Q.E.D.
Lemma 1.26. Whenever t ρ t′ and u ρ u′, then t[u/x] ρ t′[u′/x].

The proof proceeds by induction on the length of the derivation of t ρ t′ by means of rules 1, 2, 3, 4; consider the last rule used:
If it is rule 1, then t = t′ is a variable, and the result is trivial.
If it is rule 2, then t = λy v, t′ = λy v′ and v ρ v′. By proposition 1.16, we may assume that y is different from x and is not free in u, u′. Since u ρ u′, the induction hypothesis implies v[u/x] ρ v′[u′/x]; hence λy v[u/x] ρ λy v′[u′/x] (rule 2), that is to say t[u/x] ρ t′[u′/x].
If it is rule 3, then t = (v)w and t′ = (v′)w′, with v ρ v′ and w ρ w′. Thus, by induction hypothesis, v[u/x] ρ v′[u′/x] and w[u/x] ρ w′[u′/x]. Therefore, by applying rule 3, we obtain (v[u/x])w[u/x] ρ (v′[u′/x])w′[u′/x], that is t[u/x] ρ t′[u′/x].
If it is rule 4, then t = (λy v)w and t′ = v′[w′/y], with v ρ v′ and w ρ w′. We assume that y is not free in u, u′, and is different from x. By induction hypothesis, we have v[u/x] ρ v′[u′/x] and w[u/x] ρ w′[u′/x]. By rule 4, we get:
(∗) (λy v[u/x])w[u/x] ρ v′[u′/x][w′[u′/x]/y].
Now λy v[u/x] = (λy v)[u/x], by hypothesis on y. It follows that t[u/x] = (λy v[u/x])w[u/x]. On the other hand, we have t′[u′/x] = v′[w′/y][u′/x] = v′[w′[u′/x]/y, u′/x] (by lemma 1.13) = v′[u′/x][w′[u′/x]/y] (again by lemma 1.13, since the variable y is not free in u′). Then, (∗) gives the wanted result: t[u/x] ρ t′[u′/x]. Q.E.D.
Now the proof of the Church-Rosser property for ρ can be completed. So we assume that t0 ρ t1, t0 ρ t2, and we look for a term t3 such that t1 ρ t3, t2 ρ t3. The proof is by induction on the length of t0. If t0 is a variable, then by lemma 1.25(i), t0 = t1 = t2; take t3 = t0.
If t0 = λx u0, then, since t0 ρ t1, t0 ρ t2, by lemma 1.25(ii), we have: t1 = λx u1, t2 = λx u2 and u0 ρ u1, u0 ρ u2. By induction hypothesis, u1 ρ u3 and u2 ρ u3 hold for some term u3. Hence it is sufficient to take t3 = λx u3.
If t0 = (u0)v0, then, since t0 ρ t1, t0 ρ t2, by lemma 1.25(iii), the different possible cases are:
a) t1 = (u1)v1, t2 = (u2)v2 with u0 ρ u1, v0 ρ v1, u0 ρ u2, v0 ρ v2. By induction hypothesis, u1 ρ u3, u2 ρ u3, v1 ρ v3, v2 ρ v3 hold for some u3 and v3. Hence it is sufficient to take t3 = (u3)v3.
b) t1 = (u1)v1, with u0 ρ u1, v0 ρ v1; u0 = λx w0; t2 = w2[v2/x], with v0 ρ v2, w0 ρ w2. Since u0 ρ u1, by lemma 1.25(ii), we have u1 = λx w1, for some w1 such that w0 ρ w1. Thus t1 = (λx w1)v1. Since v0 ρ v1, v0 ρ v2, and w0 ρ w1, w0 ρ w2, the induction hypothesis gives: v1 ρ v3, v2 ρ v3, and w1 ρ w3, w2 ρ w3 for some v3 and w3. Hence, by rule 4, we get (λx w1)v1 ρ w3[v3/x], that is t1 ρ w3[v3/x]. Now, by lemma 1.26, we get w2[v2/x] ρ w3[v3/x]. Therefore we obtain the expected result by taking t3 = w3[v3/x].
c) u0 = λx w0, t1 = w1[v1/x], t2 = w2[v2/x], and we have: v0 ρ v1, v0 ρ v2, w0 ρ w1, w0 ρ w2. By induction hypothesis, v1 ρ v3, v2 ρ v3, w1 ρ w3, w2 ρ w3 hold for some v3 and w3. Hence, by lemma 1.26, w1[v1/x] ρ w3[v3/x], w2[v2/x] ρ w3[v3/x], that is to say t1 ρ w3[v3/x], t2 ρ w3[v3/x]. The result follows by taking t3 = w3[v3/x]. Q.E.D.
Remark. The intuitive meaning of the relation ρ is the following: t ρ t′ holds if and only if t′ is obtained from t by contracting several redexes occurring in t. For example, (λx(x)x)λx x ρ (λx x)λx x; a new redex has been created, but it cannot be contracted: (λx(x)x)λx x ρ λx x does not hold. In other words, t ρ t′ means that t and t′ are constructed simultaneously: for t, the steps of the construction are those described in the definition of terms, while for t′ the same rules are applied, except that the following alternative is allowed: whenever t = (λx u)v, t′ can be taken either as (λx u′)v′ or as u′[v′/x]. This is what lemma 1.25 expresses.
β-equivalence

The β-equivalence (denoted by ≃β) is defined as the least equivalence relation which contains β0 (or β, which comes to the same thing). In other words: t ≃β t′ ⇔ there exists a sequence (t = t1), t2, . . . , tn−1, (tn = t′) such that ti β0 ti+1 or ti+1 β0 ti for 1 ≤ i < n. t ≃β t′ should be read as: t is β-equivalent to t′.
Proposition 1.27. t ≃β t′ if and only if there exists a term u such that t β u and t′ β u.

The condition is obviously sufficient. For the purpose of proving that it is necessary, consider the relation ≃ defined by: t ≃ t′ ⇔ t β u and t′ β u for some term u. This relation contains β, and is reflexive and symmetric. It is also transitive, for if t ≃ t′, t′ ≃ t″, then t β u, t′ β u, and t′ β v, t″ β v for suitable u and v. By theorem 1.24 (the Church-Rosser theorem), u β w and v β w hold for some term w; thus t β w, t″ β w. Hence ≃ is an equivalence relation which contains β, so it also contains ≃β. Q.E.D.
Therefore, a non-normalizable term cannot be β-equivalent to a normal term.
4. Eta-conversion

Proposition 1.28. If λx(t)x = λx′(t′)x′ and x is not free in t, then t = t′.

By proposition 1.16, we get (t′)x′ = ((t)x)[x′/x], which is (t)x′ since x is not free in t. Therefore t = t′. Q.E.D.
A term of the form λx(t)x, where x is not free in t, is called an η-redex, its contractum being t. A term of either of the forms (λx t)u, λy(v)y (where y is not free in v) will be called a βη-redex. We now define a binary relation η0 on Λ; t η0 t′ should be read as "t′ is obtained by contracting an η-redex (or by an η-reduction) in the term t". The definition is given by induction on t, as for β0:
if t is a variable, then there is no t′ such that t η0 t′;
if t = λx u, then t η0 t′ if and only if either t′ = λx u′, with u η0 u′, or u = (t′)x, with x not free in t′;
if t = (u)v, then t η0 t′ if and only if either t′ = (u′)v with u η0 u′, or t′ = (u)v′ with v η0 v′.
The relation t βη0 t′ (which means: "t′ is obtained from t by contracting a βη-redex") is defined as: t β0 t′ or t η0 t′. The η-conversion (resp. the βη-conversion) is defined as the least binary relation η (resp. βη) on Λ which is reflexive, transitive, and contains η0 (resp. βη0).

Proposition 1.29. The βη-conversion is the least transitive λ-compatible binary relation βη such that (λx t)u βη t[u/x] and λy(v)y βη v whenever y is not free in v.
The proof is similar to that of proposition 1.21 (which is the analogue for β). Q.E.D.
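The side condition "x not free in t" is exactly what a mechanical check of η-redexes must verify. A small Python sketch of ours (the tuple encoding ('var', x), ('lam', x, body), ('app', u, v) and all names are our own illustration, not the book's):

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', u, v)
def free_vars(t):
    """The set of free variables of a term."""
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def eta_contract(t):
    """If t = lam x.(u)x with x not free in u, return the contractum u; else None."""
    if t[0] == 'lam' and t[2][0] == 'app':
        x, (_, u, v) = t[1], t[2]
        if v == ('var', x) and x not in free_vars(u):
            return u
    return None

# lam x.(f)x eta-contracts to f; lam x.(x)x is not an eta-redex
print(eta_contract(('lam', 'x', ('app', ('var', 'f'), ('var', 'x')))))  # ('var', 'f')
print(eta_contract(('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))))  # None
```

The second example fails precisely because x occurs free in the function part, illustrating why the freeness condition cannot be dropped.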
It can be proved, as for β, that βη is the transitive closure of the binary relation ρ defined on Λ by: u ρ u′ ⇔ there exist a term v and βη-redexes t1, . . . , tk with contractums t1′, . . . , tk′ such that u = v[t1/x1, . . . , tk/xk], u′ = v[t1′/x1, . . . , tk′/xk]. Similarly: if t βη t′, then every free variable in t′ is also free in t.

Proposition 1.30. If t βη0 t′ then t[t1/x1, . . . , tk/xk] βη0 t′[t1/x1, . . . , tk/xk].

The proof is by induction on the length of t. For the sake of brevity, we use the notation t̂ for t[t1/x1, . . . , tk/xk]. It follows from the definition of βη0 that the different possibilities for t, t′ are:
i) t = λx u, t′ = λx u′, and u βη0 u′.
ii) t = (u)v and t′ = (u′)v, with u βη0 u′.
iii) t = (u)v and t′ = (u)v′, with v βη0 v′.
iv) t = (λx u)v and t′ = u[v/x].
v) t = λx(t′)x, with x not free in t′.
Cases i) to iv) are settled exactly as in proposition 1.20. In case v), assume that x is not free in t1, . . . , tk and different from x1, . . . , xk. Then t̂ = λx(t̂′)x, and therefore t̂ βη0 t̂′. Q.E.D.
Proposition 1.31. If t βη t′, t1 βη t1′, . . . , tk βη tk′ then t[t1/x1, . . . , tk/xk] βη t′[t1′/x1, . . . , tk′/xk].

Since βη is λ-compatible, we have t[t1/x1, . . . , tk/xk] βη t[t1′/x1, . . . , tk′/xk], by lemma 1.17. Then, we get t[t1′/x1, . . . , tk′/xk] βη t′[t1′/x1, . . . , tk′/xk] by proposition 1.30. Q.E.D.
A term t is said to be βη-normal if it contains no βη-redex. So the βη-normal terms are those obtained by applying, a finite number of times, the following rules: any variable x is a βη-normal term; whenever t is βη-normal, then so is λx t, except if t = (t′)x, with x not free in t′; whenever t, u are βη-normal, then so is (t)u, except if the first symbol in t is λ.

Theorem 1.32. The βη-conversion satisfies the Church-Rosser property.

The proof is on the same lines as for the β-conversion. Here ρ is defined as the least λ-compatible binary relation on Λ such that:
t ρ t′, u ρ u′ ⇒ (λx t)u ρ t′[u′/x];
t ρ t′ ⇒ λx(t)x ρ t′ whenever x is not free in t.
The first thing to be proved is: βη ⊃ ρ. For that purpose, note that t βη t′, u βη u′ ⇒ (λx t)u βη t′[u′/x]; indeed, since βη is λ-compatible, we have (λx t)u βη (λx t′)u′ and, on the other hand, (λx t′)u′ βη t′[u′/x]; the result then follows, by transitivity. Now we show that t βη t′ ⇒ λx(t)x βη t′ if x is not free in t; this is immediate, by transitivity, since λx(t)x βη t. Therefore βη is the transitive closure of ρ. It thus remains to prove that ρ satisfies the Church-Rosser property.
By definition, ρ is the set of all pairs of terms obtained by applying, a finite number of times, the following rules:
1. x ρ x for each variable x;
2. t ρ t′ ⇒ λx t ρ λx t′;
3. t ρ t′ and u ρ u′ ⇒ (t)u ρ (t′)u′;
4. t ρ t′ and u ρ u′ ⇒ (λx t)u ρ t′[u′/x];
5. t ρ t′ ⇒ λx(t)x ρ t′ whenever x is not free in t.
The following lemmas are the analogues of lemmas 1.25 and 1.26.

Lemma 1.33. i) If x ρ t′, where x is a variable, then t′ = x.
ii) If λx u ρ t′, then either t′ = λx u′ and u ρ u′, or u = (t)x and t ρ t′, with x not free in t.
iii) If (u)v ρ t′, then either t′ = (u′)v′ with u ρ u′ and v ρ v′, or u = λx w and t′ = w′[v′/x] with v ρ v′ and w ρ w′.

Same proof as for lemma 1.25. Q.E.D.
Lemma 1.34. Whenever t ρ t′ and u ρ u′, then t[u/x] ρ t′[u′/x].

The proof proceeds by induction on the length of the derivation of t ρ t′ by means of rules 1 through 5; consider the last rule used: if it is one of rules 1, 2, 3, 4, then the proof is the same as in lemma 1.26; if it is rule 5, then t = λy(v)y and v ρ t′, with y not free in v. We may assume that y is not free in u and different from x. By induction hypothesis, v[u/x] ρ t′[u′/x]; then, by applying rule 5, we obtain λy(v[u/x])y ρ t′[u′/x] (since y is not free in v[u/x]), that is t[u/x] ρ t′[u′/x]. Q.E.D.
Now the proof of the Church-Rosser property for ρ can be completed. So we assume that t0 ρ t1, t0 ρ t2, and we look for a term t3 such that t1 ρ t3, t2 ρ t3. The proof is by induction on the length of t0.
If t0 has length 1, then it is a variable; hence, by lemma 1.33, t0 = t1 = t2; take t3 = t0.
If t0 = λx u0, then, since t0 ρ t1, t0 ρ t2, by lemma 1.33, the different possible cases are:
a) t1 = λx u1, t2 = λx u2, and u0 ρ u1, u0 ρ u2. By induction hypothesis, u1 ρ u3 and u2 ρ u3 hold for some term u3. Then it is sufficient to take t3 = λx u3.
b) t1 = λx u1, and u0 ρ u1; u0 = (t0′)x, with x not free in t0′, and t0′ ρ t2. According to lemma 1.33, since u0 ρ u1 and u0 = (t0′)x, there are two possibilities for u1:
i) u1 = (t1′)x, with t0′ ρ t1′. Now t0′ ρ t2, thus, by induction hypothesis, t1′ ρ t3 and t2 ρ t3 hold for some term t3. Note that, since t0′ ρ t1′, all free variables in t1′ are also free in t0′, so x is not free in t1′. Hence, by rule 5, λx(t1′)x ρ t3, that is t1 ρ t3.
ii) t0′ = λy u0′, u1 = u1′[x/y] and u0′ ρ u1′. By proposition 1.16, we may choose for y any variable which is not free in t0′, for example x. Then u1 = u1′ and u0′ ρ u1. Since ρ is λ-compatible, λx u0′ ρ λx u1, that is t0′ ρ t1. Since t0′ ρ t2, there exists, by induction hypothesis, a term t3 such that t1 ρ t3, t2 ρ t3.
c) u0 = (t0′)x, with x not free in t0′, and t0′ ρ t1, t0′ ρ t2. The conclusion follows immediately from the induction hypothesis, since t0′ is shorter than t0.
If t0 = (v0)u0, then, since t0 ρ t1, t0 ρ t2, by lemma 1.33, the different possible cases are:
a) t1 = (v1)u1, t2 = (v2)u2 with u0 ρ u1, v0 ρ v1, u0 ρ u2, v0 ρ v2. By induction hypothesis, u1 ρ u3, u2 ρ u3, v1 ρ v3, v2 ρ v3 hold for some u3 and v3. Then it is sufficient to take t3 = (v3)u3.
b) t1 = (v1)u1, with u0 ρ u1, v0 ρ v1; v0 = λx w0, t2 = w2[u2/x], with u0 ρ u2, w0 ρ w2. Since v0 ρ v1 and v0 = λx w0, by lemma 1.33, the different possible cases are:
i) v1 = λx w1, with w0 ρ w1. Then t1 = (λx w1)u1. Since u0 ρ u1, u0 ρ u2, and w0 ρ w1, w0 ρ w2, by induction hypothesis, u1 ρ u3, u2 ρ u3, and w1 ρ w3, w2 ρ w3 hold for some u3, w3. Thus, by rule 4, (λx w1)u1 ρ w3[u3/x], that is t1 ρ w3[u3/x]. Hence, by lemma 1.34, w2[u2/x] ρ w3[u3/x]. The expected result is then obtained by taking t3 = w3[u3/x].
ii) w0 = (v0′)x, with x not free in v0′, and v0′ ρ v1. Then (v0′)x ρ w2; since u0 ρ u2, it follows from lemma 1.34 that ((v0′)x)[u0/x] ρ w2[u2/x]. But x is not free in v0′, so this is equivalent to (v0′)u0 ρ t2. Now v0′ ρ v1 and u0 ρ u1. Thus (v0′)u0 ρ (v1)u1, in other words (v0′)u0 ρ t1. Since (v0′)u0 is shorter than t0 (because v0 = λx(v0′)x), there exists, by induction hypothesis, a term t3 such that t1 ρ t3, t2 ρ t3.
c) v0 = λx w0, t1 = w1[u1/x], t2 = w2[u2/x], with u0 ρ u1, u0 ρ u2, w0 ρ w1 and w0 ρ w2. By induction hypothesis, u1 ρ u3, u2 ρ u3, w1 ρ w3, w2 ρ w3 hold
for some u3 and w3. Thus, by lemma 1.34, we have w1[u1/x] ρ w3[u3/x], w2[u2/x] ρ w3[u3/x], that is to say t1 ρ w3[u3/x], t2 ρ w3[u3/x]. The result follows by taking t3 = w3[u3/x]. Q.E.D.
The βη-equivalence (denoted by ≃βη) is defined as the least equivalence relation which contains βη. In other words: t ≃βη t′ ⇔ there exists a sequence t = t1, t2, . . . , tn−1, tn = t′ such that either ti βη ti+1 or ti+1 βη ti, for 1 ≤ i < n. As for the β-equivalence, it follows from the Church-Rosser theorem that:

Proposition 1.35. t ≃βη t′ ⇔ t βη u and t′ βη u for some term u.

The relation ≃βη satisfies the "extensionality axiom", that is to say: if (t)u ≃βη (t′)u holds for all u, then t ≃βη t′. Indeed, it is enough to take u as a variable x which does not occur in t, t′. Since ≃βη is λ-compatible, we have λx(t)x ≃βη λx(t′)x; therefore, by η-reduction, t ≃βη t′.
References for chapter 1: [Bar84], [Chu41], [Hin86]. (The references are in the bibliography at the end of the book.)
Chapter 2. Representation of recursive functions

1. Head normal forms

In every λ-term, each occurrence of the symbol sequence "(λ" corresponds to a unique redex (this is obvious, since redexes are the terms of the form (λx t)u). This allows us to define, in any non-normal term t, the leftmost redex in t. Let t′ be the term obtained from t by contracting that leftmost redex: we say that t′ is obtained from t by a leftmost β-reduction.
Let t be an arbitrary λ-term. With t we associate a (finite or infinite) sequence of terms t0, t1, . . . , tn, . . . such that t0 = t, and tn+1 is obtained from tn by a leftmost β-reduction (if tn is normal, then the sequence ends with tn). We call it "the sequence obtained from t by leftmost β-reduction"; it is uniquely determined by t. The following theorem will be proved in chapter 4 (theorem 4.13):

Theorem 2.1. If t is a normalizable term, then the sequence obtained from t by leftmost β-reduction terminates with the normal form of t.

We see that this theorem provides a "normalizing strategy", which can be used for any normalizable term. The next proposition is simply a remark about the form of λ-terms:

Proposition 2.2. Every term of the λ-calculus can be written, in a unique way, in the form λx1 . . . λxm(v)t1 . . . tn, where x1, . . . , xm are variables, v is either a variable or a redex (v = (λx t)u), and t1, . . . , tn are terms (m, n ≥ 0).

Recall that (v)t1 . . . tn denotes the term (. . . ((v)t1) . . .)tn.
We prove the proposition by induction on the length of the considered term τ: the result is clear if τ is a variable. If τ = λx τ′, then τ′ is determined by τ, and can be written in a unique way in the indicated form, by induction hypothesis; thus the same holds for τ. If τ = (w)v, then v and w are determined by τ. If w starts with λ, then τ is a redex, so it is of the second form, and not of the first one. If w does not start with λ, then, by induction hypothesis, w = (w0)t1 . . . tn, where w0 is a variable or a redex; thus τ = (w0)t1 . . . tn v, which is in one and only one of the indicated forms. Q.E.D.
Definitions. A term τ is a head normal form (or in head normal form) if it is of the first form indicated in proposition 2.2, namely if τ = λx1 . . . λxm(x)t1 . . . tn, where x is a variable. In the second case, if τ = λx1 . . . λxm(λx u)t t1 . . . tn, then the redex (λx u)t is called the head redex of τ. The head redex of a term τ, when it exists (namely, when τ is not a head normal form), is clearly the leftmost redex in τ.
It follows from proposition 2.2 that a term is normal if and only if it is a head normal form τ = λx1 . . . λxm(x)t1 . . . tn where t1, . . . , tn are normal terms. In other words, a term is normal if and only if it is "hereditarily in head normal form".
The head reduction of a term τ is defined as the (finite or infinite) sequence of terms τ0, τ1, . . . , τn, . . . such that τ0 = τ, and τn+1 is obtained from τn by a β-reduction of the head redex of τn, if such a redex exists; if not, τn is in head normal form, and the sequence ends with τn.
The weak head reduction of a term τ is the initial part of its head reduction which stops as soon as we get a λ-term which begins with a λ. In other words, we reduce the head redex only if there is no λ in front of it.
Notation. We will write t ≻ u (resp. t ≻w u) whenever u is obtained from t by a sequence of head β-reductions (resp. weak head β-reductions). For example, we have (λx x)λx(λy y)z ≻w λx(λy y)z ≻ λx z.
A λ-term t is said to be solvable if, for any term u, there exist variables x1, . . . , xk and terms u1, . . . , uk, v1, . . . , vl (k, l ≥ 0) such that:
i) (t[u1/x1, . . . , uk/xk])v1 . . . vl ≃β u.
We have the following equivalent definitions:
(ii) t is solvable if and only if there exist variables x1, . . . , xk and terms u1, . . . , uk, v1, . . . , vl such that (t[u1/x1, . . . , uk/xk])v1 . . . vl ≃β I (where I is the term λx x).
(iii) t is solvable if and only if, given any variable x which does not occur in t, there exist terms u1, . . . , uk, v1, . . . , vl such that:
(t[u1/x1, . . . , uk/xk])v1 . . . vl ≃β x.
Obviously, (i) ⇒ (ii) ⇒ (iii). Now if (t[u1/x1, . . . , uk/xk])v1 . . . vl ≃β x, then (t[u1/x1, . . . , uk/xk][u/x])v1′ . . . vl′ ≃β u, and therefore (t[u1′/x1, . . . , uk′/xk])v1′ . . . vl′ ≃β u, where ui′ = ui[u/x] and vj′ = vj[u/x]; so we also have (iii) ⇒ (i).
Remarks. The following properties are immediate:
1. Let t be a closed term. Then t is solvable if and only if there exist terms v1, . . . , vl such that (t)v1 . . . vl ≃β I.
2. A term t is solvable if and only if its closure t̄ is solvable (the closure of t is, by definition, the term t̄ = λx1 . . . λxn t, where x1, . . . , xn are the free variables occurring in t).
3. If (t)v is a solvable term, then t is solvable.
4. Of course, the head normal form of a term need not be unique. Nevertheless: if a term t has a head normal form t0 = λx1 . . . λxk(x)u1 . . . un, then any head normal form of t can be written λx1 . . . λxk(x)u1′ . . . un′, with ui ≃β ui′. Indeed, let t1 = λy1 . . . λyl(y)v1 . . . vp be another head normal form of t. By the Church-Rosser theorem 1.24, there exists a term t2 which can be obtained by β-reduction from t0 as well as from t1. Now, in t0 (resp. t1), all possible β-reductions have to be made in u1, . . . , un (resp. v1, . . . , vp). Hence: t2 ≡ λx1 . . . λxk(x)u1′ . . . un′ ≡ λy1 . . . λyl(y)v1′ . . . vp′ with ui β ui′, vj β vj′. This yields the expected result.
The following theorem will be proved in chapter 4 (theorem 4.9):

Theorem 2.3. For every λ-term t, the following conditions are equivalent:
i) t is solvable;
ii) t is β-equivalent to a head normal form;
iii) the head reduction of t terminates (with a head normal form).
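Head reduction can be run mechanically. The following Python sketch is our own illustration (not the book's): terms are tuples ('var', x), ('lam', x, body), ('app', u, v); substitution is naive, assuming bound variables avoid capture; head_step contracts the head redex, and with weak=True it performs weak head reduction instead.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', u, v)
def subst(t, x, s):
    """Naive t[s/x]; assumes bound variables avoid the free variables of s."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def head_step(t, weak=False):
    """Contract the head redex of t, or return None if t is in head normal
    form; with weak=True, also stop as soon as the term begins with a lambda."""
    if t[0] == 'lam':
        if weak:
            return None
        body = head_step(t[2])
        return None if body is None else ('lam', t[1], body)
    if t[0] == 'app':
        u, v = t[1], t[2]
        if u[0] == 'lam':                    # head redex (lam x. w)v -> w[v/x]
            return subst(u[2], u[1], v)
        r = head_step(u, weak)
        return None if r is None else ('app', r, v)
    return None                               # a variable is in head normal form

def head_reduce(t, weak=False):
    """Follow the (weak) head reduction of t as far as it goes."""
    while True:
        r = head_step(t, weak)
        if r is None:
            return t
        t = r

# The text's example: (lam x. x) lam x.(lam y. y)z
ex = ('app', ('lam', 'x', ('var', 'x')),
      ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'z'))))
print(head_reduce(ex, weak=True) == ex[2])  # True: weak head reduction stops at the lambda
print(head_reduce(ex))                      # ('lam', 'x', ('var', 'z'))
```

Note that head_reduce loops forever on a term whose head reduction is infinite (e.g. ω), in line with theorem 2.3: it terminates exactly on the solvable terms.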
2. Representable functions

We define the Booleans: 0 = λxλy y and 1 = λxλy x. Then, for all terms t, u, ((0)t)u can be reduced (by head reduction) to u, while ((1)t)u can be reduced to t. Given two terms t, u and an integer k, let (t)^k u denote the term (t) . . . (t)u (with k occurrences of t); in particular, (t)^0 u = u. Beware: the expression (t)^k alone is not a λ-term. We define the term k̄ = λf λx(f)^k x; k̄ is called "the numeral (or integer) k of the λ-calculus" (also known as the Church numeral k, or Church integer k). Notice that the Boolean 0 is the same term as the numeral 0̄, while the Boolean 1 is different from the numeral 1̄.
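These encodings can be experimented with directly in any language with first-class functions. A hedged Python sketch (the names church and to_int are ours, not the book's):

```python
# Booleans: 1 = lam x lam y. x selects its first argument, 0 = lam x lam y. y its second
true  = lambda x: lambda y: x
false = lambda x: lambda y: y

def church(k):
    """The Church numeral k = lam f lam x. f applied k times to x."""
    def n(f):
        def it(x):
            for _ in range(k):
                x = f(x)
            return x
        return it
    return n

def to_int(n):
    """Decode a numeral by applying it to the ordinary successor and 0."""
    return n(lambda m: m + 1)(0)

print(true('t')('u'), false('t')('u'))  # t u
print(to_int(church(3)))                # 3
# The Boolean 0 and the numeral 0 exhibit the same behaviour:
print(church(0)(lambda m: m + 1)(5), false(lambda m: m + 1)(5))  # 5 5
```

The last line mirrors the remark in the text: the Boolean 0 and the numeral 0 coincide, both discarding their first argument.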
Let ϕ be a partial function defined on N^n, with values either in N or in {0, 1}. Given a λ-term Φ, we say that Φ represents (resp. strongly represents) the function ϕ if, for all k1, . . . , kn ∈ N:
if ϕ(k1, . . . , kn) is undefined, then (Φ)k̄1 . . . k̄n is not normalizable (resp. not solvable);
if ϕ(k1, . . . , kn) = k, then (Φ)k̄1 . . . k̄n is β-equivalent to the numeral k̄ (or to the Boolean k, in case the range of ϕ is {0, 1}).
Clearly, for total functions, these two notions of representation are equivalent.

Theorem 2.4. Every partial recursive function from N^k to N is (strongly) representable by a term of the λ-calculus.

Recall the definition of the class of partial recursive functions. Given partial functions f1, . . . , fk from N^n to N, and a partial function g from N^k to N, the partial function h from N^n to N obtained by composition is defined as follows: h(p1, . . . , pn) = g(f1(p1, . . . , pn), . . . , fk(p1, . . . , pn)) if f1(p1, . . . , pn), . . . , fk(p1, . . . , pn) are all defined, and h(p1, . . . , pn) is undefined otherwise.
Let h be a partial function from N to N. If there exists an integer p such that h(p) = 0 and h(q) is defined and different from 0 for all q < p, then we denote that integer p by µn{h(n) = 0}; otherwise µn{h(n) = 0} is undefined. We call minimization the operation which associates, with each partial function f from N^{k+1} to N, the partial function g from N^k to N such that: g(n1, . . . , nk) = µn{f(n1, . . . , nk, n) = 0}.
The class of partial recursive functions is the least class of partial functions, with arguments and values in N, closed under composition and minimization, and containing: the one-argument constant function 0 and the successor function; the two-argument addition, multiplication, and characteristic function of the binary relation x ≤ y; and the projections P^n_k, defined by P^n_k(x1, . . . , xn) = xk.
So it is sufficient to prove that the class of partial functions which are strongly representable by a term of the λ-calculus satisfies these closure properties and contains these basic functions.
The constant function 0 is represented by the term λd 0̄. The successor function on N is represented by the term: suc = λnλf λx((n)f)(f)x. The addition and the multiplication (functions from N^2 to N) are respectively represented by the terms λmλnλf λx((m)f)((n)f)x and λmλnλf (m)(n)f. The characteristic function of the binary relation m ≤ n on N is represented by the term M = λmλn(((m)A)λd 1)((n)A)λd 0, where A = λf λg(g)f. The function P^n_k is represented by the term λx1 . . . λxn xk.
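The representing terms for the total basic functions can be transcribed literally as Python lambdas and checked on decoded numerals (a sketch of ours; church, to_int, to_bool are our helper names, not the book's):

```python
# Church booleans and numerals (as in the text: 1 = lam x lam y. x, 0 = lam x lam y. y)
true  = lambda x: lambda y: x
false = lambda x: lambda y: y

def church(k):
    """The Church numeral k = lam f lam x. f^k(x), as a Python function."""
    def n(f):
        def it(x):
            for _ in range(k):
                x = f(x)
            return x
        return it
    return n

to_int  = lambda n: n(lambda m: m + 1)(0)   # decode a numeral
to_bool = lambda b: b(True)(False)          # decode a Boolean

# Transcriptions of the representing terms given in the text
suc = lambda n: lambda f: lambda x: n(f)(f(x))               # lam n lam f lam x ((n)f)(f)x
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # lam m lam n lam f lam x ((m)f)((n)f)x
mul = lambda m: lambda n: lambda f: m(n(f))                  # lam m lam n lam f (m)(n)f
A   = lambda f: lambda g: g(f)                               # A = lam f lam g (g)f
leq = lambda m: lambda n: m(A)(lambda d: true)(n(A)(lambda d: false))

print(to_int(suc(church(4))))              # 5
print(to_int(add(church(2))(church(3))))   # 5
print(to_int(mul(church(2))(church(3))))   # 6
print(to_bool(leq(church(2))(church(5))))  # True
print(to_bool(leq(church(5))(church(2))))  # False
```

The leq test illustrates how the term M works: iterating A on the two constant functions makes the shorter iteration "run out" first, and whichever constant survives decides the answer.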
From now on, we denote the term (suc)^n 0̄ by n̂; so we have n̂ ≃β n̄, and (suc)n̂ is precisely the term so associated with n + 1.
Representation of composite functions

Given any two λ-terms t, u, and a variable x with no free occurrence in t, u, the term λx(t)(u)x is denoted by t∘u.

Lemma 2.5. (λg g∘s)^k h ≻ λx(h)(s)^k x for all closed terms s, h and every integer k ≥ 1.

Recall that t ≻ u means that u is obtained from t by a sequence of head β-reductions. We prove the lemma by induction on k. The case k = 1 is clear. Assume the result for k; then (λg g∘s)^{k+1} h = (λg g∘s)^k (λg g∘s)h ≻ λx((λg g∘s)h)(s)^k x (by induction hypothesis, applied with (λg g∘s)h instead of h) ≻ λx(h∘s)(s)^k x ≡ λx(λy(h)(s)y)(s)^k x ≻ λx(h)(s)^{k+1} x. Q.E.D.
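Read with ordinary functions in place of closed λ-terms, lemma 2.5 says that iterating g ↦ g∘s k times on h yields a function behaving as λx(h)(s)^k x. A quick Python check of this reading (the sample functions s, h and the helper names are our own illustration):

```python
def compose(t, u):
    """t o u = lam x.(t)(u)x, read as ordinary function composition."""
    return lambda x: t(u(x))

def iterate(op, k, h):
    """Compute op applied k times to h, the analogue of (lam g. g o s)^k h
    when op(g) = g o s."""
    for _ in range(k):
        h = op(h)
    return h

s = lambda x: x + 1      # playing the role of the closed term s (successor)
h = lambda x: 2 * x      # playing the role of h (doubling)

g = iterate(lambda f: compose(f, s), 3, h)   # should behave as h o s o s o s
print(g(10))             # 2 * (10 + 3) = 26
```

This is exactly the shape exploited below: applying a numeral to λg g∘suc piles n copies of suc under the head function.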
Lemma 2.6. Let Φ, ν be two terms, and define [Φ, ν] = (((ν)λg g∘suc)Φ)0̄. Then:
if ν is not solvable, then neither is [Φ, ν];
if ν ≃β n̄ (a Church numeral), then [Φ, ν] ≃β (Φ)n̄; and if Φ is not solvable, then neither is [Φ, ν].

The first statement follows from remark 3 above. If ν ≃β n̄, then: (ν)λg g∘suc ≃β (n̄)λg g∘suc = (λf λh(f)^n h)λg g∘suc ≃β λh(λg g∘suc)^n h. By lemma 2.5, this term gives, by head reduction, λhλx(h)(suc)^n x. Hence [Φ, ν] ≃β (Φ)(suc)^n 0̄ ≃β (Φ)n̄. Therefore, if Φ is not solvable, then neither is [Φ, ν] (remark 3 above). Q.E.D.
The term [Φ, ν₁, . . . , νₖ] is defined, for k ≥ 2, by induction on k :
[Φ, ν₁, . . . , νₖ] = [[Φ, ν₁, . . . , νₖ₋₁], νₖ].
Lemma 2.7. Let Φ, ν₁, . . . , νₖ be terms such that each νᵢ is either β-equivalent to a Church numeral or not solvable. Then :
if one of the νᵢ's is not solvable, then neither is [Φ, ν₁, . . . , νₖ] ;
if νᵢ ≃β nᵢ (1 ≤ i ≤ k), then [Φ, ν₁, . . . , νₖ] ≃β (Φ)n₁ . . . nₖ.
The proof is by induction on k : let Ψ = [Φ, ν₁, . . . , νₖ₋₁] ; then [Φ, ν₁, . . . , νₖ] = [Ψ, νₖ]. If νₖ is not solvable, then, by lemma 2.6, neither is [Ψ, νₖ]. If νₖ is solvable (and β-equivalent to a Church numeral), and if one of the νᵢ's (1 ≤ i ≤ k − 1) is not
solvable, then Ψ is not solvable (induction hypothesis), and hence neither is [Ψ, νₖ] (lemma 2.6). Finally, if νᵢ ≃β nᵢ (1 ≤ i ≤ k), then, by induction hypothesis, Ψ ≃β (Φ)n₁ . . . nₖ₋₁ ; therefore [Ψ, νₖ] ≃β (Φ)n₁ . . . nₖ (lemma 2.6).
Q.E.D.
Proposition 2.8. Let f₁, . . . , fₖ be partial functions from Nⁿ to N, and g a partial function from Nᵏ to N. Assume that these functions are all strongly representable by λ-terms ; then so is the composite function g(f₁, . . . , fₖ).
Choose terms Φ₁, . . . , Φₖ, Ψ which strongly represent respectively the functions f₁, . . . , fₖ, g. Then the term :
χ = λx₁ . . . λxₙ[Ψ, (Φ₁)x₁ . . . xₙ, . . . , (Φₖ)x₁ . . . xₙ]
strongly represents the composite function g(f₁, . . . , fₖ). Indeed, if p₁, . . . , pₙ are Church numerals, then :
(χ)p₁ . . . pₙ ≃β [Ψ, (Φ₁)p₁ . . . pₙ, . . . , (Φₖ)p₁ . . . pₙ].
Now each of the terms (Φᵢ)p₁ . . . pₙ (1 ≤ i ≤ k) is either unsolvable (and in that case fᵢ(p₁, . . . , pₙ) is undefined), or β-equivalent to a Church numeral qᵢ (then fᵢ(p₁, . . . , pₙ) = qᵢ). If one of the terms (Φᵢ)p₁ . . . pₙ is not solvable, then, by lemma 2.7, neither is (χ)p₁ . . . pₙ. If (Φᵢ)p₁ . . . pₙ ≃β qᵢ for all i (1 ≤ i ≤ k), where qᵢ is a Church numeral, then, by lemma 2.7, we have :
(χ)p₁ . . . pₙ ≃β (Ψ)q₁ . . . qₖ.
Q.E.D.
3. Fixed point combinators
A fixed point combinator is a closed term M such that (M)F ≃β (F)(M)F for every term F. The main point is the existence of such terms. Here are two examples :
Proposition 2.9. Let Y be the term λf(λx(f)(x)x)λx(f)(x)x ; then, for every term F, we have (Y)F ≃β (F)(Y)F.
Indeed, (Y)F ≻ (G)G, where G = λx(F)(x)x ; therefore :
(Y)F ≻ (λx(F)(x)x)G ≻ (F)(G)G ≃β (F)(Y)F.
Q.E.D.
Y is known as Curry's fixed point combinator. Note that we have neither (Y)F ≻ (F)(Y)F, nor even (Y)F β (F)(Y)F.
Proposition 2.10. Let Z be the term (A)A, where A ≡ λaλf(f)(a)af. Then, for any term F, we have (Z)F ≻ (F)(Z)F.
Indeed, (Z)F ≡ (A)AF ≻ (F)(A)AF ≡ (F)(Z)F.
Q.E.D.
Z is called Turing's fixed point combinator.
Proposition 2.11. Every fixed point combinator is solvable, but not normalizable.
Let M be a fixed point combinator and f a variable. Then :
(M)0f ≃β ((0)(M)0)f ≃β f, and it follows that M is solvable. If M is normalizable, then so is (M)f. Let M′ be the normal form of (M)f. Since (M)f ≃β (f)(M)f, it follows that M′ ≃β (f)M′. But these terms are normal, so that M′ = (f)M′, which is clearly impossible.
Q.E.D.
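Under a strict (call-by-value) language such as Python, Curry's Y as written loops forever, but its η-expanded variant, often written Z, ties the same recursive knot. A sketch of ours, not taken from the book :

```python
# An eta-expanded fixed point combinator that works under call-by-value :
# Z = λf (λx (f)λv ((x)x)v) λx (f)λv ((x)x)v.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# (Z)F behaves like (F)(Z)F : let F describe one step of a recursion.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(6))   # 720
```

The η-expansion λv ((x)x)v delays the self-application ; some such delay is unavoidable under strict evaluation, which fits with proposition 2.11 : a fixed point combinator is never normalizable, so its unfolding cannot be carried out eagerly.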
Representation of functions defined by minimization
The following lemma is an application of results in chapter 4.
Lemma 2.12. Let b, t₀, t₁ be terms, and suppose b ≃β 1 (resp. 0). Then (b)t₀t₁ ≻w t₀ (resp. t₁).
Recall that 1, 0 are respectively the booleans λxλy x and λxλy y, and that ≻w denotes the weak head reduction (see page 30).
This lemma is the particular case of theorem 4.11, when k = 2 and n = 0. Q.E.D.
Lemma 2.13. There exists a closed term ∆ such that, for all terms Φ, n :
(∆Φ)n ≻ ((Φn)(∆Φ)(suc)n)n.
Let T = λδλϕλν((ϕν)(δϕ)(suc)ν)ν. Then ∆ is defined as a fixed point of T, by means, for example, of Curry's fixed point combinator : we take ∆ = (D)D, where D = λx(T)(x)x. Then :
(∆Φ)n = (D)DΦn ≻ ((T)(D)D)Φn = (T)∆Φn ≻ ((Φn)(∆Φ)(suc)n)n.
We can also take ∆ = D′D′, where D′ is the normal form of D, that is :
D′ = λxλϕλν((ϕν)(xxϕ)(suc)ν)ν.
The Turing fixed point combinator gives another solution : ∆ = (A)AT with A = λaλf(f)(a)af.
Q.E.D.
Lemma 2.14. Let Φ be a λ-term and n ∈ N. If Φn is not solvable, then neither is (∆Φ)n. If Φn ≃β 0 (Boolean), then (∆Φ)n ≃β n. If Φn̂ ≃β 1 (Boolean), then (∆Φ)n̂ ≻ (∆Φ)p̂ with p = n + 1.
(Recall that n̂ = (suc)ⁿ 0.)
Indeed, it follows from lemma 2.13 that (∆Φ)n ≻ ((Φn)(∆Φ)(suc)n)n. Hence, if Φn is not solvable, then neither is (∆Φ)n (remark 3, page 31). Obviously, if Φn ≃β 0 (Boolean), then (∆Φ)n ≃β n. On the other hand, according to the same lemma, we also have :
(∆Φ)n̂ ≻ ((Φn̂)(∆Φ)(suc)n̂)n̂ ; by lemma 2.12, if Φn̂ ≃β 1 (Boolean), then :
((Φn̂)(∆Φ)(suc)n̂)n̂ ≻ (∆Φ)(suc)n̂.
Therefore (∆Φ)n̂ ≻ (∆Φ)(suc)n̂ = (∆Φ)p̂ with p = n + 1.
Q.E.D.
Proposition 2.15. Let f(n₁, . . . , nₖ, n) be a partial function from Nᵏ⁺¹ to N, and suppose that it is strongly representable by a term of the λ-calculus. Then the partial function defined by g(n₁, . . . , nₖ) = µn{f(n₁, . . . , nₖ, n) = 0} is also strongly representable.
Let ψ be the partial function from Nᵏ⁺¹ to {0, 1} which has the same domain as f, and such that ψ(n₁, . . . , nₖ, n) = 0 ⇔ f(n₁, . . . , nₖ, n) = 0. Then g(n₁, . . . , nₖ) = µn{ψ(n₁, . . . , nₖ, n) = 0}.
Let F denote a λ-term which strongly represents f ; consider the term :
Ψ = λx₁ . . . λxₖλx((Fx₁ . . . xₖx)λd 1)0.
Then it is easily seen that Ψ strongly represents ψ. Now consider the term ∆ constructed above (lemma 2.13). We show that the term :
G = λx₁ . . . λxₖ((∆)(Ψ)x₁ . . . xₖ)0
strongly represents the function g. Indeed, let n₁, . . . , nₖ ∈ N ; we put Φ = (Ψ)n₁ . . . nₖ, and therefore we get Gn₁ . . . nₖ ≻ (∆Φ)0.
If g(n₁, . . . , nₖ) is defined and equal to p, then ψ(n₁, . . . , nₖ, n) is defined and equal to 1 for n < p and to 0 for n = p. Thus Φn = (Ψ)n₁ . . . nₖn ≃β 1 for n < p, and Φp = (Ψ)n₁ . . . nₖp ≃β 0. Now we can apply lemma 2.14, and we get successively (since 0 = 0̂) :
Gn₁ . . . nₖ ≻ (∆Φ)0 ≻ (∆Φ)1̂ ≻ · · · ≻ (∆Φ)p̂ ≃β p.
If g(n₁, . . . , nₖ) is undefined, there are two possibilities :
i) ψ(n₁, . . . , nₖ, n) is defined and equal to 1 for n < p and is undefined for n = p. Then we can successively deduce from lemma 2.14 (since 0 = 0̂) :
Gn₁ . . . nₖ ≻ (∆Φ)0 ≻ (∆Φ)1̂ ≻ · · · ≻ (∆Φ)p̂ ;
the last term obtained is not solvable, since neither is Φp = Ψn₁ . . . nₖp (lemma 2.14). Consequently, Gn₁ . . . nₖ is not solvable (theorem 2.3, iii) ;
ii) ψ(n₁, . . . , nₖ, n) is defined and equal to 1 for all n. Then (again by lemma 2.14) :
Gn₁ . . . nₖ ≻ (∆Φ)0 ≻ (∆Φ)1̂ ≻ · · · ≻ (∆Φ)n̂ ≻ · · ·
So the head reduction of Gn₁ . . . nₖ does not end. Therefore, by theorem 2.3, Gn₁ . . . nₖ is not solvable.
Q.E.D.
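The search performed by ∆ can be mirrored in Python. Since Python is strict, the two branches selected by the boolean are passed as thunks ; the sample function below is hypothetical, and a search with no zero simply loops, the analogue of unsolvability :

```python
# Behavioural sketch of the minimization of proposition 2.15. Booleans
# choose between two delayed branches ; delta retries with n+1 until
# psi(n) selects the "keep n" branch.
TRUE = lambda t0: lambda t1: t0()    # boolean 1 = λx λy x, on thunks
FALSE = lambda t0: lambda t1: t1()   # boolean 0 = λx λy y, on thunks

def delta(phi):
    # (ΔΦ)n ≻ ((Φn)(ΔΦ)(suc)n)n : boolean 1 retries with n+1, boolean 0 keeps n
    return lambda n: phi(n)(lambda: delta(phi)(n + 1))(lambda: n)

# ψ(a, n) = 0 iff n*n >= a  (a hypothetical sample function of ours)
psi = lambda a: lambda n: FALSE if n * n >= a else TRUE
g = lambda a: delta(psi(a))(0)       # g(a) = µn { ψ(a, n) = 0 }
print(g(10))   # 4, the least n with n*n >= 10
```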
It is intuitively clear, according to Church's thesis, that any partial function from Nᵏ to N which is representable by a λ-term is partial recursive. We shall not give a formal proof of this fact. So we can state the :
Theorem 2.16 (Church-Kleene theorem). The partial functions from Nᵏ to N which are representable (resp. strongly representable) by a term of the λ-calculus are the partial recursive functions.
The λ-terms which represent a given partial recursive function, that we obtain by this method, are in general not normal, and not even normalizable. Indeed, in the proof of lemma 2.13, we use a fixed point combinator, which is never a normalizable term (proposition 2.11). Let us show that we can nevertheless obtain normal terms.
Lemma 2.17. Let x be a variable and t ∈ Λ. Then there exists a normal term t′ such that t[n/x] ≃β t′[n/x] for every n ∈ N.
We define t′ by induction on the length of t : if t is a variable, then t′ = t ; if t = λy u, then t′ = λy u′ ; if t = uv, then t′ = (x)Iu′v′ (with I = λy y).
It is easy to show, by induction on the length of t, that t′ is normal and that t[n/x] ≃β t′[n/x] for every n ∈ N. We simply have to observe that (n)I ≃β I if n ∈ N.
Q.E.D.
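The transform t ↦ t′ of lemma 2.17 is a one-pass traversal ; on a small term AST it reads as follows (a sketch of ours ; the datatype and names are not the book's) :

```python
# t' of lemma 2.17 : each application (u)v becomes (x)I u'v', which is
# normal (its head is the free variable x) yet recovers (u')v' once a
# Church numeral n is substituted for x, since (n)I ≃β I.
from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: var: str; body: object
@dataclass
class App: fun: object; arg: object

I = Lam("y", Var("y"))

def prime(t, x="x"):
    if isinstance(t, Var):
        return t                               # variables are unchanged
    if isinstance(t, Lam):
        return Lam(t.var, prime(t.body, x))    # go under the binder
    # application case : (u)v  ~>  (((x)I) u') v'
    return App(App(App(Var(x), I), prime(t.fun, x)), prime(t.arg, x))

print(prime(App(Var("a"), Var("b"))))
```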
Corollary 2.18. For every partial recursive function ϕ, there exists a normal term which (strongly) represents ϕ.
For simplicity, we suppose ϕ to be a unary function. Let Φ be a closed λ-term which strongly represents ϕ (theorem 2.16) and put t = Φx. Then Ψ = λx t′ is normal, by lemma 2.17, and strongly represents ϕ : indeed, if n ∈ N, we have Ψn ≃β t′[n/x] ≃β t[n/x] = Φn.
Q.E.D.
4. The second fixed point theorem
Consider a recursive enumeration : n ↦ tₙ of the terms of the λ-calculus. The inverse function will be denoted by t ↦ [[t]] : more precisely, if t is a λ-term,
then [[t]] is the Church numeral n such that tₙ = t, which will be called the numeral of t. The function n ↦ [[(tₙ)n]] is thus recursive, from N to the set of Church numerals. By theorem 2.16, there exists a term δ such that (δ)n ≃β [[(tₙ)n]], for every integer n.
Now, given an arbitrary term F, let B = λx(F)(δ)x. Then, for any integer n, we have (B)n ≃β (F)[[(tₙ)n]]. Take n = [[B]], that is to say tₙ = B ; then (tₙ)n = (B)[[B]]. If we denote the term (B)[[B]] by A, we obtain A ≃β (F)[[A]]. So we have proved the :
Theorem 2.19. For every λ-term F, there exists a λ-term A such that A ≃β (F)[[A]].
Remark. The intuitive meaning of theorem 2.19 is that we can write, as ordinary λ-terms, programs using a new instruction σ (for “self”), which denotes the numeral of the program itself. Indeed, if such a program is written as Φ[σ/x], where Φ is a λ-term, consider the λ-terms F = λx Φ, and A given by theorem 2.19. Then we have A ≃β (F)[[A]] and therefore A ≃β Φ[[[A]]/x] ; thus A is the λ-term we are looking for.
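Theorem 2.19 is the λ-calculus form of Kleene's second recursion theorem, and its proof transposes directly to Python source strings, with eval in the role of the enumeration n ↦ tₙ (a standard quine construction of ours, not Krivine's) :

```python
# From any transformation F of program texts, build the text A of a
# program whose value is F applied to A's own source : eval(A) == F(A).
def fixed_point_program(F):
    # src plays the role of the term B = λx (F)(δ)x, roughly
    src = 'lambda x: F(f"({x})({x!r})")'
    return f"({src})({src!r})"                # A = B applied to its own code

F = lambda text: f"my own source has length {len(text)}"
A = fixed_point_program(F)
assert eval(A, {"F": F}) == F(A)              # A "knows" its own text
print(eval(A, {"F": F}))
```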
Theorem 2.20. Let X, Y be two nonempty disjoint sets of terms, which are saturated under the equivalence relation ≃β. Then X and Y are recursively inseparable.
Suppose that X and Y are recursively separable. This means that there exists a recursive set 𝒜 ⊂ Λ such that X ⊂ 𝒜 and Y ⊂ 𝒜ᶜ (the complement of 𝒜). By assumption, there exist terms ξ and η such that ξ ∈ X and η ∈ Y. Since the characteristic function of 𝒜 is recursive, there is a term Θ such that, for every integer n : (Θ)n ≃β 1 ⇔ tₙ ∈ 𝒜 and (Θ)n ≃β 0 ⇔ tₙ ∉ 𝒜.
Now let F = λx(Θ)xηξ. According to theorem 2.19, there exists a term A such that (F)[[A]] ≃β A, which implies (Θ)[[A]]ηξ ≃β A.
If A ∈ 𝒜, then, by the definition of Θ, (Θ)[[A]] ≃β 1, and it follows that (Θ)[[A]]ηξ ≃β η. Therefore A ≃β η. Since η ∈ Y ⊂ 𝒜ᶜ and Y is saturated under the equivalence relation ≃β, we conclude that A ∈ Y, thus A ∉ 𝒜, which is a contradiction.
Similarly, if A ∉ 𝒜, then (Θ)[[A]] ≃β 0, hence (Θ)[[A]]ηξ ≃β ξ, and A ≃β ξ. Since ξ ∈ X ⊂ 𝒜 and X is saturated under the equivalence relation ≃β, we conclude that A ∈ X, thus A ∈ 𝒜, which is again a contradiction.
Q.E.D.
Corollary 2.21. The set of normalizable (resp. solvable) λ-terms is not recursive.
Apply theorem 2.20 : take X as the set of normalizable (resp. solvable) terms, and Y = Xᶜ.
Q.E.D.
The same method shows that, for instance, the set of λ-terms which are β-equivalent to a Church numeral, or the set of λ-terms which are β-equivalent to a given term t₀, are not recursive. The set of strongly normalizable λ-terms is also not recursive but, since it is not closed under β-equivalence, the above method does not work to prove this. The undecidability of strong normalization will be proved in chapter 10.
References for chapter 2
[Bar84], [Hin86]. (The references are in the bibliography at the end of the book.)
Chapter 3
Intersection type systems
1. System DΩ
A type system is a class of formulas in some language, the purpose of which is to express some properties of λ-terms. By introducing such formulas, as comments in the terms, we construct what we call typed terms, which correspond to programs in a high level programming language.
The main connective in these formulas is “ → ”, the type A → B being that of the “ functions ” from A to B, that is to say from the set of terms of type A to the set of terms of type B.
The first type system which we shall examine consists of propositional formulas. It uses the conjunction ∧ in a very special way (this is why it is called an intersection type system). It does not seem that this system can be used as a model for a programming language. However, it is very useful as a tool for studying pure λ-calculus. We will call it system DΩ.
The types of this system are the formulas built with : a constant Ω (type constant) ; variables X, Y, . . . (type variables) ; the connectives → and ∧.
We will write A₁, A₂, . . . , Aₖ → A instead of A₁ → (A₂ → (. . . (Aₖ → A) . . .)).
The positive and negative occurrences of a variable X in a type A are defined by induction on the length of A :
if A is a variable, or A = Ω, then the possible occurrence of X in A is positive ;
if A = B ∧ C, then any positive (resp. negative) occurrence of X in B or in C is positive (resp. negative) in A ;
if A = B → C, then the positive (resp. negative) occurrences of X in A are the positive (resp. negative) occurrences of X in C, and the negative (resp. positive) occurrences of X in B.
We also define the final occurrences of the variable X in the type A :
if A is a variable, or A = Ω, then the possible occurrence of X in A is final ;
if A = B ∧ C, then the final occurrences of X in A are its final occurrences in B and its final occurrences in C ;
if A = B → C, then the final occurrences of X in A are its final occurrences in C.
Hence every final occurrence of a variable in a type is positive.
By a variable declaration, we mean an ordered pair (x, A), where x is a variable of the λ-calculus and A is a type. It will be denoted by x : A instead of (x, A).
A context Γ is a mapping from a finite set of variables to the set of all types. Thus it is a finite set {x₁ : A₁, . . . , xₖ : Aₖ} of variable declarations, where x₁, . . . , xₖ are distinct variables ; we will denote it by x₁ : A₁, . . . , xₖ : Aₖ (without the braces). So, in such an expression, the order does not matter. We will say that xᵢ is declared of type Aᵢ in the context Γ. The integer k may be 0 ; in that case, we have the empty context.
We will write Γ, x : A in order to denote the context obtained by adding the declaration x : A to the context Γ, provided that x is not already declared in Γ.
Given a λ-term t, a type A, and a context Γ, we define, by means of the following rules, the notion : t is of type A in the context Γ (we will also say : “ t may be given type A in the context Γ ”) ; this will be denoted by Γ ⊢DΩ t : A (or Γ ⊢ t : A if there is no ambiguity) :
1. If x is a variable, then Γ, x : A ⊢DΩ x : A.
2. If Γ, x : A ⊢DΩ t : B, then Γ ⊢DΩ λx t : A → B.
3. If Γ ⊢DΩ t : A → B and Γ ⊢DΩ u : A, then Γ ⊢DΩ (t)u : B.
4. If Γ ⊢DΩ t : A ∧ B, then Γ ⊢DΩ t : A and Γ ⊢DΩ t : B.
5. If Γ ⊢DΩ t : A and Γ ⊢DΩ t : B, then Γ ⊢DΩ t : A ∧ B.
6. Γ ⊢DΩ t : Ω (for all t and Γ).
Any expression of the form Γ ⊢DΩ t : A obtained by means of these rules will be called a typing of t in system DΩ. A typable term is a term which may be given some type in some context.
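The three occurrence notions are small recursions on the type, and can be checked mechanically (a sketch on a hypothetical type AST of our own design) :

```python
# Positive/negative and final occurrences of a type variable in a type.
from dataclasses import dataclass

@dataclass
class TVar: name: str                       # a type variable X, Y, ...
@dataclass
class Omega: pass                           # the constant Ω
@dataclass
class Arrow: left: object; right: object    # B → C
@dataclass
class And: left: object; right: object      # B ∧ C

def occurrences(A, X, sign=+1):
    """Yield +1 (positive) or -1 (negative) for each occurrence of X in A."""
    if isinstance(A, TVar):
        if A.name == X:
            yield sign
    elif isinstance(A, And):
        yield from occurrences(A.left, X, sign)
        yield from occurrences(A.right, X, sign)
    elif isinstance(A, Arrow):
        yield from occurrences(A.left, X, -sign)   # left of → flips the sign
        yield from occurrences(A.right, X, sign)

def has_final(A, X):
    """Final occurrences : through ∧ on both sides, through → on the right."""
    if isinstance(A, TVar):
        return A.name == X
    if isinstance(A, And):
        return has_final(A.left, X) or has_final(A.right, X)
    if isinstance(A, Arrow):
        return has_final(A.right, X)
    return False

# In (X → Y) → X, both occurrences of X are positive, and the last is final :
A = Arrow(Arrow(TVar("X"), TVar("Y")), TVar("X"))
print(list(occurrences(A, "X")), list(occurrences(A, "Y")))   # [1, 1] [-1]
```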
The notation ⊢DΩ t : A will mean that t is of type A in the empty context. Note that, because of rule 6, there are terms which are typable in the context Γ while not all of their free variables are declared in that context.
Proposition 3.1. Suppose Γ ⊢DΩ t : A, and let Γ′ ⊂ Γ contain all those declarations in Γ which concern variables occurring free in t. Then Γ′ ⊢DΩ t : A.
The proof is immediate, by induction on the number of rules used to obtain Γ ⊢DΩ t : A.
Q.E.D.
Lemma 3.2. If Γ, x : F ⊢DΩ t : A, then, for every variable x′ which is not declared in Γ and not free in t, we have Γ, x′ : F ⊢DΩ t[x′/x] : A, and the length of the derivation is the same for both typings.
We consider the derivation of Γ, x : F ⊢DΩ t : A, and we perform on it an arbitrary permutation of variables. Obviously, we obtain a correct derivation in DΩ. Now we choose the permutation which swaps x and x′ and does not change any other variable. Since x′ is not declared in Γ, we obtain a derivation of Γ, x′ : F ⊢DΩ t[x′/x, x/x′] : A. But x′ is not free in t, and therefore t[x′/x, x/x′] = t[x′/x].
Q.E.D.
Proposition 3.3. If Γ ⊢DΩ t : A and Γ′ ⊃ Γ, then Γ′ ⊢DΩ t : A.
Proof by induction on the length of the derivation of Γ ⊢DΩ t : A. Consider the last rule used in this derivation. If it is one of the rules 1, 3, 4, 5, 6, then the induction step is immediate. If it is rule 2, then t = λx u, A = B → C, and we have Γ, x : B ⊢DΩ u : C. Let x′ be any variable not declared in Γ′ and not free in u. By lemma 3.2, we get Γ, x′ : B ⊢DΩ u[x′/x] : C, and the derivation has the same length. By induction hypothesis, we get Γ′, x′ : B ⊢DΩ u[x′/x] : C. Therefore Γ′ ⊢DΩ λx′ u[x′/x] : B → C by rule 2. But, since x′ is not free in u, we have λx′ u[x′/x] = λx u = t, and therefore Γ′ ⊢DΩ t : A.
Q.E.D.
Normalization theorems
Since types can be thought of as properties of λ-terms, it seems natural to try and associate with each type a subset of Λ (the set of all λ-terms). We shall now describe a way of doing this.
Given any two subsets X and Y of Λ, we denote by X → Y the subset of Λ defined by the following condition :
u ∈ (X → Y) ⇔ (u)t ∈ Y for all t ∈ X.
Obviously : if X ⊃ X′ and Y ⊂ Y′, then (X → Y) ⊂ (X′ → Y′).
A subset X of Λ is said to be saturated if and only if, for all terms t, t₁, . . . , tₙ, u, we have :
(u[t/x])t₁ . . . tₙ ∈ X ⇒ (λx u)tt₁ . . . tₙ ∈ X.
The intersection of any set of saturated subsets of Λ is clearly saturated. Also clear is the fact that, for any subset X of Λ, the set of terms which reduce to an element of X by leftmost reduction is saturated. Similarly, the set of terms which reduce to an element of X by head reduction is saturated.
Proposition 3.4. Let Y be a saturated subset of Λ ; then X → Y is saturated for all X ⊂ Λ.
Assume (u[t/x])t₁ . . . tₙ ∈ X → Y ; then, for all v in X, (u[t/x])t₁ . . . tₙv ∈ Y, and, since Y is saturated, (λx u)tt₁ . . . tₙv ∈ Y. Therefore (λx u)tt₁ . . . tₙ ∈ X → Y.
Q.E.D.
An interpretation I is, by definition, a function which associates with each type variable X a saturated subset of Λ, denoted by |X|I (or |X| if there is no ambiguity). Given such a function, we can extend it and associate with each type A a saturated subset of Λ, denoted by |A|I (or simply |A|), defined as follows, by induction on the length of A :
if A is a type variable, then |A| is given with the interpretation I ;
|Ω| = Λ ;
if A = B → C, then |A| = |B| → |C| ;
if A = B ∧ C, then |A| = |B| ∩ |C|.
Lemma 3.5 (Adequacy lemma). Let I be an interpretation, and u a λ-term, such that x₁ : A₁, . . . , xₖ : Aₖ ⊢DΩ u : A. If t₁ ∈ |A₁|I , . . . , tₖ ∈ |Aₖ|I , then u[t₁/x₁, . . . , tₖ/xₖ] ∈ |A|I .
The proof proceeds by induction on the number of rules used to obtain the typing of u. Consider the last one :
If it is rule 1, then u is one of the variables xᵢ, and A = Aᵢ ; in that case u[t₁/x₁, . . . , tₖ/xₖ] = tᵢ, and the conclusion is immediate.
If it is rule 2, then A = B → C and u = λx v. We can assume that x does not occur free in t₁, . . . , tₖ and is different from x₁, . . . , xₖ ; moreover :
x : B, x₁ : A₁, . . . , xₖ : Aₖ ⊢DΩ v : C.
By induction hypothesis, v[t/x, t₁/x₁, . . . , tₖ/xₖ] ∈ |C| holds for every t ∈ |B|. But it then follows from our assumptions about x that :
v[t/x, t₁/x₁, . . . , tₖ/xₖ] = v[t₁/x₁, . . . , tₖ/xₖ][t/x].
Then we have (λx v[t₁/x₁, . . . , tₖ/xₖ])t ∈ |C|, since |C| is saturated. Now this holds for all t ∈ |B|, so λx v[t₁/x₁, . . . , tₖ/xₖ] ∈ (|B| → |C|) = |A|.
If it is rule 3, then u = (w)v, where w is of type B → A and v is of type B in the context x₁ : A₁, . . . , xₖ : Aₖ. By induction hypothesis, we have :
w[t₁/x₁, . . . , tₖ/xₖ] ∈ |B → A| and v[t₁/x₁, . . . , tₖ/xₖ] ∈ |B|, thus :
(w[t₁/x₁, . . . , tₖ/xₖ])v[t₁/x₁, . . . , tₖ/xₖ] ∈ |A|.
If it is rule 4, then we know that a previous typing of u gave it the type A ∧ B (or B ∧ A), in the same context. By induction hypothesis :
u[t₁/x₁, . . . , tₖ/xₖ] ∈ |A ∧ B| = |A| ∩ |B|, and therefore u[t₁/x₁, . . . , tₖ/xₖ] ∈ |A|.
If it is rule 5, then A = B ∧ C, and, by previous typings (in the same context), u is of type B as well as of type C. By induction hypothesis, we have u[t₁/x₁, . . . , tₖ/xₖ] ∈ |B|, |C|, and therefore u[t₁/x₁, . . . , tₖ/xₖ] ∈ |B ∧ C|.
If it is rule 6, then the result is obvious.
Q.E.D.
A type A is said to be trivial if no variable has a final occurrence in A. (For example, A → Ω ∧ (B → Ω) is a trivial type, for all A and B.) The trivial types are those obtained by applying the following rules :
Ω is trivial ;
if A is trivial, then B → A is trivial for every B ;
if A, B are trivial, then so is A ∧ B.
As an immediate consequence, we have : if A is a trivial type, then its value |A|I under any interpretation I is the whole set Λ.
Lemma 3.6. Let N₀, N be subsets of Λ, with the following properties :
N is saturated, N₀ ⊂ N, N₀ ⊂ (Λ → N₀), N ⊃ (N₀ → N).
Let I be the interpretation such that |X|I = N for every type variable X. Then |A|I ⊃ N₀ for every type A, and |A|I ⊂ N for every non-trivial type A.
We first prove, by induction on A, that |A|I ⊃ N₀ ; this is obvious whenever A is a type variable, or A = Ω, or A = B ∧ C. If A = B → C, then |A| = |B| → |C|, and |B| ⊂ Λ, |C| ⊃ N₀ (induction hypothesis) ; hence |A| ⊃ Λ → N₀, and since it has been assumed that Λ → N₀ ⊃ N₀, we have |A| ⊃ N₀.
Now we prove, by induction on A, that |A| ⊂ N for every non-trivial type A. The result is immediate whenever A is a type variable, or A = Ω, or A = B ∧ C. If A = B → C, then C is not trivial ; we have |A| = |B| → |C|, |B| ⊃ N₀ (this has just been proved), and |C| ⊂ N (induction hypothesis). Hence |A| ⊂ (N₀ → N), and since we assumed that (N₀ → N) ⊂ N, we can conclude that |A| ⊂ N.
Q.E.D.
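The inductive characterization of trivial types is a direct recursion ; on a tuple-encoded type AST of our own it reads as follows (a sketch) :

```python
# Types as tuples : ("omega",), ("var", "X"), ("arrow", B, C), ("and", B, C).
def is_trivial(A):
    tag = A[0]
    if tag == "omega":
        return True
    if tag == "arrow":                  # B → A is trivial iff A is
        return is_trivial(A[2])
    if tag == "and":                    # A ∧ B is trivial iff both are
        return is_trivial(A[1]) and is_trivial(A[2])
    return False                        # a type variable is never trivial

# A → Ω ∧ (B → Ω) is trivial, whatever A and B are :
ex = ("arrow", ("var", "A"),
      ("and", ("omega",), ("arrow", ("var", "B"), ("omega",))))
print(is_trivial(ex))   # True
```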
Theorem 3.7 (Head normal form theorem). Let t be a term which is typable with a non-trivial type A, in system DΩ. Then the head reduction of t is finite.
The converse of this theorem is true and will be proved later (theorem 4.9).
Let N₀ = {(x)v₁ . . . vₚ ; x is a variable, v₁, . . . , vₚ ∈ Λ} and N = {t ∈ Λ ; the head reduction of t is finite}.
Lemma 3.8. N₀ and N satisfy the hypotheses of lemma 3.6.
Clearly, N₀ ⊂ N and N₀ ⊂ Λ → N₀. Also, N is saturated : indeed, if (u[t/x])t₁ . . . tₙ has a finite head reduction, then the head reduction of (λx u)tt₁ . . . tₙ is also finite.
We now prove that N ⊃ N₀ → N : let u ∈ N₀ → N ; then, for any variable x, (u)x has a finite head reduction (since x ∈ N₀). Suppose that the head reduction of u is infinite, namely u, u₁, . . . , uₙ, . . . Then there is an n such that uₙ starts with λ ; otherwise the head reduction of (u)x would be (u)x, (u₁)x, . . . , (uₙ)x, . . ., which is infinite. Let k be the least integer such that uₖ starts with λ ; for instance uₖ = λy vₖ, and then uₙ = λy vₙ for every n ≥ k. Thus the head reduction of vₖ is vₖ, vₖ₊₁, . . . Therefore, the head reduction of (u)x is :
(u)x, (u₁)x, . . . , (uₖ)x, vₖ[x/y], vₖ₊₁[x/y], . . .
Again, it is infinite and we have a contradiction.
Q.E.D.
Now we can prove theorem 3.7 : let t be a term which is typable with a non-trivial type A in the context x₁ : A₁, . . . , xₖ : Aₖ. Consider the interpretation I such that |X|I = N for every type variable X. It follows from the adequacy lemma that, whenever aᵢ ∈ |Aᵢ|I , we have t[a₁/x₁, . . . , aₖ/xₖ] ∈ |A|I . By lemma 3.6, |Aᵢ|I ⊃ N₀, so all variables are in |Aᵢ|I , and therefore t ∈ |A|I . Also by lemma 3.6, |A|I ⊂ N ; thus t ∈ N and the head reduction of t is finite.
Q.E.D.
An ordered pair (N₀, N) of subsets of Λ is said to be adapted if it satisfies the following properties :
i) N is saturated ;
ii) N₀ ⊂ N ; N₀ ⊂ (N → N₀) ; (N₀ → N) ⊂ N.
An equivalent way of stating condition (ii) is :
ii′) N₀ ⊂ (N → N₀) ⊂ (N₀ → N) ⊂ N.
Indeed, the inclusion (N → N₀) ⊂ (N₀ → N) is an immediate consequence of N₀ ⊂ N.
Lemma 3.9. Let (N₀, N) be an adapted pair, and I an interpretation such that, for every type variable X, |X|I is a saturated subset of N containing N₀. Then,
for every type A with no negative (resp. positive) occurrence of the symbol Ω, we have the inclusion |A|I ⊃ N₀ (resp. |A|I ⊂ N).
The proof is by induction on A. The conclusion is immediate whenever A is a type variable or A = Ω.
If A = B ∧ C, and if there is no negative (resp. positive) occurrence of Ω in A, then the situation is the same in B and in C. Therefore, by induction hypothesis, we have |B|I , |C|I ⊃ N₀ (resp. ⊂ N). Thus |B ∧ C|I = |B|I ∩ |C|I ⊃ N₀ (resp. ⊂ N).
If A = B → C, and if Ω has no negative occurrence in A, then Ω has no positive (resp. negative) occurrence in B (resp. C). By induction hypothesis, |B|I ⊂ N and |C|I ⊃ N₀. Hence |B|I → |C|I ⊃ N → N₀. Since (N₀, N) is an adapted pair, we have N → N₀ ⊃ N₀, and therefore |A|I ⊃ N₀.
If A = B → C and Ω has no positive occurrence in A, then Ω has no negative (resp. positive) occurrence in B (resp. C). By induction hypothesis, |B|I ⊃ N₀ and |C|I ⊂ N. Therefore |B|I → |C|I ⊂ N₀ → N. Now (N₀, N) is an adapted pair, so N₀ → N ⊂ N, and, finally, |A|I ⊂ N.
Q.E.D.
Now we shall prove that the pair (N₀, N) defined below is adapted :
N is the set of all terms which are normalizable by leftmost β-reduction : namely, we have t ∈ N if and only if the sequence obtained from t by leftmost β-reduction ends with a normal term.
N₀ is the set of all terms of the form (x)t₁ . . . tₙ, where t₁, . . . , tₙ ∈ N and x is a variable. In particular, all variables are in N₀ (take n = 0).
We now check conditions (i) and (ii) in the definition of adapted pairs (page 46) :
i) N is saturated : clearly, if (u[t/x])t₁ . . . tₙ is normalizable by leftmost β-reduction, then so is (λx u)tt₁ . . . tₙ.
ii) N₀ ⊂ N : if t ∈ N₀, then t = (x)t₁ . . . tₙ for some variable x, and t₁, . . . , tₙ are all normalizable by leftmost β-reduction ; thus t clearly has the same property. The inclusion N₀ ⊂ (N → N₀) is obvious.
Now we come to (N₀ → N) ⊂ N : let t ∈ N₀ → N and x be some variable not occurring in t ; since x ∈ N₀, (t)x ∈ N, thus (t)x is normalizable by leftmost β-reduction. We need to prove that the same property holds for t ; this is done by induction on the length of the normalization of (t)x by leftmost β-reduction.
If t does not start with λ, then the first step of this normalization is a leftmost β-reduction in t, which produces a term t′ ; thus the term (t′)x has a normalization by leftmost β-reduction which is shorter than that of (t)x. Hence, by induction hypothesis, t′ is normalizable by leftmost β-reduction, and therefore so is t.
If t = λy u, then the first leftmost β-reduction in (t)x produces the term u[x/y], which is therefore normalizable by leftmost β-reduction. Hence u satisfies the
same property, and so does t = λy u : if u = u₀, u₁, . . . , uₙ is the normalization of u by leftmost β-reduction, then that of λy u is λy u, λy u₁, . . . , λy uₙ.
Theorem 3.10 (Normalization theorem). Let t be a typable term in system DΩ, of type A in the context x₁ : A₁, . . . , xₖ : Aₖ. Suppose that the symbol Ω has no positive occurrence in A, and no negative occurrence in A₁, . . . , Aₖ. Then t is normalizable by leftmost β-reduction.
Define an interpretation I by taking |X|I = N for every type variable X. It follows from lemma 3.9 that |Aᵢ|I ⊃ N₀ ; now xᵢ ∈ N₀ (by definition of N₀), thus xᵢ ∈ |Aᵢ|I ; by the adequacy lemma, we have :
t = t[x₁/x₁, . . . , xₖ/xₖ] ∈ |A|I .
Now, by lemma 3.9, |A|I ⊂ N, and therefore t ∈ N.
Q.E.D.
The converse of this theorem will be proved later (theorem 4.13).
Corollary 3.11. Suppose that x₁ : A₁, . . . , xₖ : Aₖ ⊢DΩ t : A, and Ω does not occur in A, A₁, . . . , Aₖ. Then t is normalizable by leftmost β-reduction.
An infinite quasi leftmost reduction of a term t ∈ Λ is an infinite sequence of terms t = t₀, t₁, . . . , tₙ, . . . such that :
for every n ≥ 0, tₙ β₀ tₙ₊₁ (tₙ₊₁ is obtained by reducing a redex in tₙ) ;
for every n ≥ 0, there exists a p ≥ n such that tₚ₊₁ is obtained by reducing the leftmost redex in tₚ.
We can state a strengthened normalization theorem :
Theorem 3.12 (Quasi leftmost normalization theorem). Suppose x₁ : A₁, . . . , xₖ : Aₖ ⊢DΩ t : A, and Ω does not occur in A, A₁, . . . , Aₖ. Then there is no infinite quasi leftmost reduction of t.
In order to prove it, we again define an adapted pair (N₀, N) :
N is the set of all terms which do not admit an infinite quasi leftmost reduction ;
N₀ is the set of all terms of the form (x)t₁ . . . tₙ, where x is some variable and t₁, . . . , tₙ ∈ N. In particular, all variables are in N₀ (take n = 0).
We check conditions (i) and (ii) of the definition of adapted pairs (page 46) :
i) N is saturated : given (λx u)tt₁ . . . tₙ = τ₀, we assume the existence of an infinite quasi leftmost β-reduction τ₀, τ₁, . . . , τₙ, . . ., and we prove that (u[t/x])t₁ . . . tₙ ∉ N, by induction on the least integer k such that τₖ₊₁ is obtained from τₖ by reducing the leftmost redex.
If k = 0, then τ₁ = (u[t/x])t₁ . . . tₙ, and therefore this term admits an infinite quasi leftmost β-reduction. If k > 0, then τ₁ is obtained by a reduction either in u, or in t, t₁, . . . , tₙ, so it can be written τ₁ = (λx u′)t′t′₁ . . . t′ₙ (with either
u = u′ or u β₀ u′, and the same for t, t₁, . . . , tₙ). Now the induction hypothesis applies to τ₁ (since the integer corresponding to its quasi leftmost β-reduction is k − 1), so (u′[t′/x])t′₁ . . . t′ₙ ∉ N. But we have (u[t/x])t₁ . . . tₙ β (u′[t′/x])t′₁ . . . t′ₙ, and therefore there exists an infinite quasi leftmost β-reduction for the term (u[t/x])t₁ . . . tₙ.
ii) N₀ ⊂ N : let τ ∈ N₀, say τ = (x)t₁ . . . tₙ, where t₁, . . . , tₙ ∈ N and x is some variable. Suppose that τ admits an infinite quasi leftmost β-reduction, say τ = τ₀, τ₁, . . . , τₖ, . . . ; then τₖ = (x)t₁ᵏ . . . tₙᵏ, with either tᵢᵏ = tᵢᵏ⁺¹ or tᵢᵏ β₀ tᵢᵏ⁺¹. Clearly, there exists i ≤ n such that tᵢᵏ contains the leftmost redex of τₖ for every large enough k. Hence tᵢ admits an infinite quasi leftmost β-reduction, contradicting our assumption.
The inclusion N₀ ⊂ (N → N₀) is obvious. It remains to prove that (N₀ → N) ⊂ N : let τ ∈ N₀ → N and x be a variable which does not occur in τ ; since x ∈ N₀, (τ)x ∈ N. If τ admits an infinite quasi leftmost β-reduction, say τ = τ₀, τ₁, . . . , τₖ, . . ., then so does (τ)x (contradicting the definition of N) : indeed, if none of the τₙ's starts with λ, then (τ₀)x, (τ₁)x, . . . , (τₖ)x, . . . is an infinite quasi leftmost β-reduction of (τ)x. If τₖ = λy τ′ₖ, then τ′ₖ admits an infinite quasi leftmost reduction, and so does τ′ₖ[x/y]. Hence (τ₀)x, (τ₁)x, . . . , (τₖ)x, τ′ₖ[x/y] is an initial segment of an infinite quasi leftmost reduction of the term (τ)x.
Now the end of the proof of the quasi leftmost normalization theorem 3.12 is the same as that of the normalization theorem 3.10.
Q.E.D.
The following theorem is another application of the same method.
Theorem 3.13. Suppose x₁ : A₁, . . . , xₖ : Aₖ ⊢DΩ t : A, and Ω does not occur in A, A₁, . . . , Aₖ. Then there exists a βη-normal term u such that, if t βη t′ for some t′, then t′ βη u.
Remark. In particular, t is βη-normalizable (take t′ = t) and its βη-normal form is unique. The interesting fact is that the proof does not use the Church-Rosser theorems of chapter 1 (theorems 1.24 and 1.32).
We define a new adapted pair (N₀, N). N is the set of all terms with the desired property ; in other words :
t ∈ N ⇔ there exists a βηnormal term u such that, if t βη t′ for some t′, then t′ βη u.
N₀ = {(x)t₁ . . . tₙ ; x is any variable, t₁, . . . , tₙ ∈ N}.
We now check conditions (i) and (ii) of the definition of adapted pairs (page 46) :
i) N is saturated : suppose that (u[t/x])t₁ . . . tₙ ∈ N, and let τ be its (unique) βηnormal form. Let v ∈ Λ be such that :
(∗) (λx u)t t₁ . . . tₙ βη v.
Lambdacalculus, types and models
We show that v βη τ. Consider, at the beginning of the βηreduction (∗), the longest possible sequence of βηreductions which take place inside u or t or t₁ or . . . or tₙ ; this gives (λx u′)t′t′₁ . . . t′ₙ, with u βη u′, t βη t′ and tᵢ βη t′ᵢ. Then, there are three possibilities :
• The βηreduction (∗) stops there. Thus, v = (λx u′)t′t′₁ . . . t′ₙ, so that v βη (u′[t′/x])t′₁ . . . t′ₙ. But we have (u[t/x])t₁ . . . tₙ βη (u′[t′/x])t′₁ . . . t′ₙ, because the relation βη is λcompatible. Since (u[t/x])t₁ . . . tₙ ∈ N, it follows from the definition of N that (u′[t′/x])t′₁ . . . t′ₙ βη τ ; therefore v βη τ.
• The following step consists in reducing the βredex (λx u′)t′ and gives : (u′[t′/x])t′₁ . . . t′ₙ. Therefore, we have (u′[t′/x])t′₁ . . . t′ₙ βη v, and it follows that (u[t/x])t₁ . . . tₙ βη v. Since (u[t/x])t₁ . . . tₙ ∈ N, it follows from the definition of N that v βη τ.
• λx u′ is an ηredex, i.e. u′ = (u′′)x and x is not free in u′′ ; moreover, the following step consists in reducing this ηredex. This gives (u′′)t′t′₁ . . . t′ₙ, i.e. (u′[t′/x])t′₁ . . . t′ₙ. Thus, the result follows as in the previous case.
ii) N₀ ⊂ N : let t = (x)t₁ . . . tₙ ∈ N₀, where x is some variable, and t₁, . . . , tₙ ∈ N. Suppose that t βη t′. We have t′ = (x)t′₁ . . . t′ₙ with tᵢ βη t′ᵢ. Therefore t′ᵢ βη uᵢ, where uᵢ is the (unique) βηnormal form of tᵢ. It follows that t′ βη (x)u₁ . . . uₙ.
The inclusion N₀ ⊂ (N → N₀) is obvious, by definition of N₀. It remains to prove that (N₀ → N) ⊂ N : let t ∈ (N₀ → N) and x be a variable which does not occur in t ; since x ∈ N₀, we have (t)x ∈ N. Let u be the (unique) βηnormal form of (t)x, and define w ∈ Λ as follows : w = λx u if λx u is not an ηredex, and w = v if u = (v)x with x not free in v ; then w is βηnormal. Consider a βηreduction t βη t′ ; we show that t′ βη w. We have (t)x βη (t′)x βη u.
If the βηreduction from (t′)x to u takes place inside t′, we have u = (v)x and t′ βη v ; thus, x is not free in v (because it is not free in t′) and t′ βη w = v. Otherwise, we have t′ βη λx t′′ and t′′ βη u, so that t′ βη λx u ; and in case u = (v)x with x not free in v, we get t′ βη λx(v)x βη v. Thus, we have again t′ βη w in any case, and this shows that t ∈ N.
Now, the end of the proof of theorem 3.13 is the same as that of the normalization theorem 3.10.
2. System D In order to study the strongly normalizable terms, we shall deal with the same type system, but without using the constant Ω. Here it will be called system D. The definitions below are quite the same as in the previous section, except for those about saturated sets and interpretations.
Chapter 3. Intersection type systems
So the types of system D are formulas built with : variables X, Y, . . . (type variables) ; the connectives → and ∧.
As before, a context Γ is a set of the form x₁ : A₁, x₂ : A₂, . . . , xₖ : Aₖ, in which x₁, x₂, . . . , xₖ are distinct variables of the λcalculus and A₁, A₂, . . . , Aₖ are types of system D.
Given a λterm t, a type A, and a context Γ, we define, by means of the following rules, the notion : t is of type A in the context Γ (or t may be given type A in the context Γ) ; this will be denoted by Γ `D t : A (or Γ ` t : A if there is no ambiguity) :
1. If x is a variable, then Γ, x : A `D x : A.
2. If Γ, x : A `D t : B, then Γ `D λx t : A → B.
3. If Γ `D t : A → B and Γ `D u : A, then Γ `D (t)u : B.
4. If Γ `D t : A ∧ B, then Γ `D t : A and Γ `D t : B.
5. If Γ `D t : A and Γ `D t : B, then Γ `D t : A ∧ B.
Any expression of the form Γ `D t : A obtained by means of these rules will be called a typing of t in system D. A term is typable if it may be given some type in some context. Clearly, if a term t is typed in the context x₁ : A₁, . . . , xₖ : Aₖ, then the free variables of t are among x₁, . . . , xₖ (this was not true in system DΩ). As in DΩ, we have :
Proposition 3.14. If Γ `D t : A and Γ′ ⊃ Γ, then Γ′ `D t : A. If Γ `D t : A, and if Γ′ ⊂ Γ is the set of those declarations in Γ which concern variables occurring free in t, then Γ′ `D t : A.
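The five rules above are syntax-directed enough to be machine-checked. As an illustration (not part of the book's text), here is a minimal sketch of a derivation checker, under a hypothetical encoding : types as tuples ('var', X), ('arrow', A, B), ('and', A, B) ; terms as ('var', x), ('lam', x, t), ('app', t, u) ; contexts as dicts ; and a derivation node as (rule, context, term, type, premises).

```python
# Verify that a derivation tree is built correctly from rules 1-5 of system D.
# Types: ('var', X) / ('arrow', A, B) / ('and', A, B); terms: ('var', x) /
# ('lam', x, t) / ('app', t, u); contexts: dicts from term variables to types.

def check(node):
    rule, ctx, term, ty, prems = node
    if not all(check(p) for p in prems):
        return False
    if rule == 1:                                   # Gamma, x : A |- x : A
        return not prems and term[0] == 'var' and ctx.get(term[1]) == ty
    if rule == 2 and len(prems) == 1:               # abstraction
        _, c, u, b, _ = prems[0]
        return (term[0] == 'lam' and ty[0] == 'arrow' and u == term[2]
                and b == ty[2] and c == {**ctx, term[1]: ty[1]})
    if rule == 3 and len(prems) == 2:               # application
        (_, c1, t1, ab, _), (_, c2, t2, a, _) = prems
        return (term[0] == 'app' and c1 == ctx and c2 == ctx
                and t1 == term[1] and t2 == term[2]
                and ab == ('arrow', a, ty))
    if rule == 4 and len(prems) == 1:               # from t : A /\ B get t : A or t : B
        _, c, t, ab, _ = prems[0]
        return c == ctx and t == term and ab[0] == 'and' and ty in (ab[1], ab[2])
    if rule == 5 and len(prems) == 2:               # from t : A and t : B get t : A /\ B
        (_, c1, t1, a, _), (_, c2, t2, b, _) = prems
        return (c1 == ctx and c2 == ctx and t1 == term and t2 == term
                and ty == ('and', a, b))
    return False
```

For example, a five-node tree using rules 1, 4, 4, 3, 2 checks that λx(x)x may be given the type X ∧ (X → Y) → Y, the type appearing later as an example after theorem 3.26.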
The strong normalization theorem
Consider a fixed subset N of Λ (in fact, we shall mostly deal with the case where N is the set of strongly normalizable terms). A subset X of Λ is said to be N saturated if, for all terms t₁, . . . , tₙ, u : (u[t/x])t₁ . . . tₙ ∈ X ⇒ (λx u)t t₁ . . . tₙ ∈ X for every t ∈ N.
Proposition 3.15. If Y is an N saturated subset of Λ, then X → Y is N saturated for all X.
Indeed, suppose t ∈ N and (u[t/x])t₁ . . . tₙ ∈ X → Y. For any t′ in X, (u[t/x])t₁ . . . tₙt′ ∈ Y, and therefore (λx u)t t₁ . . . tₙt′ ∈ Y, since Y is N saturated. Hence (λx u)t t₁ . . . tₙ ∈ X → Y.
Q.E.D.
An N interpretation I is, by definition, a function which associates with each type variable X an N saturated subset of Λ, denoted by |X|I (or simply |X| if there is no ambiguity). Given such a function, we can extend it and associate with each type A an N saturated subset of Λ, denoted by |A|I (or simply |A|), defined as follows, by induction on the length of A :
if A is a type variable, then |A|I is given with the interpretation I ;
if A = B → C, then |A|I = |B|I → |C|I ;
if A = B ∧ C, then |A|I = |B|I ∩ |C|I.
Lemma 3.16 (Adequacy lemma). Let I be an N interpretation such that |F|I ⊂ N for every type F of system D, and u a λterm, such that : x₁ : A₁, . . . , xₖ : Aₖ `D u : A. If t₁ ∈ |A₁|I, . . . , tₖ ∈ |Aₖ|I, then u[t₁/x₁, . . . , tₖ/xₖ] ∈ |A|I.
The proof proceeds by induction on the number of rules used to obtain the typing of u. Consider the last one :
If it is rule 1, 3, 4 or 5, then we can repeat the proof of the adequacy lemma (lemma 3.5), for the corresponding rules.
If it is rule 2, then A = B → C and u = λx v ; we can assume that x does not occur free in t₁, . . . , tₖ and is different from x₁, . . . , xₖ. Moreover : x : B, x₁ : A₁, . . . , xₖ : Aₖ `D v : C.
By induction hypothesis, v[t/x, t₁/x₁, . . . , tₖ/xₖ] ∈ |C| holds for any t ∈ |B|. It then follows from our assumptions about x that : v[t/x, t₁/x₁, . . . , tₖ/xₖ] = v[t₁/x₁, . . . , tₖ/xₖ][t/x].
Since |C| is N saturated and t ∈ |B| ⊂ N, we have : (λx v[t₁/x₁, . . . , tₖ/xₖ])t ∈ |C|.
Now since t is an arbitrary element of |B|, we obtain : λx v[t₁/x₁, . . . , tₖ/xₖ] ∈ (|B| → |C|) = |A|.
Q.E.D.
We now give a method which will provide a set N such that |F|I ⊂ N for every N interpretation I and every type F of system D. In this context, an ordered pair (N₀, N) of subsets of Λ is said to be adapted if and only if :
i) N is N saturated ;
ii) N₀ ⊂ N ; N₀ ⊂ (N → N₀) ; (N₀ → N) ⊂ N.
The difference with the definition page 46 lies in condition (i). As above, condition (ii) can also be stated this way :
ii′) N₀ ⊂ (N → N₀) ⊂ (N₀ → N) ⊂ N.
Lemma 3.17. Let (N₀, N) be an adapted pair, and I an N interpretation such that, for every type variable X, |X|I is an N saturated subset of N containing
N₀. Then, for every type A, |A|I is an N saturated subset of N which contains N₀.
Proof by induction on A. The result is clear whenever A is a type variable or A = B ∧ C. If A = B → C, then |A| = |B| → |C|, thus |A| is N saturated since |C| is (proposition 3.15). Moreover, by induction hypothesis, |B| ⊃ N₀, and |C| ⊂ N. Hence |B| → |C| ⊂ N₀ → N. Now N₀ → N ⊂ N according to the definition of adapted pairs ; therefore |B → C| ⊂ N. Similarly, we have |B| ⊂ N, and |C| ⊃ N₀. Hence |B → C| ⊃ N → N₀ ; since N → N₀ ⊃ N₀ (definition of adapted pairs), we obtain |B → C| ⊃ N₀.
Q.E.D.
Now we define two sets N and N₀ and show that (N₀, N) is an adapted pair :
N is the set of strongly normalizable terms ; in other words, t ∈ N ⇔ there is no infinite sequence t = t₀, t₁, . . . , tₙ, . . . such that tᵢ β₀ tᵢ₊₁ for all i ; therefore each maximal sequence of this form (called normalization of t) ends with the normal form of t.
N₀ is the set of all terms of the form (x)t₁ . . . tₙ, where x is some variable, and t₁, . . . , tₙ ∈ N.
Proposition 3.18. A strongly normalizable term admits only finitely many normalizations.
(This is an application of the well known König's lemma.) Let t be a term which admits infinitely many normalizations. Then at least one of the terms obtained by contracting a redex in t admits infinitely many normalizations. Let t₁ be such a term ; we have t β₀ t₁. Now the same argument applies to t₁ ; so we can carry on and construct an infinite sequence t = t₀, t₁, . . . , tₙ, . . . such that tₙ β₀ tₙ₊₁ for all n ; therefore t is not strongly normalizable.
Q.E.D.
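Proposition 3.18 can be made concrete : for a strongly normalizable term, one can exhaustively enumerate the one-step reducts and count the normalizations. A sketch (an illustration, not from the book), assuming terms encoded as nested tuples ('var', x), ('lam', x, t), ('app', t, u), with bound variable names distinct from free ones so the naive substitution needs no renaming ; on a term that is not strongly normalizable, the count would never terminate.

```python
# Count the normalizations (maximal beta0-reduction sequences) of a term.

def subst(t, x, v):
    # t[v/x], assuming no variable capture can occur
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def reducts(t):
    # all terms obtained from t by contracting one redex
    out = []
    if t[0] == 'lam':
        return [('lam', t[1], r) for r in reducts(t[2])]
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                       # head redex (lam x u)a
            out.append(subst(f[2], f[1], a))
        out += [('app', r, a) for r in reducts(f)]
        out += [('app', f, r) for r in reducts(a)]
    return out

def normalizations(t):
    rs = reducts(t)
    if not rs:
        return 1                                # t is normal: one (empty) sequence
    return sum(normalizations(r) for r in rs)
```

For instance, (λx y)((λz z)w) has exactly two normalizations, depending on which redex is contracted first.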
Proposition 3.19. N is N saturated.
Let t ∈ N, (u[t/x])t₁ . . . tₙ ∈ N. We need to prove that (λx u)t t₁ . . . tₙ ∈ N. Let p (resp. q) be the sum of all the lengths of the normalizations of t (resp. (u[t/x])t₁ . . . tₙ). The proof is by induction on p, and, for each fixed p, by induction on q. Consider the terms obtained by contracting a redex in (λx u)t t₁ . . . tₙ. It is sufficient to prove that all of them are in N. The redex on which the contraction is done may be :
1. The redex (λx u)t ; then the reduced term is (u[t/x])t₁ . . . tₙ, which is in N ;
2. A redex in u, the reduced term being u′, with u β₀ u′ ; we want to prove that (λx u′)t t₁ . . . tₙ ∈ N. But we have u[t/x] β₀ u′[t/x] (proposition 1.20), and therefore (u[t/x])t₁ . . . tₙ β₀ (u′[t/x])t₁ . . . tₙ ; thus, the sum of the lengths of the normalizations of (u′[t/x])t₁ . . . tₙ is < q, and the induction hypothesis yields the expected result ;
3. A redex in tᵢ ; same proof ;
4. A redex in t, the reduced term being t′ ; then the sum of the lengths of the normalizations of t′ is p′ < p. On the other hand, we have u[t/x] β u[t′/x] (proposition 1.23), so there is a normalization of (u[t/x])t₁ . . . tₙ which involves the term (u[t′/x])t₁ . . . tₙ ; therefore, (u[t′/x])t₁ . . . tₙ ∈ N. With the induction hypothesis, we conclude that (λx u)t′t₁ . . . tₙ ∈ N.
Q.E.D.
Now we prove that (N₀, N) is an adapted pair : condition (i) was checked in proposition 3.19 ; we have obviously N₀ ⊂ N and N₀ ⊂ N → N₀ ; in order to prove that N₀ → N ⊂ N, suppose that u is not strongly normalizable, and let x be some variable (x ∈ N₀) ; there exists an infinite sequence u = u₀, u₁, . . . , uₙ, . . . such that uᵢ β₀ uᵢ₊₁ for all i ; then the sequence (u)x = (u₀)x, (u₁)x, . . . , (uₙ)x, . . . attests that (u)x is not strongly normalizable.
Theorem 3.20 (Strong normalization theorem). Every term which is typable in system D is strongly normalizable.
Indeed, let t be a term of type A, in the context x₁ : A₁, . . . , xₖ : Aₖ. Define an N interpretation I by taking |X|I = N for every type variable X. We have xᵢ ∈ N₀ by definition of N₀, so xᵢ ∈ |Aᵢ| ; by the adequacy lemma, t = t[x₁/x₁, . . . , xₖ/xₖ] ∈ |A|. Now by lemma 3.17, |A| ⊂ N ; thus t ∈ N.
Q.E.D.
Remark. Proposition 3.19 provides the following algorithm for checking whether or not a term is strongly normalizable : if t is a head normal form, say t = λx₁ . . . λxₙ(x)t₁ . . . tₖ, then do the checking for t₁, . . . , tₖ ; otherwise, we have t = λx₁ . . . λxₙ(λx u)v t₁ . . . tₖ, and then do the checking for v and for (u[v/x])t₁ . . . tₖ. The algorithm terminates if and only if t is strongly normalizable.
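The checking procedure of this remark can be transcribed directly. A sketch (an illustration, not from the book), under the same hypothetical tuple encoding of terms — ('var', x), ('lam', x, t), ('app', t, u) — with bound names assumed distinct so substitution is naive ; as the remark says, the recursion terminates exactly on the strongly normalizable terms (and, when it terminates, the check always succeeds).

```python
# Strong-normalization check from the remark after theorem 3.20.

def subst(t, x, v):
    # t[v/x], assuming no variable capture can occur
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def spine(t):
    # write t as (head) t1 ... tk
    args = []
    while t[0] == 'app':
        args.append(t[2])
        t = t[1]
    return t, list(reversed(args))

def is_sn(t):
    while t[0] == 'lam':                 # strip the prefix lambda-x1 ... lambda-xn
        t = t[2]
    head, args = spine(t)
    if head[0] == 'var':
        # head normal form (x)t1...tk : check the arguments
        return all(is_sn(a) for a in args)
    # head redex (lam x u)v t1...tk : check v and (u[v/x])t1...tk
    _, x, u = head
    v, rest = args[0], args[1:]
    reduced = subst(u, x, v)
    for a in rest:
        reduced = ('app', reduced, a)
    return is_sn(v) and is_sn(reduced)
```

On a term such as (λx(x)x)(λy y) the recursion terminates ; on (λx(x)x)(λx(x)x) it would loop forever, which is the expected behaviour since that term is not strongly normalizable.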
3. Typings for normal terms
We intend to show that head normal forms and normal forms are typable, in a noteworthy way : a head normal form is typable in system DΩ, with a nontrivial type ; a normal form is typable in system D (and therefore also in system DΩ, with a type in which the symbol Ω does not occur).
Proposition 3.21. Let t be a term in head normal form. Then t is typable in system DΩ, with a type of the form U₁, . . . , Uₙ → X (where X is a type variable, and n ≥ 0).
Indeed, t = λx₁ . . . λxₙ(y)u₁ . . . uₖ. Now, (y)u₁ . . . uₖ is of type X in the context y : U (where U = Ω, Ω, . . . , Ω → X, with k occurrences of Ω, since each uᵢ may be given type Ω). Thus t is of type U₁, . . . , Uₙ → X in the context y : U (U₁, . . . , Uₙ may be arbitrarily chosen, except when y = xᵢ ; in that case, take Uᵢ = U).
Q.E.D.
Lemma 3.22. If x₁ : A₁, x₂ : A₂, . . . , xₖ : Aₖ ` t : A, then : x₁ : A₁ ∧ A′₁, x₂ : A₂, . . . , xₖ : Aₖ ` t : A.
Proof by induction on the number of rules used to obtain : x₁ : A₁, x₂ : A₂, . . . , xₖ : Aₖ ` t : A (either rules 1 to 6, page 42, or rules 1 to 5, page 51). Consider the last one. The only nontrivial case is that of rule 1, when t = x₁. Then we have A = A₁. Now, by rule 1, x₁ : A₁ ∧ A′₁, . . . ` x₁ : A₁ ∧ A′₁ ; therefore x₁ : A₁ ∧ A′₁, . . . ` x₁ : A₁ (rule 4).
Q.E.D.
Proposition 3.23. Given any two contexts Γ, Γ′, there exists a context Γ′′ such that, if Γ ` t : A and Γ′ ` u : B, then Γ′′ ` t : A, u : B.
Even if it means extending both contexts, we may assume that : Γ is x₁ : A₁, . . . , xₖ : Aₖ and Γ′ is x₁ : B₁, . . . , xₖ : Bₖ. Then it suffices to take for Γ′′ the context x₁ : A₁ ∧ B₁, . . . , xₖ : Aₖ ∧ Bₖ and apply the previous lemma.
Q.E.D.
The next proposition shows that every normal term is typable in system D.
Proposition 3.24. For every normal term t, there exist a type A and a context Γ such that Γ `D t : A. Moreover, if t does not start with λ, then, for every type A, there exists a context Γ such that Γ `D t : A.
Recall that the normal terms are defined by the following conditions :
any variable x is a normal term ;
if t is a normal term, and if x is a variable, then λx t is a normal term ;
if t, u are normal terms, and if t does not start with λ, then (t)u is a normal term.
The proof of the proposition is by induction on the length of t. If t is a variable, then t is of type A in the context t : A. If t = λx u, then u is of type A in a context Γ ; we may assume that the declaration x : B occurs in Γ, for some type B (otherwise we add it). Hence Γ `D t : B → A.
Now suppose that t = (u)v, and u does not start with λ. Let A be any type of system D. By induction hypothesis, v is of some type B, in some context Γ. Moreover, there exists a context Γ′ such that Γ′ `D u : B → A. By the previous proposition, there exists a context Γ′′ such that Γ′′ `D v : B, u : B → A. Thus Γ′′ `D (u)v : A.
Q.E.D.
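The inductive definition of normal terms recalled above transcribes immediately into a recognizer ; a minimal sketch (an illustration, not from the book), assuming terms encoded as nested tuples ('var', x), ('lam', x, t), ('app', t, u).

```python
# Recognize beta-normal terms, following the three clauses of the definition.

def is_normal(t):
    if t[0] == 'var':
        return True
    if t[0] == 'lam':
        return is_normal(t[2])
    # application (t)u : normal iff both parts are normal and the
    # function part does not start with a lambda (no head beta-redex)
    return t[1][0] != 'lam' and is_normal(t[1]) and is_normal(t[2])
```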
Principal typings of a normal term in system D
We have just shown that every normal term t is typable in system D. We shall improve this result and see that, actually, there is a type which characterizes t up to ηequivalence.
Recall that, if x₁ : A₁, . . . , xₖ : Aₖ `D t : A, then the free variables of t are among x₁, . . . , xₖ, and the symbol Ω does not occur in the types A₁, . . . , Aₖ, A.
Let t be a normal term and {x₁, . . . , xₖ} a finite set of variables, containing all the free variables of t. We shall define a special kind of typings of t in system D, of the form x₁ : A₁, . . . , xₖ : Aₖ `D t : A, which will be called principal typings of t. The definition is by induction on t :
If t is a variable xᵢ, we take distinct type variables X₁, . . . , Xₖ. The principal typings are x₁ : X₁, . . . , xₖ : Xₖ `D xᵢ : Xᵢ.
If t = λx u, let x : A, x₁ : A₁, . . . , xₖ : Aₖ `D u : B be a principal typing of u. Then x₁ : A₁, . . . , xₖ : Aₖ `D t : A → B is a principal typing of t.
If t does not start with λ, we have t = (x)t₁ . . . tₙ, where x is a variable, and t₁, . . . , tₙ are normal terms. Let x : Aᵢ, x₁ : A₁ⁱ, . . . , xₖ : Aₖⁱ `D tᵢ : Bᵢ be a principal typing of tᵢ (1 ≤ i ≤ n). Even if it means changing the type variables, we may assume that, whenever i ≠ j, the typings of tᵢ and tⱼ have no type variable in common. Then we take a new type variable X, and we obtain a principal typing of t, which is Γ `D t : X, where Γ is the context :
x : A₁ ∧ . . . ∧ Aₙ ∧ (B₁, . . . , Bₙ → X), x₁ : A₁¹ ∧ . . . ∧ A₁ⁿ, . . . , xₖ : Aₖ¹ ∧ . . . ∧ Aₖⁿ.
This is indeed a typing of t : it follows from lemma 3.22 that Γ `D tᵢ : Bᵢ and Γ `D x : (B₁, . . . , Bₙ → X) ; then it remains to apply rule 3, page 51.
Lemma 3.25. Let x₁ : A₁, . . . , xₖ : Aₖ `D t : A be a principal typing of a normal term t, and y₁, . . . , yₗ be new variables. Then there exist types B₁, . . . , Bₗ such that x₁ : A₁, . . .
, xₖ : Aₖ, y₁ : B₁, . . . , yₗ : Bₗ `D t : A is a principal typing of t.
Immediate proof by induction on the length of t . Q.E.D.
Definition. Given any λterm t, every term u such that t η u will be called an ηreduced image of t.
Theorem 3.26. Let x₁ : A₁, . . . , xₖ : Aₖ `D t : A be a principal typing of a normal term t, and let u be a typed term in system DΩ, of type A in the context x₁ : A₁, . . . , xₖ : Aₖ. Then there exists an ηreduced image of t which can be obtained from u by leftmost βreduction.
Examples :
t = λx(x)x ; the principal type is X ∧ (X → Y) → Y ; any term of that type can therefore be reduced to t by leftmost βreduction ;
t = λf λx(f)x ; the principal type is (X → Y) → (X → Y) ; any term of that type can be reduced either to t, or to λf f (which is an ηreduced image of t), by leftmost βreduction ;
t = λf λx(f)(f)x ; the principal type is (X → Y) ∧ (Y → Z) → (X → Z).
Lemma 3.27. Suppose t is normal and t η t′ ; then t′ is normal. Moreover, if λ is not the first symbol in t, then neither is it in t′.
We can assume that t η₀ t′ (t′ is obtained by one single ηreduction in t). The proof is by induction on t. If t is a variable, then t = t′ and the result is obvious.
If t starts with λ, then there are two possibilities :
t = λx u, t′ = λx u′, and u η₀ u′ ; then u′ is normal, thus so is t′.
t = λx(t′)x, and x does not occur free in t′ ; then t′ needs to be normal, since t is.
If t does not start with λ, then t = (u)v, and the first symbol in u is not λ. In that case, either t′ = (u)v′ or (u′)v, with u η₀ u′ or v η₀ v′. By induction hypothesis, u′ and v′ are normal and u′ does not start with λ. Thus t′ is normal (and does not start with λ).
Q.E.D.
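The single ηreduction step used throughout lemmas 3.27 and 3.28 — λx(v)x reduces to v when x is not free in v — is easy to make explicit ; a sketch with the same hypothetical tuple encoding of terms as above.

```python
# One-step eta-reduction at the root: lambda x.(v)x --> v, x not free in v.

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def eta_root(t):
    # return the eta-reduct if t is an eta-redex, else None
    if (t[0] == 'lam' and t[2][0] == 'app'
            and t[2][2] == ('var', t[1])
            and t[1] not in free_vars(t[2][1])):
        return t[2][1]
    return None
```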
Lemma 3.28. Consider two terms t, v, and a variable x with no free occurrence in v. Suppose (v)x ≻ t. Then there exists an ηreduced image u of λx t such that v ≻ u.
Recall that t₀ ≻ t₁ means that t₁ is obtained from t₀ by leftmost βreduction. The proof proceeds by induction on the number of steps of leftmost βreduction which transform (v)x in t.
1. (v)x = t ; then λx t η v (definition of η) ; take u = v.
2. (v)x ≠ t and v does not start with λ. Then the first leftmost βreduction in (v)x is done in the subterm v ; it gives a term (v′)x, where v′ is obtained from v
by a leftmost βreduction. By induction hypothesis, there exists a term u such that λx t η u and v′ ≻ u. Thus v ≻ u.
3. (v)x ≠ t and v starts with λ. Since x is not free in v, we may write v = λx w ; therefore, a leftmost βreduction in (v)x produces the term w. Thus it follows from our assumption that w ≻ t. Hence v = λx w ≻ λx t.
Q.E.D.
Theorem 3.29. Let t be a normal term, and x₁ : A₁, . . . , xₖ : Aₖ `D t : A a principal typing of t. Then there exists an interpretation I such that :
i) x₁ ∈ |A₁|I, . . . , xₖ ∈ |Aₖ|I ;
ii) for every term v ∈ |A|I having all its free variables among x₁, . . . , xₖ, there exists an ηreduced image u of t such that v ≻ u.
We first show how theorem 3.26 easily follows from theorem 3.29 : indeed, let v be any typed term in system DΩ, of type A in the context x₁ : A₁, . . . , xₖ : Aₖ ; by lemma 3.25, we may assume that the free variables of v are all among x₁, . . . , xₖ. By the adequacy lemma (lemma 3.5), we have v[a₁/x₁, . . . , aₖ/xₖ] ∈ |A|I whenever aᵢ ∈ |Aᵢ|I ; now xᵢ ∈ |Aᵢ|I, and therefore v ∈ |A|I. Then theorem 3.29 ensures the existence of an ηreduced image of t which can be obtained from v by leftmost βreduction.
Now we prove theorem 3.29 by induction on the length of t :
If t is a variable, say x₁, then the given typing is x₁ : X₁, . . . , xₖ : Xₖ `D x₁ : X₁, where the Xᵢ's are type variables. The interpretation I can be defined by v ∈ |Xᵢ|I ⇔ v ≻ xᵢ.
If t = λx u, then we have a principal typing of u of the form : x : A, x₁ : A₁, . . . , xₖ : Aₖ `D u : B ; by induction hypothesis, there exists an interpretation I such that x ∈ |A|I, x₁ ∈ |A₁|I, . . . , xₖ ∈ |Aₖ|I. Now the given principal typing of t = λx u is x₁ : A₁, . . . , xₖ : Aₖ `D t : A → B. Let v ∈ |A → B|I be a term with no free variables but x₁, . . . , xₖ (so x does not occur free in v). Since x ∈ |A|I, (v)x ∈ |B|I. Therefore, by induction hypothesis, (v)x ≻ w, where w is an ηreduced image of u. By lemma 3.28, there exists a term t′ such that v ≻ t′ and λx w η t′ ; thus v ≻ t′ and λx u η t′.
If t does not start with λ, then t = (x)t₁ . . . tₙ, where x is some variable and t₁, . . . , tₙ are normal terms. We also have principal typings for the tᵢ's : x : Aᵢ, x₁ : A₁ⁱ, . . . , xₖ : Aₖⁱ `D tᵢ : Bᵢ, and interpretations Iᵢ.
Observe that the typings of the tᵢ's have no type variable in common, so it is possible to define one single interpretation I such that for every i, Iᵢ and I have the same restriction to the type variables occurring in the typing of tᵢ. Now the given principal typing of t is Γ `D t : X, where Γ is the context :
x : A₁ ∧ . . . ∧ Aₙ ∧ (B₁, . . . , Bₙ → X), x₁ : A₁¹ ∧ . . . ∧ A₁ⁿ, . . . , xₖ : Aₖ¹ ∧ . . . ∧ Aₖⁿ.
By induction hypothesis, x ∈ |Aᵢ|I, thus x ∈ |A₁ ∧ . . . ∧ Aₙ|I ; similarly, we have xⱼ ∈ |Aⱼ¹ ∧ . . . ∧ Aⱼⁿ|I.
We define the value of X in the interpretation I by taking :
|X|I = {v ∈ Λ ; there exist t′₁ ∈ |B₁|I, . . . , t′ₙ ∈ |Bₙ|I such that v ≻ (x)t′₁ . . . t′ₙ} (this is indeed a saturated subset of Λ). It follows from this definition that x ∈ |B₁, . . . , Bₙ → X|I. Thus :
x ∈ |A₁ ∧ . . . ∧ Aₙ ∧ (B₁, . . . , Bₙ → X)|I.
Let v ∈ |X|I, with no free variables but x₁, . . . , xₖ. Then v reduces to (x)t′₁ . . . t′ₙ by leftmost βreduction ; we have t′ᵢ ∈ |Bᵢ|I and therefore, by induction hypothesis, t′ᵢ ≻ t′′ᵢ, where t′′ᵢ is an ηreduced image of tᵢ. Hence v ≻ (x)t′′₁ . . . t′′ₙ, which is clearly an ηreduced image of t = (x)t₁ . . . tₙ.
So we have shown that the interpretation I satisfies all the required properties with respect to the given principal typing of t.
Q.E.D.
Corollary 3.30. Let t, t′ be two normal terms ;
i) Suppose that Γ `DΩ t : A ⇒ Γ `DΩ t′ : A, for any type A and any context Γ ; then t η t′.
ii) Suppose that Γ `DΩ t : A ⇔ Γ `DΩ t′ : A, for any type A and any context Γ ; then t = t′.
i) Take Γ and A such that Γ `DΩ t : A is a principal typing of t. By assumption, we have Γ `DΩ t′ : A ; by theorem 3.26, there exists a term u such that t η u and t′ ≻ u. Now since t′ is normal, this implies t′ = u.
ii) It follows from (i) that t η t′ and t′ η t ; therefore t = t′ (indeed, if t η t′ and t ≠ t′, then t′ is strictly shorter than t).
Q.E.D.
References for chapter 3 [Hin78], [Hin86], [Cop78], [Pot80], [Ron84]. (The references are in the bibliography at the end of the book).
Chapter 4. Normalization and standardization
1. Typings for normalizable terms
Notation. In this chapter, the notation ` refers to system D or system DΩ (the results hold in both cases). Of course, the notation `DΩ refers to system DΩ only, and the notation `D refers to system D only.
Proposition 4.1. Let Γ be a context and x₁, . . . , xₖ variables which are not declared in Γ. Suppose that Γ, x₁ : A₁, . . . , xₖ : Aₖ ` u : B, and Γ ` tᵢ : Aᵢ for all i such that xᵢ occurs free in u (1 ≤ i ≤ k). Then Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : B.
Proof by induction on the number of rules used for the typing Γ, x₁ : A₁, . . . , xₖ : Aₖ ` u : B. Consider the last one :
If it is rule 1, then u is a variable ; if u = xᵢ, then B = Aᵢ, and u[t₁/x₁, . . . , tₖ/xₖ] = tᵢ, which is of type B in the context Γ. If u is a variable and u ≠ x₁, . . . , xₖ, then u[t₁/x₁, . . . , tₖ/xₖ] = u, and Γ contains the declaration u : B ; thus Γ ` u : B.
If it is rule 2, then u = λy v, B = C → D, and : Γ, x₁ : A₁, . . . , xₖ : Aₖ, y : C ` v : D. By induction hypothesis, we have Γ, y : C ` v[t₁/x₁, . . . , tₖ/xₖ] : D. Therefore, by rule 2, we obtain Γ ` λy v[t₁/x₁, . . . , tₖ/xₖ] : C → D, that is to say : Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : C → D.
If it is rule 3, then u = v w and : Γ, x₁ : A₁, . . . , xₖ : Aₖ ` v : C → B, w : C. By induction hypothesis : Γ ` v[t₁/x₁, . . . , tₖ/xₖ] : C → B and Γ ` w[t₁/x₁, . . . , tₖ/xₖ] : C. Hence Γ ` (v[t₁/x₁, . . . , tₖ/xₖ])w[t₁/x₁, . . . , tₖ/xₖ] : B.
In other words, Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : B. The other cases are obvious.
Q.E.D.
We will say that a type A is prime if A ≠ Ω and A is not a conjunction. So a prime type is either a type variable or a type of the form A → B. Any type A is a conjunction of prime types and of Ω (when A is prime, this conjunction reduces to one single element). These prime types will be called the prime factors of A. The formal definition, by induction on the length of A, of the prime factors of A, is as follows :
• if A = Ω, it has no prime factor ;
• if A is a variable, or A = B → C, it has exactly one prime factor, which is A itself ;
• if A = B ∧ C, the prime factors of A are the prime factors of B and the prime factors of C.
Lemma 4.2. Suppose Γ ` t : A, where A is a prime type.
i) If t is some variable x, then x is declared of type A′ in Γ, A being a prime factor of A′.
ii) If t = λx u, then A = B → C, and Γ, x : B ` u : C.
iii) If t = uv, then Γ ` v : B, Γ ` u : B → A′, and A is a prime factor of A′.
In case (ii), recall that the notation “Γ, x : B” implies that x is not declared in Γ (otherwise, one should rename the bound variables of λx u).
The given typing of t (with a prime type A in the context Γ) is obtained by the rules listed on p. 42 or p. 51. Consider the first step when one of these rules produces a typing Γ ` t : A′, where A is a prime factor of A′. The rule applied at that step is neither rule 4 nor rule 5 : indeed, rule 4 requires a previous typing of the form Γ ` t : A′ ∧ B, and A would already be a prime factor of A′ ∧ B. As for rule 5, it requires previous typings of the form Γ ` t : A′₁ and Γ ` t : A′₂, with A′ = A′₁ ∧ A′₂ ; then A would already be either a prime factor of A′₁ or of A′₂.
In case (i), the rule applied may only be 1, 4 or 5, since the term obtained is a variable. But 4 and 5 have just been eliminated ; so it is rule 1, and therefore x is declared of type A′ in Γ.
In case (ii), the rule applied may only be 2, 4, or 5, since the term obtained is λx u.
So it is rule 2, which implies that A′ is of the form B → C ; now this is a prime type, thus A′ = A = B → C. Moreover, in this case, rule 2 requires as a previous typing : Γ, x : B ` u : C.
In case (iii), the rule applied may only be 3, 4 or 5, since the term obtained is uv. So it is rule 3, and therefore we have : Γ ` v : B and Γ ` u : B → A′.
Q.E.D.
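The formal definition of prime factors given above transcribes directly ; a sketch assuming types encoded as tuples ('omega',), ('var', X), ('arrow', A, B), ('and', A, B) — an illustrative encoding, not the book's notation.

```python
# List the prime factors of a type, following the inductive definition.

def prime_factors(a):
    if a[0] == 'omega':
        return []                       # Omega has no prime factor
    if a[0] == 'and':
        return prime_factors(a[1]) + prime_factors(a[2])
    # a type variable or an arrow type is its own unique prime factor
    return [a]
```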
Proposition 4.3. If Γ ` t : A and t β t′, then Γ ` t′ : A.
We may assume t β₀ t′ (that is to say that t′ is obtained by contracting one redex in t). The proposition is proved by induction on the number of rules used to obtain Γ ` t : A. Consider the last one :
It cannot be rule 1, since t β₀ t′ is impossible when t is a variable.
If it is rule 2, then t = λx u, A = B → C, and Γ, x : B ` u : C. In this case, t′ = λx u′ and u β₀ u′. By induction hypothesis, we have Γ, x : B ` u′ : C ; thus Γ ` λx u′ : B → C, that is to say Γ ` t′ : A.
If it is rule 3, then t = uv, Γ ` u : B → A, and Γ ` v : B. Here there are three possible situations for t′ :
i) t′ = u′v, with u β₀ u′ ; by induction hypothesis, we have Γ ` u′ : B → A, and therefore Γ ` t′ : A.
ii) t′ = uv′, with v β₀ v′ ; by induction hypothesis, Γ ` v′ : B ; thus Γ ` t′ : A.
iii) u = λx w and t′ = w[v/x] ; so we have Γ ` λx w : B → A. Therefore, by lemma 4.2(ii), Γ, x : B ` w : A ; now, since Γ ` v : B, proposition 4.1 proves that Γ ` w[v/x] : A, that is to say Γ ` t′ : A.
If the last rule used is 4, 5 or 6, then the result is obvious.
Q.E.D.
Proposition 4.4. Let Γ be a context and x₁, . . . , xₖ variables which are not declared in Γ. If Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : B, and if t₁, . . . , tₖ are typable in the context Γ, then there exist types A₁, . . . , Aₖ such that Γ ` tᵢ : Aᵢ (1 ≤ i ≤ k) and Γ, x₁ : A₁, . . . , xₖ : Aₖ ` u : B.
Remarks. 1. If the type system is DΩ, then the condition “tᵢ is typable in the context Γ” is satisfied anyway (Γ ` tᵢ : Ω).
2. The necessity of introducing the conjunction symbol ∧, with its specific syntax, appears in this proposition ; the result is characteristic of this kind of type systems.
First, observe that the proposition is obvious when u = xᵢ. Indeed, in that case, we have Γ ` tᵢ : B, and, of course, Γ, xᵢ : B ` xᵢ : B. Thus we can take Aᵢ = B, and, for j ≠ i, take Aⱼ as any type satisfying Γ ` tⱼ : Aⱼ.
Now suppose u ≠ x₁, . . . , xₖ. The proof is by induction on the number of rules used to obtain Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : B. Consider the last one.
If it is rule 1, then u[t₁/x₁, . . . , tₖ/xₖ] is a variable y, and Γ contains the declaration y : B. Thus u is also a variable. Now since u ≠ x₁, . . . , xₖ, we have u[t₁/x₁, . . . , tₖ/xₖ] = u, and u = y. Therefore Γ ` u : B ; besides, it has been assumed that Γ ` tᵢ : Aᵢ for appropriate types Aᵢ.
If it is rule 2, then we have B = C → D, u[t₁/x₁, . . . , tₖ/xₖ] = λy u′ and Γ, y : C ` u′ : D. Since u ≠ x₁, . . . , xₖ, we have u = λy v. As usual, we may suppose that y does not occur free in Γ, u, t₁, . . . , tₖ, and y ≠ x₁, . . . , xₖ. We have
u′ = v[t₁/x₁, . . . , tₖ/xₖ] and therefore Γ, y : C ` v[t₁/x₁, . . . , tₖ/xₖ] : D. By induction hypothesis, there exist types Aᵢ such that Γ, y : C ` tᵢ : Aᵢ, and Γ, y : C, x₁ : A₁, . . . , xₖ : Aₖ ` v : D. Consequently : Γ, x₁ : A₁, . . . , xₖ : Aₖ ` u : C → D. Moreover, since y does not occur in tᵢ, we have Γ ` tᵢ : Aᵢ (propositions 3.1, 3.3 and 3.14).
If it is rule 3, then u[t₁/x₁, . . . , tₖ/xₖ] = v′w′, and Γ ` v′ : C → B, Γ ` w′ : C. Since u ≠ x₁, . . . , xₖ, we have u = v w, and therefore : v′ = v[t₁/x₁, . . . , tₖ/xₖ], w′ = w[t₁/x₁, . . . , tₖ/xₖ]. Consequently : Γ ` v[t₁/x₁, . . . , tₖ/xₖ] : C → B, and Γ ` w[t₁/x₁, . . . , tₖ/xₖ] : C. By induction hypothesis, there exist types A′ᵢ, A′′ᵢ such that : Γ ` tᵢ : A′ᵢ ; Γ ` tᵢ : A′′ᵢ ; Γ, x₁ : A′₁, . . . , xₖ : A′ₖ ` v : C → B ; Γ, x₁ : A′′₁, . . . , xₖ : A′′ₖ ` w : C.
Let Aᵢ = A′ᵢ ∧ A′′ᵢ ; then we have : Γ, x₁ : A₁, . . . , xₖ : Aₖ ` v : C → B, w : C. Thus : Γ, x₁ : A₁, . . . , xₖ : Aₖ ` u : B. Moreover, Γ ` tᵢ : Aᵢ.
If it is rule 4 or rule 6, then the result is trivial.
If it is rule 5, then : B = B′ ∧ B′′, and Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : B′, Γ ` u[t₁/x₁, . . . , tₖ/xₖ] : B′′. By induction hypothesis, there exist types A′ᵢ, A′′ᵢ such that : Γ ` tᵢ : A′ᵢ ; Γ ` tᵢ : A′′ᵢ ; Γ, x₁ : A′₁, . . . , xₖ : A′ₖ ` u : B′ ; Γ, x₁ : A′′₁, . . . , xₖ : A′′ₖ ` u : B′′.
Let Aᵢ = A′ᵢ ∧ A′′ᵢ ; then we have Γ, x₁ : A₁, . . . , xₖ : Aₖ ` u : B′ ∧ B′′, that is to say u : B. Moreover, Γ ` tᵢ : Aᵢ.
Q.E.D.
Corollary 4.5. If Γ ⊢ u[t/x] : B and if t is typable in the context Γ, then Γ ⊢ (λx u)t : B.
Remark. In system DΩ, the condition about t is satisfied anyway, since Γ ⊢ t : Ω.
Proof. By proposition 4.4, we have Γ ⊢ t : A and Γ, x : A ⊢ u : B for some type A. Hence Γ ⊢ λx u : A → B (rule 2), and therefore, by rule 3, Γ ⊢ (λx u)t : B. Q.E.D.
Theorem 4.6. Let t and t′ be two λ-terms such that t′ is obtained from t by β-reduction (in other words t β t′). If Γ ⊢DΩ t′ : A, then Γ ⊢DΩ t : A.
We may suppose t β₀ t′ (i.e. t′ is obtained by contracting a redex in t). The proof is by induction on the length of t and, for each fixed t, by induction on the length of A. If A = Ω, the result is trivial.
Chapter 4. Normalization and standardization
If A = A_1 ∧ A_2, then Γ ⊢ t′ : A_1 and Γ ⊢ t′ : A_2. By induction hypothesis, we have Γ ⊢ t : A_1 and Γ ⊢ t : A_2, therefore Γ ⊢ t : A. So we may now suppose that A is a prime type. There are three possible cases for t :
i) t is a variable ; this is impossible since t β₀ t′.
ii) t = λx u ; then t′ = λx u′ and u β₀ u′. Since λx u′ is of prime type A in the context Γ, by lemma 4.2(ii), we have A = B → C and Γ, x : B ⊢ u′ : C. Now u is shorter than t, so by induction hypothesis, Γ, x : B ⊢ u : C. Thus t = λx u is of type A = B → C in the context Γ.
iii) t = uv ; then we have three possible situations for t′ :
a) t′ = uv′, with v β₀ v′ ; by assumption, uv′ is of prime type A in the context Γ. By lemma 4.2(iii), we have Γ ⊢ v′ : B and Γ ⊢ u : B → A′, A being a prime factor of A′. Now v is shorter than t so, by induction hypothesis, Γ ⊢ v : B. Thus t = uv is of type A′, and hence also of type A, in the context Γ.
b) t′ = u′v, with u β₀ u′ ; similarly, we have Γ ⊢ v : B and Γ ⊢ u′ : B → A′, A being a prime factor of A′. By induction hypothesis, Γ ⊢ u : B → A′. Thus t = uv is of type A′, and hence also of type A, in the context Γ.
c) u = λx w (so t = (λx w)v) and t′ = w[v/x]. The assumption is Γ ⊢ w[v/x] : A. By corollary 4.5, and since we are in system DΩ, we also have Γ ⊢ (λx w)v : A.
Q.E.D.
As an immediate consequence of theorem 4.6 and proposition 4.3, we obtain :
Theorem 4.7. If t is β-equivalent to t′, and if Γ ⊢DΩ t : A, then Γ ⊢DΩ t′ : A.
We are then able to give an alternative proof of the uniqueness of the normal form :
Corollary 4.8. Suppose t and t′ are normal and t ≃β t′. Then t = t′.
Apply theorem 4.7 and corollary 3.30. Q.E.D.
Theorem 4.9. For every λ-term t, the following conditions are equivalent :
i) t is solvable ;
ii) t is β-equivalent to a head normal form ;
iii) the head reduction of t is finite ;
iv) t is typable with a non-trivial type in system DΩ.
Recall that the trivial types are those obtained by the following rules : Ω is trivial ; if A is trivial, then so is B → A for every B ; if A and B are trivial, then so is A ∧ B.
Lemma 4.10. If λx t (resp. t u) is typable with a non-trivial type in system DΩ, then the same property holds for t.
We may assume that this type is non-trivial and prime, since any non-trivial type has a prime factor which is also non-trivial.
Suppose that Γ ⊢ λx t : A, where A is a prime non-trivial type. By lemma 4.2(ii), we get A = B → C and Γ, x : B ⊢ t : C. Moreover, C is non-trivial since A is.
Suppose that Γ ⊢ t u : A, where A is a prime non-trivial type. By lemma 4.2(iii), we get Γ ⊢ t : B → A′, where A is a prime factor of A′. It follows that A′ is non-trivial.
Q.E.D.
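The rules defining trivial types translate directly into a recursive check. Below is a small illustrative Python sketch (not from the book) ; the tuple encoding of types — ('omega',), ('var', X), ('arrow', A, B), ('and', A, B) — is our own convention.

```python
# Types encoded as tuples (our own convention, for illustration):
#   ('omega',)        the constant Ω
#   ('var', 'X')      a type variable
#   ('arrow', A, B)   A → B
#   ('and', A, B)     A ∧ B

def trivial(ty):
    """Trivial types: Ω is trivial ; B → A is trivial when A is ;
    A ∧ B is trivial when both A and B are."""
    kind = ty[0]
    if kind == 'omega':
        return True
    if kind == 'var':
        return False
    if kind == 'arrow':
        return trivial(ty[2])          # only the codomain matters
    if kind == 'and':
        return trivial(ty[1]) and trivial(ty[2])
    raise ValueError(ty)

OMEGA = ('omega',)
X = ('var', 'X')
print(trivial(('arrow', X, OMEGA)))                  # X → Ω is trivial: True
print(trivial(('and', OMEGA, ('arrow', OMEGA, X))))  # Ω ∧ (Ω → X): False
```

The second example is non-trivial precisely because its prime factor Ω → X has the non-trivial codomain X, matching the prime-factor argument in lemma 4.10.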
We are now able to prove theorem 4.9.
(i) ⇒ (iv) : Let u = λx_1 . . . λx_k t be the closure of t. Then u is solvable (remark 2, p. 31, chapter 2), and therefore uv_1 . . . v_n ≃β x, where x is some variable with no occurrence in u. Since x can obviously be typed with a non-trivial type, the same holds for uv_1 . . . v_n (theorem 4.7), and hence also for u, according to lemma 4.10. Applying this lemma again, we can see that t itself is typable with a non-trivial type.
(iv) ⇒ (iii) : This is the head normal form theorem 3.7.
(iii) ⇒ (ii) : Obvious.
(ii) ⇒ (i) : We may suppose that t is a closed term (otherwise, take its closure). We have t ≃β λx_1 . . . λx_k (x_i)u_1 . . . u_l (a closed term in head normal form). Let v_i = λy_1 . . . λy_l x (where x is a new variable), and let v_j be arbitrary terms for j ≠ i, 1 ≤ j ≤ k. Then (t)v_1 . . . v_k ≃β x, which proves that t is solvable.
Q.E.D.
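The head reduction of condition (iii) can be observed mechanically on small examples. The following Python sketch is our own illustration (not part of the book) : λ-terms are encoded as tuples ('var', x), ('lam', x, body), ('app', f, a), and head_step contracts the head redex, so that iterating it on a solvable term such as (λx (x)x)λy y reaches a head normal form.

```python
def free(t):
    """Free variables of a term encoded as
    ('var', x) / ('lam', x, body) / ('app', f, a)."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free(t[2]) - {t[1]}
    return free(t[1]) | free(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[s/x]."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, b = t[1], t[2]
    if y == x:
        return t
    if y in free(s):                        # rename the bound variable
        y2 = y
        while y2 in free(s) | free(b) | {x}:
            y2 += "'"
        b, y = subst(b, y, ('var', y2)), y2
    return ('lam', y, subst(b, x, s))

def head_step(t):
    """Contract the head redex of t ; return None if t is in head normal form."""
    if t[0] == 'lam':
        b = head_step(t[2])
        return None if b is None else ('lam', t[1], b)
    args, h = [], t
    while h[0] == 'app':                    # unwind the spine (ξ)t1 ... tn
        args.append(h[2])
        h = h[1]
    if h[0] == 'lam' and args:              # head redex (λx u)v ...
        r = subst(h[2], h[1], args.pop())
        for a in reversed(args):
            r = ('app', r, a)
        return r
    return None                             # the head is a variable

def head_reduce(t, limit=1000):
    """Iterate head reduction ; fails (after `limit` steps) on unsolvable terms."""
    for _ in range(limit):
        t2 = head_step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("no head normal form reached")

# (λx (x)x)λy y is solvable : its head reduction terminates on λy y.
I = ('lam', 'y', ('var', 'y'))
t = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), I)
print(head_reduce(t))   # ('lam', 'y', ('var', 'y'))
```

On the unsolvable term (δ)δ with δ = λx (x)x, the same loop would exhaust its step limit, in accordance with (i) ⇔ (iii).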
As an application of theorem 4.9, we now prove the following property of solvable terms, which we have used in chapter 2 (namely, lemma 2.12) :
Theorem 4.11. If t ≃β λx_1 . . . λx_k (x_i)t_1 . . . t_n (with 1 ≤ i ≤ k) then there exist t′_j ≃β t_j (1 ≤ j ≤ n) such that, for any u_1, . . . , u_k ∈ Λ, we have : (t)u_1 . . . u_k ≻w (u_i)t″_1 . . . t″_n with t″_j = t′_j[u_1/x_1, . . . , u_k/x_k].
Recall that ≻w denotes weak head reduction (see page 30).
Lemma 4.12. If t ≻ (x)t_1 . . . t_n, then t[u/x, u_1/x_1, . . . , u_k/x_k] ≻w (u)t′_1 . . . t′_n, where t′_j = t_j[u/x, u_1/x_1, . . . , u_k/x_k] for 1 ≤ j ≤ n.
Proof by induction on the length of the head reduction from t to (x)t_1 . . . t_n. Note that this reduction is, indeed, a weak head reduction, because the final term does not begin with a λ. The result is trivial if this length is 0, i.e. if t = (x)t_1 . . . t_n.
Otherwise, by proposition 2.2, we have t = (λz w)v v_1 . . . v_p (since t does not begin with a λ). Let t* = (w[v/z])v_1 . . . v_p ; we can apply the induction hypothesis to t*, so that t*[u/x, u_1/x_1, . . . , u_k/x_k] ≻w (u)t′_1 . . . t′_n. Define v′ = v[u/x, u_1/x_1, . . . , u_k/x_k], and the same for v_1, . . . , v_p, w. Thus, we have :
t*[u/x, u_1/x_1, . . . , u_k/x_k] = (w[v/z][u/x, u_1/x_1, . . . , u_k/x_k])v′_1 . . . v′_p
= (w[u/x, u_1/x_1, . . . , u_k/x_k, v′/z])v′_1 . . . v′_p (by lemma 1.13)
= (w′[v′/z])v′_1 . . . v′_p (again by lemma 1.13, since z is not free in u, u_1, . . . , u_k).
Therefore, we have (w′[v′/z])v′_1 . . . v′_p ≻w (u)t′_1 . . . t′_n. It follows trivially that (λz w′)v′ v′_1 . . . v′_p ≻w (u)t′_1 . . . t′_n. This gives the result, because t[u/x, u_1/x_1, . . . , u_k/x_k] = (λz w′)v′ v′_1 . . . v′_p.
Q.E.D.
We can now prove theorem 4.11. The hypothesis gives (t)x_1 . . . x_k ≃β (x_i)t_1 . . . t_n, where the variables x_1, . . . , x_k are not free in t. By theorem 4.9, the head reduction of (t)x_1 . . . x_k is finite and gives a λ-term which is β-equivalent to (x_i)t_1 . . . t_n. In other words : (t)x_1 . . . x_k ≻ (x_i)t′_1 . . . t′_n, with t′_j ≃β t_j (1 ≤ j ≤ n). We now use lemma 4.12, with the substitution [u_1/x_1, . . . , u_k/x_k], and we obtain (t)u_1 . . . u_k ≻w (u_i)t″_1 . . . t″_n with t″_j = t′_j[u_1/x_1, . . . , u_k/x_k]. Q.E.D.
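Lemma 4.12 rests on the fact that weak head reduction is compatible with substitution. The sketch below (the same illustrative tuple encoding as before, our own convention, not from the book) implements weak head reduction — head reduction that never goes under a λ — and checks the commutation on a tiny instance.

```python
def free(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free(t[2]) - {t[1]}
    return free(t[1]) | free(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[s/x]."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, b = t[1], t[2]
    if y == x:
        return t
    if y in free(s):
        y2 = y
        while y2 in free(s) | free(b) | {x}:
            y2 += "'"
        b, y = subst(b, y, ('var', y2)), y2
    return ('lam', y, subst(b, x, s))

def weak_head_step(t):
    """One step of weak head reduction : never reduce under a λ."""
    if t[0] != 'app':
        return None
    args, h = [], t
    while h[0] == 'app':
        args.append(h[2])
        h = h[1]
    if h[0] == 'lam' and args:
        r = subst(h[2], h[1], args.pop())
        for a in reversed(args):
            r = ('app', r, a)
        return r
    return None

def weak_head_reduce(t, limit=1000):
    for _ in range(limit):
        t2 = weak_head_step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("did not terminate")

# (λz z)x weak-head-reduces to x ; substituting u for x before or
# after the reduction yields the same final term, as in lemma 4.12.
t = ('app', ('lam', 'z', ('var', 'z')), ('var', 'x'))
u = ('lam', 'w', ('var', 'w'))
left = subst(weak_head_reduce(t), 'x', u)
right = weak_head_reduce(subst(t, 'x', u))
print(left == right)    # True
```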
Theorem 4.13. For every λ-term t, the following conditions are equivalent :
i) t is normalizable ;
ii) t is normalizable by leftmost β-reduction ;
iii) there exist a type A and a context Γ, both containing no occurrence of the symbol Ω, such that Γ ⊢DΩ t : A ;
iv) there exist a type A with no positive occurrence of Ω, and a context Γ with no negative occurrence of Ω, such that Γ ⊢DΩ t : A.
Clearly, (ii) ⇒ (i) and (iii) ⇒ (iv). We already know that (iv) ⇒ (ii) : this is the normalization theorem 3.10. It remains to prove that (i) ⇒ (iii) : if t is normalizable, then t ≃β t′ for some normal term t′ ; by proposition 3.24, there exist a type A and a context Γ, both containing no occurrence of the symbol Ω, such that Γ ⊢D t′ : A. It then follows from theorem 4.7 that we also have Γ ⊢DΩ t : A. Q.E.D.
Theorem 4.14. A λ-term t is normalizable if and only if it admits no infinite quasi leftmost reduction.
The condition is obviously sufficient. Conversely, if t is normalizable then, by theorem 4.13, there exist a type A and a context Γ, both containing no occurrence of the symbol Ω, such that Γ ⊢DΩ t : A. Thus, it follows from the quasi leftmost normalization theorem 3.12 that t admits no infinite quasi leftmost reduction. Q.E.D.
With the help of the results above, we can now give yet another proof of the uniqueness of the normal form (the third ; see corollary 4.8), which makes no use of the Church-Rosser theorem 1.24.
Theorem 4.15. If t is normalizable, then it has only one normal form. In other words, if t β u, t β u′ and u, u′ are normal, then u = u′.
By theorem 4.13(i)⇒(ii), t is normalizable by leftmost β-reduction. We prove the theorem by induction on the total length of this reduction (i.e. the total number of symbols which appear in it). By proposition 2.2, we have t = λx_1 . . . λx_k (ξ)t_1 . . . t_n, where ξ is a variable or a redex.
If ξ is a variable, the leftmost β-reduction of t is exactly the succession of the leftmost β-reductions of t_1, . . . , t_n. Therefore, we can apply the induction hypothesis to t_1, . . . , t_n, and we see that t has only one normal form, which is λx_1 . . . λx_k (ξ)t*_1 . . . t*_n, where t*_i is the (unique) normal form of t_i.
If ξ = (λx u)v is a redex, the first step of leftmost β-reduction in t gives t** = λx_1 . . . λx_k u[v/x]t_1 . . . t_n. By the induction hypothesis, t** has a unique normal form t*. Consider now any β-reduction of t which gives a normal form ; we show that it gives t*. Since t = λx_1 . . . λx_k (λx u)v t_1 . . . t_n, this reduction begins with some β-reductions in u, v, t_1, . . . , t_n, which give λx_1 . . . λx_k (λx u′)v′ t′_1 . . . t′_n, with u β u′, v β v′, t_1 β t′_1, . . . , t_n β t′_n. Then the head redex is reduced, which gives λx_1 . . . λx_k u′[v′/x]t′_1 . . . t′_n. But β-reduction is a λ-compatible relation, and therefore we have t** β λx_1 . . . λx_k u′[v′/x]t′_1 . . . t′_n. This shows that this β-reduction will finally give a normal form of t**, i.e. t*. Q.E.D.
Strong normalization
The next proposition is a generalization of corollary 4.5. It holds for both systems D and DΩ (in the case of system DΩ, the condition “t is typable in the context Γ” is satisfied anyway, since Γ ⊢DΩ t : Ω).
Proposition 4.16. For all terms u, t, t_1, . . . , t_n, and any variable x, if Γ ⊢ (u[t/x])t_1 . . . t_n : B, and if t is typable in the context Γ, then Γ ⊢ (λx u)t t_1 . . . t_n : B.
The proof is by induction on n and, for each fixed n, by induction on the length of B. The case n = 0 is precisely corollary 4.5.
If B = B_1 ∧ B_2, then (u[t/x])t_1 . . . t_n may be given both type B_1 and type B_2 in the context Γ ; by induction hypothesis, the same holds for (λx u)t t_1 . . . t_n, which is thus typable in the context Γ, with type B_1 ∧ B_2.
Now we may suppose that B is a prime type and that n ≥ 1. We have Γ ⊢ (u[t/x])t_1 . . . t_n : B ; it follows from lemma 4.2(iii) that t_n is of type C, and (u[t/x])t_1 . . . t_{n−1} of type C → B′, in the context Γ, B being a prime factor of B′. By induction hypothesis, we have Γ ⊢ (λx u)t t_1 . . . t_{n−1} : C → B′. Therefore (λx u)t t_1 . . . t_n is of type B′, and hence also of type B, in the context Γ. Q.E.D.
Theorem 4.17. Every strongly normalizable term is typable in system D.
Consider a strongly normalizable term τ, and let N(τ) be the sum of the lengths of all possible normalizations of τ (proposition 3.18 ensures the correctness of this definition). The proof is by induction on N(τ). By proposition 2.2, we have τ = λx_1 . . . λx_m (v)t_1 . . . t_n, where v is either a variable or a redex.
If v is a variable, then t_1, . . . , t_n are strongly normalizable and we have N(τ) > N(t_1), . . . , N(t_n). Thus t_1, . . . , t_n are typable in system D, with types A_1, . . . , A_n respectively ; we may suppose that all these typings are in the same context Γ (proposition 3.23) and that Γ contains a declaration for each of the variables x_1, . . . , x_m, v, say x_1 : U_1, . . . , x_m : U_m, v : V (with V = U_i whenever v = x_i). Let X be a new type variable, V′ = V ∧ (A_1, . . . , A_n → X), and let Γ′ be the context obtained by replacing in Γ the declaration of v with v : V′. Then we have Γ′ ⊢D t_i : A_i (1 ≤ i ≤ n), and thus Γ′ ⊢D (v)t_1 . . . t_n : X ; hence τ may be given either type U_1, . . . , U_m → X (if v ≠ x_1, . . . , x_m) or type U_1, . . . , U_{i−1}, V′, U_{i+1}, . . . , U_m → X (if v = x_i).
If v = (λx u)t (v is a redex), then τ = λx_1 . . . λx_m (λx u)t t_1 . . . t_n ; let τ′ = (u[t/x])t_1 . . . t_n. Clearly, N(τ) > N(τ′) (every normalization of τ′ is strictly included in a normalization of τ) ; it is also clear that N(τ) > N(t) (since t is a subterm of τ). Thus, by induction hypothesis, τ′ and t are typable in system D ; moreover, proposition 3.23 allows us to assume that they are typable in the same context. It then follows from proposition 4.16 that (λx u)t t_1 . . . t_n is typable, with some type B, in some context Γ ; extending Γ if necessary,
Lambdacalculus, types and models
70
we may assume that Γ contains a declaration for each of the variables x_1, . . . , x_m, say x_1 : U_1, . . . , x_m : U_m. Finally, τ is seen to be typable, with type U_1, . . . , U_m → B. Q.E.D.
Corollary 4.18. A term is strongly normalizable if and only if it is typable in system D.
Indeed, by the strong normalization theorem 3.20, every term which is typable in system D is strongly normalizable.
Remarks. 1. Theorem 4.6 no longer holds if we replace system DΩ with system D. For instance, the term t = λy(λx y)(y)y is β-equivalent to λy y, which is of type Y → Y, where Y is any type variable. Now t may not be given type Y → Y : indeed, if ⊢D t : Y → Y then, by lemma 4.2(ii), we have y : Y ⊢D (λx y)(y)y : Y ; therefore, by lemma 4.2(iii), y : Y ⊢D (y)y : A for some type A ; hence y : Y ⊢D y : B → C (by lemma 4.2(iii)) ; but this contradicts lemma 4.2(i). Nevertheless, t is typable ; for example, it may be given type Y ∧ (Y → X) → Y ∧ (Y → X). There is an analogue of theorem 4.6 for system D, which uses βI-reduction instead of β-reduction (see theorem 4.21 below).
2. A normalizable term of which every proper subterm is strongly normalizable need not be strongly normalizable. For instance, the term t = (λx(λy z)(x)δ)δ, where δ = λx xx, is normalizable (it is β-equivalent to z), but not strongly normalizable (t reduces to (λy z)(δ)δ, and (δ)δ is not normalizable).
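Both remarks can be checked mechanically with a leftmost (normal-order) reducer. In the illustrative Python sketch below (our own tuple encoding of λ-terms, not part of the book), the term of remark 1 normalizes to λy y, and (δ)δ from remark 2 reduces to itself in one step, so no reduction sequence starting from it can terminate.

```python
def free(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free(t[2]) - {t[1]}
    return free(t[1]) | free(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[s/x]."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, b = t[1], t[2]
    if y == x:
        return t
    if y in free(s):
        y2 = y
        while y2 in free(s) | free(b) | {x}:
            y2 += "'"
        b, y = subst(b, y, ('var', y2)), y2
    return ('lam', y, subst(b, x, s))

def leftmost_step(t):
    """Contract the leftmost redex of t (normal-order strategy)."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    if t[0] == 'lam':
        b = leftmost_step(t[2])
        return None if b is None else ('lam', t[1], b)
    if t[0] == 'app':
        f = leftmost_step(t[1])
        if f is not None:
            return ('app', f, t[2])
        a = leftmost_step(t[2])
        return None if a is None else ('app', t[1], a)
    return None

def normalize(t, limit=1000):
    for _ in range(limit):
        t2 = leftmost_step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("not normalizable within the step limit")

# Remark 1 : t = λy(λx y)(y)y is β-equivalent to λy y.
t = ('lam', 'y', ('app', ('lam', 'x', ('var', 'y')),
                  ('app', ('var', 'y'), ('var', 'y'))))
print(normalize(t))              # ('lam', 'y', ('var', 'y'))

# Remark 2 : (δ)δ reduces to itself, hence is not normalizable.
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
dd = ('app', delta, delta)
print(leftmost_step(dd) == dd)   # True
```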
βI-reduction
A λ-term of the form (λx t)u will be called an I-redex if x is a free variable of t. Reducing an I-redex will be called a step of βI-reduction. A finite sequence of such steps will be called a βI-reduction. The notation t βI t′ means that t′ is obtained by βI-reduction from t. We will now prove the following result (Barendregt's conservation theorem) :
Theorem 4.19. If t′ is strongly normalizable and if t βI t′, then t is strongly normalizable.
Lemma 4.20. If Γ ⊢D u[v/x] : A and if x is free in u, then v is typable, in system D, in the context Γ.
We first observe that the result is trivial if u is a variable : indeed, this variable must be x. Therefore, from now on, we assume that u is not a variable.
We prove the lemma by induction on the length of the proof of the typing Γ ⊢D u[v/x] : A in system D. Consider the last rule used in this proof (page 51).
If it is rule 1, then u[v/x] is a variable, thus u must also be a variable.
If it is rule 2, then u[v/x] = λy w and we have A = B → C and Γ, y : B ⊢ w : C. Now, u is not an application (otherwise u[v/x] would also be an application) and we assumed it is not a variable. Therefore, we have u = λy u′ and w = u′[v/x]. Thus Γ, y : B ⊢ u′[v/x] : C is the previous step of the proof. Now, the variable x is free in u′, since it is free in u. By the induction hypothesis, we see that v is typable, in system D, in the context Γ, y : B. But y is not free in v, and it follows from proposition 3.14 that v is typable in the context Γ.
If it is rule 3, then u[v/x] = w_0 w_1 and we have Γ ⊢ w_0 : B → A, Γ ⊢ w_1 : B. Now, u is not an abstraction (otherwise u[v/x] would also be an abstraction) and we assumed it is not a variable. Therefore, we have u = u_0 u_1 and w_0 = u_0[v/x], w_1 = u_1[v/x]. Thus, some previous steps of the proof are Γ ⊢ u_0[v/x] : B → A and Γ ⊢ u_1[v/x] : B. But x is free in u = u_0 u_1, and therefore it is free in u_0 or in u_1. We may thus apply the induction hypothesis, and we see that v is typable, in system D, in the context Γ.
The case of rules 4 and 5 is trivial. Q.E.D.
Theorem 4.21. Let t and t′ be two λ-terms such that t βI t′. If Γ ⊢D t′ : A, then Γ ⊢D t : A.
Remark. Thus, the typings in system D are preserved by inverse βI-reduction. This theorem is close to theorem 4.6, which says that, in system DΩ, the typings are preserved by inverse β-reduction.
We may assume that t′ is obtained from t by one step of βI-reduction. The proof is by induction on the length of t and, for each fixed t, by induction on the length of A. It is exactly the same as for theorem 4.6, except for :
• the very first step : of course, the case A = Ω is not considered ;
• the very last step (iii)(c), which is handled as follows :
c) u = λx w (so t = (λx w)v) and t′ = w[v/x]. Since we have a step of βI-reduction, the variable x is free in w. Now, the assumption is Γ ⊢D w[v/x] : A. By lemma 4.20, v is typable in the context Γ, in system D. By corollary 4.5, we also have Γ ⊢D (λx w)v : A. Q.E.D.
We can now prove theorem 4.19 : if t′ is strongly normalizable, it is typable in system D (corollary 4.18). By theorem 4.21, t is also typable in system D ; thus, by corollary 4.18, t is strongly normalizable. Q.E.D.
Two redexes (λx t)u and (λx′ t′)u′ will be called equivalent if u = u′ and t[u/x] = t′[u′/x′] (they have identical arguments and reducts). A redex which is equivalent to an I-redex will be called an I′-redex. For example, (λx uv)u is always an I′-redex, even when x is not free in u, v : indeed, in this case, it is equivalent to the I-redex (λx xv)u. We shall write t βI′ t′ if t′ is obtained from t by a sequence of reductions of I′-redexes. We can strengthen theorems 4.21 and 4.19 in the following way, with exactly the same proof :
Theorem 4.22. Let t and t′ be two λ-terms such that t βI′ t′. If Γ ⊢D t′ : A, then Γ ⊢D t : A.
Theorem 4.23. If t′ is strongly normalizable and if t βI′ t′, then t is strongly normalizable.
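Deciding whether (λx t)u is an I-redex is exactly a free-variable test on t. A minimal Python sketch, with our usual illustrative tuple encoding (our own convention, not from the book) :

```python
def free(t):
    """Free variables of a λ-term encoded as
    ('var', x) / ('lam', x, body) / ('app', f, a)."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free(t[2]) - {t[1]}
    return free(t[1]) | free(t[2])

def is_I_redex(t):
    """(λx b)u is an I-redex iff x occurs free in b."""
    return (t[0] == 'app' and t[1][0] == 'lam'
            and t[1][1] in free(t[1][2]))

u = ('var', 'u')
print(is_I_redex(('app', ('lam', 'x', ('var', 'x')), u)))  # (λx x)u : True
print(is_I_redex(('app', ('lam', 'x', ('var', 'y')), u)))  # (λx y)u : False
```

Recognizing an I′-redex would, in addition, require comparing the argument and reduct with those of some I-redex, as in the definition above.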
The λI-calculus
The terms of the λI-calculus form a subset ΛI of Λ, which is defined as follows :
• if x is a variable, then x ∈ ΛI ;
• if t, u ∈ ΛI, then tu ∈ ΛI ;
• if t ∈ ΛI and x is a variable which appears free in t, then λx t ∈ ΛI.
The typical example of a closed λ-term which is not in ΛI is λxλy x. If t ∈ ΛI, then every subterm of t is in ΛI (trivial proof, by induction on the length of t).
Proposition 4.24. If t, t_1, . . . , t_n ∈ ΛI, then t[t_1/x_1, . . . , t_n/x_n] ∈ ΛI.
Proof by induction on the length of t : the result is immediate if t is a variable, or if t = uv with u, v ∈ ΛI. If t = λx u, then t[t_1/x_1, . . . , t_n/x_n] = λx u[t_1/x_1, . . . , t_n/x_n] (we suppose x ≠ x_1, . . . , x_n). By hypothesis, there is a free occurrence of x in u and therefore there is also one in u[t_1/x_1, . . . , t_n/x_n]. By induction hypothesis, we have u[t_1/x_1, . . . , t_n/x_n] ∈ ΛI. It follows that λx u[t_1/x_1, . . . , t_n/x_n] ∈ ΛI. Q.E.D.
Proposition 4.25. ΛI is closed under β-reduction. More precisely, if t ∈ ΛI and t β t′, then t′ ∈ ΛI and t′ has the same free variables as t.
Suppose t ∈ ΛI and t β₀ t′ ; we show the result by induction on the length of t ; observe that t cannot be a variable.
If t = λx u, then t′ = λx u′ with u β₀ u′. Since u ∈ ΛI and x is a free variable of u, by induction hypothesis, u′ has the same properties. It follows that t′ ∈ ΛI and t′ has the same free variables as t.
If t = uv, we have three possibilities for t′ :
• t′ = u′v with u β₀ u′ ; by induction hypothesis, we have u′ ∈ ΛI and u′ has the same free variables as u. Hence t′ ∈ ΛI and t′ has the same free variables as t.
• t′ = uv′ with v β₀ v′ ; same proof.
• u = λx w (so that t = (λx w)v), and t′ = w[v/x] ; we have v, w ∈ ΛI and therefore, by proposition 4.24, we have t′ ∈ ΛI. Now, let F_v (resp. F_w) be the set of free variables of v (resp. w) ; thus, we have x ∈ F_w. The set of free variables of t is F_v ∪ (F_w \ {x}). The set of free variables of t′ is the same, because v really occurs as a subterm of t′ = w[v/x] (since x is free in w). Q.E.D.
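Membership in ΛI is a simple recursive condition : every abstracted variable must occur free in the body. An illustrative Python sketch (the tuple encoding is our own convention, not from the book) :

```python
def free(t):
    """Free variables of a λ-term encoded as
    ('var', x) / ('lam', x, body) / ('app', f, a)."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free(t[2]) - {t[1]}
    return free(t[1]) | free(t[2])

def in_lambda_I(t):
    """t ∈ ΛI iff every subterm λx u of t satisfies : x is free in u."""
    if t[0] == 'var':
        return True
    if t[0] == 'app':
        return in_lambda_I(t[1]) and in_lambda_I(t[2])
    x, u = t[1], t[2]
    return x in free(u) and in_lambda_I(u)

I = ('lam', 'x', ('var', 'x'))
delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))   # λxλy x, the typical non-member
print(in_lambda_I(I), in_lambda_I(delta), in_lambda_I(K))   # True True False
```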
Theorem 4.26. If t ∈ ΛI is normalizable, then t is strongly normalizable.
We first prove the following lemma on strong normalization :
Lemma 4.27. Let t_1, . . . , t_n, u, v ∈ Λ be such that (u[v/x])t_1 . . . t_n and v are strongly normalizable. Then (λx u)v t_1 . . . t_n is strongly normalizable.
By corollary 4.18, we know that (u[v/x])t_1 . . . t_n and v are typable in system D. By proposition 3.23, they are typable in the same context. Then we apply proposition 4.16, which shows that (λx u)v t_1 . . . t_n is typable in system D. Applying corollary 4.18 again, we see that (λx u)v t_1 . . . t_n is strongly normalizable.
We can also give a more direct proof, which does not use types. Suppose that there exists an infinite sequence of β-reductions for the λ-term (λx u)v t_1 . . . t_n. There are two possible cases :
• Each β-reduction takes place in one of the terms u, v, t_1, . . . , t_n. Thus, there is an infinite sequence of β-reductions in one of these terms. But it cannot be v, which is strongly normalizable ; and it can be neither u, nor t_1, . . . , nor t_n, because (u[v/x])t_1 . . . t_n is strongly normalizable.
• The sequence begins with a finite number of β-reductions in the terms u, v, t_1, . . . , t_n, and then the head redex is reduced. This gives (λx u′)v′ t′_1 . . . t′_n, with u β u′, v β v′, t_1 β t′_1, . . . , t_n β t′_n, and then u′[v′/x]t′_1 . . . t′_n. Therefore, this term is not strongly normalizable. But β-reduction is a λ-compatible relation, and it follows that (u[v/x])t_1 . . . t_n β u′[v′/x]t′_1 . . . t′_n. Therefore, (u[v/x])t_1 . . . t_n is also not strongly normalizable, which is a contradiction. Q.E.D.
Now, we prove theorem 4.26 : by theorem 4.13, we know that t is normalizable by leftmost reduction. We prove the result by induction on the total length of this leftmost reduction (i.e. the sum of the lengths of the λ-terms which appear in it). By proposition 2.2, there are two possibilities for t :
• t = λx_1 . . . λx_m (y)t_1 . . . t_n, where y is a variable. Then we have t_1, . . . , t_n ∈ ΛI, and their leftmost reductions are strictly shorter than that of t. By induction hypothesis, they are all strongly normalizable, and so is t.
• t = λx_1 . . . λx_m (λx u)v t_1 . . . t_n ; we have to show that (λx u)v t_1 . . . t_n is strongly normalizable. By lemma 4.27, it suffices to show that (u[v/x])t_1 . . . t_n and v are strongly normalizable. Now, (u[v/x])t_1 . . . t_n is obtained by β-reduction from (λx u)v t_1 . . . t_n ∈ ΛI. Thus, (u[v/x])t_1 . . . t_n ∈ ΛI (proposition 4.25). It is clear that its leftmost reduction is strictly shorter than that of t = λx_1 . . . λx_m (λx u)v t_1 . . . t_n. Thus, by induction hypothesis, we see that (u[v/x])t_1 . . . t_n is strongly normalizable. But λx u ∈ ΛI, because it is a subterm of t ; thus, x is a free variable of u. It follows that v is a subterm of (u[v/x])t_1 . . . t_n, and therefore v is also strongly normalizable. Q.E.D.
There is a short proof of theorem 4.26 by means of the above results on βI-reduction : suppose that t ∈ ΛI is normalizable and let t′ be its normal form. Then t′ is typable in system D (proposition 3.24). But we have t βI t′, since the reduction of t takes place in ΛI. Therefore, by theorem 4.21, t is typable in system D, and thus t is strongly normalizable (theorem 3.20). Q.E.D.
βη-reduction
Let X_1, . . . , X_k be distinct type variables, A a type, Γ a context, and U_1, . . . , U_k arbitrary types. The type (resp. the context) obtained by replacing, in A (resp. in Γ), each occurrence of X_i by U_i (1 ≤ i ≤ k) will be denoted by A[U_1/X_1, . . . , U_k/X_k] (resp. Γ[U_1/X_1, . . . , U_k/X_k]). The next two propositions hold for both systems D and DΩ.
Proposition 4.28. If Γ ⊢ t : A, then Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t : A[U_1/X_1, . . . , U_k/X_k].
Immediate, by induction on the number of rules used to obtain Γ ⊢ t : A. Q.E.D.
Proposition 4.29. Suppose t η₀ t′ and Γ ⊢ t′ : A, and let X_1, . . . , X_k be the type variables which occur either in Γ or in A. Then Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t : A[U_1/X_1, . . . , U_k/X_k] for all types U_1, . . . , U_k of the form V → W.
Recall that t η₀ t′ means that t′ is obtained from t by one η-reduction. The proof of the proposition is by induction on the length of t and, for a given t, by induction on the length of A.
If A = Ω, the result is trivial. If A = A_1 ∧ A_2, then Γ ⊢ t′ : A_1, Γ ⊢ t′ : A_2. By induction hypothesis, we have Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t : A_i[U_1/X_1, . . . , U_k/X_k] (i = 1, 2) ; therefore, by rule 5, Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t : A[U_1/X_1, . . . , U_k/X_k]. So we may now suppose that A is a prime type. The three possible situations for t are :
i) t is a variable : this is impossible since t η₀ t′.
ii) t = λx u ; then we have two possible cases for t′ :
a) t′ = λx u′, with u η₀ u′. Since Γ ⊢ t′ : A (a prime type), it follows from lemma 4.2(ii) that A = B → C and Γ, x : B ⊢ u′ : C. By induction hypothesis : Γ[U_1/X_1, . . . , U_k/X_k], x : B[U_1/X_1, . . . , U_k/X_k] ⊢ u : C[U_1/X_1, . . . , U_k/X_k] for all types U_i of the form V → W. Thus t is of type B[U_1/X_1, . . . , U_k/X_k] → C[U_1/X_1, . . . , U_k/X_k] = A[U_1/X_1, . . . , U_k/X_k] in the context Γ[U_1/X_1, . . . , U_k/X_k].
b) t = λx t′x, and x does not occur free in t′. By assumption, we have Γ ⊢ t′ : A, A being a prime type. According to the definition of prime types, we have two cases :
If A = B → C, then Γ, x : B ⊢ t′x : C ; hence Γ ⊢ λx t′x : B → C, in other words Γ ⊢ t : A ; by proposition 4.28, we have Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t : A[U_1/X_1, . . . , U_k/X_k].
If A is a type variable X_i, then Γ ⊢ t′ : X_i ; therefore, by proposition 4.28, we have Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t′ : U_i. Now, by assumption, U_i = V → W. It then follows that Γ[U_1/X_1, . . . , U_k/X_k], x : V ⊢ t′x : W and, consequently, Γ[U_1/X_1, . . . , U_k/X_k] ⊢ λx t′x : U_i, that is to say Γ[U_1/X_1, . . . , U_k/X_k] ⊢ t : U_i.
iii) t = uv ; again, we have two possible cases for t′ :
a) t′ = uv′, with v η₀ v′ ; since uv′ is of prime type A in the context Γ, it follows from lemma 4.2(iii) that v′ is of type B and u of type B → A′ in the context Γ, A being a prime factor of A′. By induction hypothesis, Γ[U_1/X_1, . . . , U_k/X_k] ⊢ v : B[U_1/X_1, . . . , U_k/X_k] for all types U_i of the form V → W. By proposition 4.28, we have :
Γ[U_1/X_1, . . . , U_k/X_k] ⊢ u : B[U_1/X_1, . . . , U_k/X_k] → A′[U_1/X_1, . . . , U_k/X_k]. Thus t = uv is of type A′[U_1/X_1, . . . , U_k/X_k] in the context Γ[U_1/X_1, . . . , U_k/X_k], and hence is also of type A[U_1/X_1, . . . , U_k/X_k].
b) t′ = u′v, with u η₀ u′ ; the proof is the same as in case (a). Q.E.D.
Theorem 4.30. A λ-term is βη-normalizable if and only if it is normalizable.
Necessity : let t be a βη-normalizable term ; we prove that t is normalizable, by induction on the length of its βη-normalization. Consider the first βη-reduction step performed in t : it produces a term t′, which is normalizable by induction hypothesis. If it is a β-reduction, then t β₀ t′, thus t is also normalizable. If it is an η-reduction, then t η₀ t′ ; since t′ is normalizable, we have Γ ⊢DΩ t′ : A, where both A and Γ contain no occurrence of the symbol Ω (theorem 4.13). By proposition 4.29, there exist a type A′ and a context Γ′, with no occurrence of Ω, such that Γ′ ⊢DΩ t : A′ ; it then follows from theorem 4.13 that t is normalizable.
Sufficiency : if t is normalizable, then t β t′ for some normal term t′ ; consider a maximal sequence of η-reductions starting with t′ (such a sequence needs to be finite, since the length of terms strictly decreases under η-reduction) : it produces a term which is still normal (lemma 3.27) and contains no η-redex, in other words a βη-normal term. Q.E.D.
We can now give an alternative proof of the uniqueness of the βη-normal form :
Theorem 4.31. If t ∈ Λ is βη-normalizable, then it has only one βη-normal form. More precisely, there exists a βη-normal term u such that, if t βη t′ for some t′, then t′ βη u.
Remark. This is exactly the Church-Rosser property for t.
By theorem 4.30, t is normalizable ; by theorem 4.13(i)⇒(iii), there exist a type A and a context Γ, both containing no occurrence of the symbol Ω, such that Γ ⊢DΩ t : A. Then the result follows immediately from theorem 3.13. Q.E.D.
Theorem 4.32. A λ-term t is solvable if, and only if, there exists a head normal form u such that t βη u.
If t is solvable, then t β u for some head normal form u and, therefore, t βη u. Conversely, suppose that t βη u, u being a head normal form. Then there exists a sequence t_0, t_1, . . . , t_n such that t_0 = t, t_n is solvable and, for each i = 0, . . . , n − 1, we have t_i β t_{i+1} or t_i η₀ t_{i+1}.
We show that t is solvable by induction on n. This is trivial if n = 0. If n ≥ 1, then t_1 is solvable, by induction hypothesis, and there are two cases :
i) t_0 β t_1 ; then t = t_0 is solvable.
ii) t_0 η₀ t_1 ; since t_1 is solvable, by theorem 4.9(i)⇔(iv), it is typable with a non-trivial type in system DΩ. By proposition 4.29, t = t_0 has the same property ; it is therefore solvable, again by theorem 4.9(i)⇔(iv). Q.E.D.
The finite developments theorem
Remark. Until the end of this chapter, we shall only use the Church-Rosser theorem 1.24 and the strong normalization theorem 3.20.
Let t ∈ Λ ; recall that a redex in t is, by definition, an occurrence, in t, of a subterm of the form (λx u)v. In other words, a redex is defined by a subterm of the form (λx u)v, together with its position in t. So we clearly have the following inductive definition for the redexes of a term t :
if t is a variable, then there is no redex in t ;
if t = λx u, the redexes in t are those in u ;
if t = uv, the redexes in t are those in u, those in v, and, if u starts with λ, t itself.
We add to the λ-calculus a new variable, denoted by c, and we define Λ(c) as the least set of terms satisfying the following rules :
1. if x is a variable ≠ c, then x ∈ Λ(c) ;
2. if x is a variable ≠ c, and if t ∈ Λ(c), then λx t ∈ Λ(c) ;
3. if t, u ∈ Λ(c), then (c)tu ∈ Λ(c) ;
4. if t, u ∈ Λ(c), and if t starts with λ, then tu ∈ Λ(c).
Lemma 4.33. If t, u ∈ Λ(c), and if x is a variable ≠ c, then u[t/x] ∈ Λ(c).
The proof is by induction on u. The result is obvious whenever u is a variable ≠ c, or u = λy v, or u = (c)v w. If u = (λy v)w, then u[t/x] = (λy v[t/x])w[t/x]. By induction hypothesis, v[t/x], w[t/x] ∈ Λ(c), and therefore u[t/x] ∈ Λ(c). Q.E.D.
Lemma 4.34. If t ∈ Λ(c) and t β₀ t′, then t′ ∈ Λ(c).
By induction on t. If t = λx u, then t′ = λx u′ with u β₀ u′ ; the conclusion follows from the induction hypothesis. If t = (c)uv, then t′ = (c)u′v or (c)uv′, with u β₀ u′ or v β₀ v′. By induction hypothesis, u′, v′ ∈ Λ(c), and therefore t′ ∈ Λ(c). If t = (λx u)v, there are three possibilities for t′ :
t′ = (λx u′)v or (λx u)v′, with u β₀ u′ or v β₀ v′. By induction hypothesis, u′, v′ ∈ Λ(c), and then t′ ∈ Λ(c).
t′ = u[v/x] ; then t′ ∈ Λ(c) by lemma 4.33. Q.E.D.
We see that Λ(c) is invariant under β-reduction (if t ∈ Λ(c) and t β t′, then t′ ∈ Λ(c)).
Lemma 4.35. Let t ∈ Λ(c), and let Γ be any context in which all the variables of t, except c, are declared. Then there exist two types C, T of system D such that Γ, c : C ⊢D t : T.
Proof by induction on t : this is obvious when t is a variable ≠ c.
If t = λx u, we can assume that the variable x is not declared in Γ (otherwise, we change the name of this variable in t). By induction hypothesis, we have Γ, x : A, c : C ⊢ u : U, and therefore Γ, c : C ⊢ λx u : A → U.
If t = (c)uv, with u, v ∈ Λ(c), then, by induction hypothesis : Γ, c : C ⊢ u : U, and Γ, c : C′ ⊢ v : V. Hence Γ, c : C ∧ C′ ∧ (U, V → W) ⊢ (c)uv : W.
If t = (λx u)v, with u, v ∈ Λ(c), we may assume that the variable x is not declared in Γ (otherwise, we change the name of this variable in λx u). By induction hypothesis : Γ, x : A, c : C ⊢ u : U, and Γ, c : C′ ⊢ v : V ; but here A is an arbitrary type, so we can take A = V. Then Γ, c : C ⊢ λx u : V → U, and therefore Γ, c : C ∧ C′ ⊢ (λx u)v : U. Q.E.D.
Corollary 4.36. Every term in Λ(c) is strongly normalizable. This is immediate, according to the strong normalization theorem 3.20. Q.E.D.
We define a mapping from Λ(c) onto Λ, denoted by T ↦ |T|, by induction on T :
if T is a variable ≠ c, then |T| = T ;
if T = λx U, with U ∈ Λ(c), then |T| = λx|U| ;
if T = (c)UV, with U, V ∈ Λ(c), then |T| = (|U|)|V| ;
if T = (λx U)V, with U, V ∈ Λ(c), then |T| = (λx|U|)|V|.
Roughly speaking, one obtains |T| by “forgetting” c in T. Let T ∈ Λ(c) and t = |T| ; there is an obvious way of associating, with each redex R in T, a redex r = |R| in t, called the image of R. Distinct redexes in T have distinct images in t ; this property, like the next ones, is immediate, by induction on T :
Chapter 4. Normalization and standardization
If T, U ∈ Λ(c), and |T| = t, |U| = u, then |T[U/x]| = t[u/x].
Let T ∈ Λ(c), R be a redex in T, T′ the term obtained by contracting R in T, t = |T|, r = |R|, and t′ = |T′| ; then t′ is the term obtained by contracting the redex r in t.
Lemma 4.37. Let t ∈ Λ and R be a set of redexes of t. Then there exists a unique term T ∈ Λ(c) such that t = |T| and R is the set of all images of the redexes of T.
This term T will be called the representative of (t, R). So we have a one-to-one correspondence between Λ(c) and the set of ordered pairs (t, R) such that t ∈ Λ and R is a set of redexes of t.
We define T by induction on t. If t is a variable, then R = ∅ ; the only way of obtaining a term T ∈ Λ(c) such that |T| is a variable is to use rule 1 in the inductive definition of Λ(c) given above. Thus T = t. If t = λx u, then R is a set of redexes of u. Only rule 2 can produce a term T such that |T| starts with λ. So T = λx U, and U needs to be the representative of (u, R). If t = t₁t₂, let R₁ (resp. R₂) be the subset of R consisting of those redexes which occur in t₁ (resp. t₂). T is obtained by rule 3 or rule 4, thus either T = (c)T₁T₂, or T = T₁T₂, Tᵢ being the representative of (tᵢ, Rᵢ). If t itself is not a member of R, then T cannot be obtained by rule 4 ; otherwise T would be a redex, and its image t would be in R. Thus T = (c)T₁T₂. If t is a member of R, then T needs to be a redex, so T cannot be obtained by rule 3, and therefore T = T₁T₂.
Q.E.D.
Intuitively, the representative of (t, R) is obtained by using the variable c to “destroy” those redexes of t which are not in R, and to “neutralize” the applications in such a way that they cannot be transformed into redexes via β-reduction.
Let t ∈ Λ, R be a set of redexes of t, r₀ a redex of t, and t′ the term obtained by contracting r₀ in t. We define a set R′ of redexes of t′ called residues of R relative to r₀ : let S = R ∪ {r₀}, T be the representative of (t, S), R₀ the redex of T of which r₀ is the image, and T′ the term obtained by contracting R₀ in T ; so we have t′ = |T′|. Then R′ is, by definition, the set of images in t′ of the redexes of T′.
Remark. The set of residues of R relative to r₀ does not only depend on t and t′, but also on the redex r₀. For example, take t = (λx x)(λx x)x, t′ = (λx x)x, r₀ = t and r₁ = t′ (as a subterm of t) : clearly, t′ is obtained by contracting either the redex r₀ or the redex r₁ in t ; but {r₀} has a residue relative to r₁, while it has no residue relative to r₀.
Let t ∈ Λ ; a reduction B starting with t consists, by definition, of a finite sequence of terms (t 0 = t ), t 1 , . . . , t n , together with a sequence of redexes :
r₀, r₁, . . . , rₙ₋₁, such that each rᵢ is a redex of tᵢ, and tᵢ₊₁ is obtained by contracting the redex rᵢ in the term tᵢ (0 ≤ i < n). The term tₙ is called the result of the reduction B. We shall also say that the reduction B leads from t to tₙ.
Now let R be a set of redexes of t. We define the set of residues of R in tₙ, relative to the reduction B, by induction on n : we just gave the definition for the case n = 1 ; suppose n > 1, and let Rₙ₋₁ be the set of residues of R in tₙ₋₁ relative to B ; then the residues of R in tₙ relative to B are the residues of Rₙ₋₁ in tₙ relative to rₙ₋₁.
Let t ∈ Λ and R be a set of redexes of t. A development of (t, R) is, by definition, a reduction D starting with t such that its redexes r₀, r₁, . . . , rₙ₋₁ satisfy the following conditions : r₀ ∈ R, and rᵢ is a residue of R relative to the reduction r₀, r₁, . . . , rᵢ₋₁ (0 < i < n). The development is said to be complete provided that R has no residue in tₙ relative to the reduction D.
The main purpose of the next theorem is to prove that the lengths of the developments of a set of redexes are bounded.
Theorem 4.38 (Finite developments theorem). Let t ∈ Λ, and R be a set of redexes of t. Then :
i) There exists an integer N such that the length of every development of (t, R) is ≤ N.
ii) Every development of (t, R) can be extended to a complete development.
iii) All complete developments of (t, R) have the same result.
Let D be a development of (t, R), (t₀ = t), t₁, . . . , tₙ its sequence of terms, r₀, r₁, . . . , rₙ₋₁ its sequence of redexes, Rᵢ the set of residues of R in tᵢ relative to the β-reduction r₀, . . . , rᵢ₋₁ (1 ≤ i ≤ n), and R₀ = R. We have r₀ ∈ R₀, each tᵢ (1 ≤ i ≤ n) is obtained by contracting the redex rᵢ₋₁ in tᵢ₋₁, and rᵢ ∈ Rᵢ. Therefore Rᵢ is the set of residues of Rᵢ₋₁ relative to rᵢ₋₁.
Let T ∈ Λ(c) be the representative of (t, R) and Tᵢ ∈ Λ(c) (0 ≤ i ≤ n) the representative of (tᵢ, Rᵢ) (T₀ = T). Since rᵢ ∈ Rᵢ, rᵢ is the image of a redex of Tᵢ, which we denote by Rᵢ. Let Uᵢ₊₁ ∈ Λ(c) (0 ≤ i ≤ n−1) be the term obtained by contracting the redex Rᵢ in Tᵢ. Then |Uᵢ₊₁| = tᵢ₊₁ (the term obtained by contracting the redex rᵢ in tᵢ). The set of all images of the redexes of Uᵢ₊₁ is therefore the set of residues of Rᵢ in tᵢ₊₁ relative to rᵢ (by definition of this set of residues), that is to say Rᵢ₊₁. Consequently, Uᵢ₊₁ is the representative of (tᵢ₊₁, Rᵢ₊₁), and therefore Uᵢ₊₁ = Tᵢ₊₁. So we have proved that the sequence of terms (T₀ = T), T₁, . . . , Tₙ and the sequence of redexes R₀, R₁, . . . , Rₙ₋₁ form a reduction B(D) of T. Clearly, the mapping D → B(D) is a one-to-one correspondence between the developments of (t, R) and the reductions of its representative T. In particular, the length of any development of (t, R) is that of some reduction of T. Thus it is ≤ N, where N is the maximum of the lengths of the reductions of T (T ∈ Λ(c) is strongly normalizable). Moreover, every reduction of T can be extended to a reduction which reaches the normal form of T. Because of the correspondence defined above, this implies that every development of (t, R) can be extended to a development in which the last term contains no residue of R, in other words to a complete development. Finally, if (t₀ = t), t₁, . . . , tₙ is a complete development of (t, R), and if the corresponding reduction of T is (T₀ = T), T₁, . . . , Tₙ, then Tₙ is the normal form of T ; therefore, tₙ = |Tₙ| does not depend on the development.
Q.E.D.
3. The standardization theorem
Let t be a λ-term. Any redex of t which is not the head redex will be called an internal redex of t. An internal reduction (resp. head reduction) is, by definition, a sequence t₁, . . . , tₙ of λ-terms such that tᵢ₊₁ is obtained by contracting an internal redex (resp. the head redex) of tᵢ. A standard reduction consists of a head reduction followed by an internal one.
Theorem 4.39 (Standardization theorem). If t β t′, then there is a standard reduction leading from t to t′.
Let t be a λ-term, R a set of redexes of t, and N(R) the sum of the lengths of all complete developments of (t, R). Consider the result u of any complete development of (t, R) ; we shall write t −R→ u. The finite developments theorem ensures that N(R) and u are uniquely determined (if R = ∅, then N(R) = 0 and u ≡ t). We shall say that the set R is internal if all the members of R are internal redexes of t.
Lemma 4.40. Let r be an internal redex of t, and t′ the term obtained by contracting r. If t′ has a head redex, then this is the only residue, relative to r, of the head redex of t.
The term t cannot be a head normal form, otherwise t′ would also be one. So we have t ≡ λx₁ . . . λxₘ(λy u)v t₁ . . . tₙ. The result of the contraction of the redex r is the term : t′ ≡ λx₁ . . . λxₘ(λy u′)v′ t′₁ . . . t′ₙ, and the head redex of t′ can be seen to be the only residue (relative to r) of the head redex of t.
Q.E.D.
Corollary 4.41. Let R be an internal set of redexes of t. Then every development of (t, R) is an internal reduction of t ; if t′ is the result of a development of (t, R), then the head redex of t′ (if there is one) is the only residue of the head redex of t.
By lemma 4.40, every residue of an internal redex of t relative to an internal redex of t is an internal redex ; this proves the first part of the corollary. For the second one, it is enough to apply repeatedly the same lemma. Q.E.D.
We shall call head reduced image of a term t any term obtained from t by head reduction.
Theorem 4.42. Consider a sequence t₀, t₁, . . . , tₙ of λ-terms, and, for each i, a set Rᵢ of redexes of tᵢ, such that : t₀ −R₀→ t₁ −R₁→ t₂ · · · tₙ₋₁ −Rₙ₋₁→ tₙ. Then there exist a sequence u₀, u₁, . . . , uₙ of terms, and, for each i, a set Sᵢ of internal redexes of uᵢ, such that : u₀ −S₀→ u₁ −S₁→ u₂ · · · uₙ₋₁ −Sₙ₋₁→ uₙ, u₀ is a head reduced image of t₀, and uₙ ≡ tₙ.
The proof is by induction on the n-tuple (N(Rₙ₋₁), . . . , N(R₀)), with the lexicographical order on the n-tuples of integers. The result is obvious if all the Rᵢ’s are internal. Otherwise, consider the least integer k such that tₖ has a head redex, which is in Rₖ.
If k = 0, then t₀ has a head redex ρ, which is in R₀. Let t′₀ be the term obtained by contracting the redex ρ, and R′₀ the set of residues of R₀ relative to ρ. We have t₀ −R₀→ t₁, and therefore t′₀ −R′₀→ t₁. Moreover, it is clear that N(R′₀) < N(R₀). Thus we obtain the expected conclusion by applying the induction hypothesis to the sequence : t′₀ −R′₀→ t₁ −R₁→ t₂ · · · tₙ₋₁ −Rₙ₋₁→ tₙ.
Now suppose k > 0, and let ρₖ be the head redex of tₖ, t′ₖ the term obtained by contracting that redex, and R′ₖ the set of residues of Rₖ relative to ρₖ. Since ρₖ ∈ Rₖ, and tₖ −Rₖ→ tₖ₊₁, we clearly have N(R′ₖ) < N(Rₖ) and t′ₖ −R′ₖ→ tₖ₊₁.
On the other hand, Rₖ₋₁ is an internal set of redexes of tₖ₋₁, so by the previous corollary there is an internal reduction which leads from tₖ₋₁ to tₖ. Thus tₖ₋₁ has a head redex, which we denote by ρₖ₋₁. Now let R′ₖ₋₁ = Rₖ₋₁ ∪ {ρₖ₋₁} ; the result of a complete development of tₖ₋₁ relative to R′ₖ₋₁ can be obtained by taking the result tₖ of a complete development of tₖ₋₁ relative to Rₖ₋₁, then the result of a complete development of tₖ relative to the set of residues of ρₖ₋₁ relative to Rₖ₋₁. But there is only one such residue, namely the head redex of tₖ. So the result is t′ₖ, and therefore we have :
t₀ −R₀→ t₁ · · · tₖ₋₁ −R′ₖ₋₁→ t′ₖ −R′ₖ→ tₖ₊₁ · · · tₙ₋₁ −Rₙ₋₁→ tₙ.
This yields the conclusion, since the induction hypothesis applies ; indeed, we have :
(N(Rₙ₋₁), . . . , N(Rₖ₊₁), N(R′ₖ), N(R′ₖ₋₁), . . . , N(R₀)) < (N(Rₙ₋₁), . . . , N(Rₖ₊₁), N(Rₖ), N(Rₖ₋₁), . . . , N(R₀)),
since N(R′ₖ) < N(Rₖ).
Q.E.D.
Now we are able to complete the proof of the standardization theorem : consider a reduction (t₀ = t), t₁, . . . , tₙ₋₁, (tₙ = t′) which leads from t to t′. One obtains tᵢ₊₁ from tᵢ by contracting a redex rᵢ of tᵢ, that is by a complete development of the set Rᵢ = {rᵢ}. Thus, by theorem 4.42, there exists a sequence u₀ −S₀→ u₁ −S₁→ u₂ · · · uₙ₋₁ −Sₙ₋₁→ uₙ such that u₀ is a head reduced image of t₀, uₙ ≡ tₙ and Sᵢ is an internal set of redexes of uᵢ. Hence there is an internal reduction which leads from u₀ to tₙ and therefore, there is a standard reduction which leads from t₀ to tₙ.
Q.E.D.
As a consequence, we obtain an alternative proof of part of theorem 4.9 :
Corollary 4.43. A λ-term is β-equivalent to a head normal form if and only if its head reduction is finite.
If t is β-equivalent to a head normal form, then, by the Church-Rosser theorem, we have t β u, where u is a head normal form. By the standardization theorem, there exists a head reduced image of t, say t′, such that some internal reduction leads from t′ to u. If t′ has a head redex, then u also has a head redex (an internal reduction does not destroy the head redex) : this is a contradiction. Thus the head reduction of t ends with t′. The converse is obvious.
Q.E.D.
Corollary 4.44. If t ≃β λx u, then there exists a head reduced image of t of the form λx v.
Indeed, by the Church-Rosser theorem, we have t β λx u′. By the standardization theorem, there exists a head reduced image t′ of t, such that some internal reduction leads from t′ to λx u′. Now an internal reduction cannot introduce an occurrence of λ in a head position. Therefore t′ starts with λ.
Q.E.D.
A term t is said to be of order 0 if no term starting with λ is β-equivalent to t. Therefore, corollary 4.44 can be restated this way : a term t is of order 0 if and only if no head reduced image of t starts with λ.
Remark. The standardization theorem is very easy to prove with the hypothesis that the head reduction of t is finite or, more generally, that there exists an upper bound for the lengths of those head reductions of t which lead to a term which can be reduced to t′. Indeed, in such a case, it is enough to consider, among all the reductions which lead from t to t′, any of those starting with a head reduction of maximal length, let us say (t₀ = t), t₁, . . . , tₖ. The proof of the theorem will be completed if we show that all the reductions which lead from tₖ to t′ are internal. This is obvious if tₖ is a head normal form. Now suppose that tₖ = λx₁ . . . λxₘ(λx u)v v₁ . . . vₙ and consider a reduction, leading from tₖ to t′, which is not internal ; it cannot start with a head reduction (otherwise we would have a reduction, leading from t to t′, starting with a head reduction of length > k). Consequently, it starts with an internal reduction, which leads from tₖ = λx₁ . . . λxₘ(λx u)v v₁ . . . vₙ to λx₁ . . . λxₘ(λx u′)v′ v′₁ . . . v′ₙ (with u β u′, v β v′, vᵢ β v′ᵢ). This internal reduction is followed by at least one step of head reduction, which leads to λx₁ . . . λxₘ u′[v′/x]v′₁ . . . v′ₙ. Now this term can be obtained from tₖ by the following path : first one step of head reduction, which gives λx₁ . . . λxₘ u[v/x]v₁ . . . vₙ ; then a β-reduction applied to u, v, v₁, . . . , vₙ, which leads to λx₁ . . . λxₘ u′[v′/x]v′₁ . . . v′ₙ. Since λx₁ . . . λxₘ u′[v′/x]v′₁ . . . v′ₙ β t′, what we have obtained is a reduction which leads from tₖ to t′ and starts with a head reduction : this is impossible.
Q.E.D.
The standardization theorem is usually stated in a (slightly) stronger form. First, we define the rank of a redex ρ in a λterm t , by induction on the length of t . If t = λx u, then ρ is a redex of u ; the rank of ρ in t is the same as in u. If t = (u)v then either ρ = t , or ρ is a redex of u, or ρ is a redex of v ; if ρ = t , then the rank of ρ in t is 0 ; if ρ is in u, then its rank in t is the same as in u ; if ρ is in v, then its rank in t is its rank in v plus the number of redexes in u. Remark. The rank describes the order of redexes in t , from left to right (the position of a redex is given by the position of its leading λ).
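The Remark's left-to-right reading can be made concrete : list the redex occurrences of a term in the textual order of their leading λ, and take the rank of a redex to be its index in that list. The Python sketch below realizes this ordering (it is our own illustration, using a hypothetical tuple encoding of λ-terms, not the book's clause-by-clause recursion) :

```python
# Hypothetical encoding: ('var', x) | ('lam', x, body) | ('app', u, v).
# Redex occurrences are listed in the textual order of their leading lambda;
# the rank of a redex is its index in this list (a head redex gets rank 0).

def ordered_redexes(t, path=()):
    tag = t[0]
    if tag == 'var':
        return []
    if tag == 'lam':
        return ordered_redexes(t[2], path + (0,))
    # t = (u)v : if u starts with lambda, t is a redex whose leading
    # lambda precedes every lambda occurring inside u or v
    here = [path] if t[1][0] == 'lam' else []
    return (here
            + ordered_redexes(t[1], path + (0,))
            + ordered_redexes(t[2], path + (1,)))

def rank(t, redex_path):
    return ordered_redexes(t).index(redex_path)

# (lambda x x)((lambda y y) z) : the whole term has rank 0, the argument redex rank 1
ident = lambda v: ('lam', v, ('var', v))
term = ('app', ident('x'), ('app', ident('y'), ('var', 'z')))
```

With this numbering, a reduction is strongly standard exactly when the ranks of the contracted redexes are non-decreasing.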
Consider a reduction t 0 , . . . , t k and let n i be the rank, in t i , of the redex ρ i which is reduced at this step. The reduction will be called strongly standard if we have n 0 ≤ n 1 ≤ . . . ≤ n k−1 . Remark. A strongly standard reduction is clearly a standard one. Indeed, if there is a head redex, then its rank is 0.
Theorem 4.45 (Standardization theorem, 2nd form). If t β t′, then there is a strongly standard reduction leading from t to t′.
The proof is by induction on the length of t′. By theorem 4.39, we consider a standard reduction from t to t′. This standard reduction begins with a head reduction from t to u, which is followed by an internal reduction from u to t′. By proposition 2.2, we have u = λx₁ . . . λxₖ(ρ)u₁ . . . uₙ where ρ is a redex or a variable ; therefore, we have : t′ = λx₁ . . . λxₖ(ρ′)u′₁ . . . u′ₙ, with ρ β ρ′, u₁ β u′₁, . . . , uₙ β u′ₙ. Then, there are two possibilities :
i) If ρ = (λx v)w is a redex, then ρ′ = (λx v′)w′ (because the reduction from u to t′ is internal) and we have v β v′, w β w′. By induction hypothesis, there are strongly standard reductions leading from v to v′, w to w′, u₁ to u′₁, . . . , uₙ to u′ₙ. By putting these reductions in sequence, we get a strongly standard reduction from u to t′ ; and therefore, also a strongly standard reduction from t to t′.
ii) If ρ is a variable, then ρ = ρ′ and we have u₁ β u′₁, . . . , uₙ β u′ₙ. The end of the proof is the same as in case (i).
Q.E.D.
References for chapter 4 [Bar83], [Bar84], [Cop78], [Hin86], [Mit79], [Pot80]. (The references are in the bibliography at the end of the book). The proof given above of the finite developments theorem was communicated to me by M. Parigot.
Chapter 5
The Böhm theorem
Let αₙ = λz₁ . . . λzₙλz(z)z₁ . . . zₙ for every n ≥ 0 (in particular, α₀ = λz z) ; αₙ is the “applicator” of order n (it applies an n-ary function to its arguments). Propositions 5.1 and 5.8 show that, in some weak sense, applicators behave like variables with respect to normal terms.
Proposition 5.1. Let t be a normal λ-term and x₁, . . . , xₖ variables ; then t[α_{n1}/x₁, . . . , α_{nk}/xₖ] is normalizable provided that n₁, . . . , nₖ ∈ N are large enough.
The proof is by induction on the length of t. If t is a variable, then the result is clear, since αₙ is normal. If t = λy u, then t[α_{n1}/x₁, . . . , α_{nk}/xₖ] = λy u[α_{n1}/x₁, . . . , α_{nk}/xₖ] ; by induction hypothesis, u[α_{n1}/x₁, . . . , α_{nk}/xₖ] is normalizable provided that n₁, . . . , nₖ are large enough, thus so is t[α_{n1}/x₁, . . . , α_{nk}/xₖ]. Now we can assume that t does not start with λ. Since t is normal, by proposition 2.2, we have t = (y)t₁ . . . tₚ, where y is a variable. Now tᵢ is shorter than t, so tᵢ[α_{n1}/x₁, . . . , α_{nk}/xₖ] is normalizable provided that n₁, . . . , nₖ are large enough. Let uᵢ be its normal form. If y ∉ {x₁, . . . , xₖ}, then t[α_{n1}/x₁, . . . , α_{nk}/xₖ] ≃β (y)u₁ . . . uₚ, which is a normal form. If y ∈ {x₁, . . . , xₖ}, say y = x₁, then : t[α_{n1}/x₁, . . . , α_{nk}/xₖ] ≃β (α_{n1})u₁ . . . uₚ ≃β (λx₁ . . . λx_{n1}λx(x)x₁ . . . x_{n1})u₁ . . . uₚ ; if n₁ ≥ p, this term becomes, after β-conversion : λx_{p+1} . . . λx_{n1}λx(x)u₁ . . . uₚx_{p+1} . . . x_{n1}, which is in normal form.
Q.E.D.
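The behaviour of the applicator αₙ can be observed directly by encoding λ-terms as Python closures (a sketch under the obvious curried encoding ; the names are ours, not the book's) :

```python
# alpha_n = (lambda z1) ... (lambda zn)(lambda z) (z)z1...zn, here for n = 2:
# it collects two arguments and then hands them to a binary (curried) function.
alpha2 = lambda z1: lambda z2: lambda z: z(z1)(z2)

# alpha_0 = lambda z. z, the identity
alpha0 = lambda z: z

# a curried binary function to apply
pair = lambda x: lambda y: (x, y)
```

So `alpha2(a)(b)(f)` computes `f(a)(b)` : the applicator of order 2 really does apply a binary function to its two arguments.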
Remark. In proposition 5.1, the condition “ provided that n 1 , . . . , n k are large enough ” is indispensable : if δ = λy(y)y and t = (x)δδ, then t [α0 /x] is not normalizable.
The main result in this chapter is the following theorem, due to C. Böhm :
Theorem 5.2. Let t, t′ be two closed normal λ-terms, which are not βη-equivalent ; then there exist closed λ-terms t₁, . . . , tₖ such that : (t)t₁ . . . tₖ ≃β 0, and (t′)t₁ . . . tₖ ≃β 1.
Recall that, by definition, 0 = λxλy y and 1 = λxλy x.
Corollary 5.3. Let t, t′ be two closed normal λ-terms, which are not βη-equivalent, and v, v′ two arbitrary λ-terms. Then there exist λ-terms t₁, . . . , tₖ such that (t)t₁ . . . tₖ ≃β v and (t′)t₁ . . . tₖ ≃β v′.
Indeed, by theorem 5.2, we have (t)t₁ . . . tₖ ≃β 0 and (t′)t₁ . . . tₖ ≃β 1 ; thus (t)t₁ . . . tₖv′v ≃β v and (t′)t₁ . . . tₖv′v ≃β v′.
Q.E.D.
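The terms 0 = λxλy y and 1 = λxλy x are the Church booleans, and the proof of corollary 5.3 rests on the fact that applying such a boolean to two terms selects one of them. A quick Python illustration (encoding λ-terms as closures ; a sketch, not the book's notation) :

```python
ONE = lambda x: lambda y: x    # 1 = (lambda x)(lambda y) x
ZERO = lambda x: lambda y: y   # 0 = (lambda x)(lambda y) y

# If (t)t1...tk reduces to 0 or 1, appending two extra arguments v', v
# selects v or v' respectively, exactly as in the proof of corollary 5.3:
def select(boolean, v_prime, v):
    return boolean(v_prime)(v)
```

Thus a term separating 0 from 1 separates any pair of terms v, v′.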
The following corollary shows that the βη-equivalence is maximal, among the λ-compatible equivalence relations on Λ which contain the β-equivalence.
Corollary 5.4. Let ≃ be an equivalence relation on Λ, containing ≃β, such that : t ≃ t′ ⇒ (t)u ≃ (t′)u and λx t ≃ λx t′, for every term t, t′, u and every variable x. If there exist two normalizable non-βη-equivalent terms t₀, t′₀ such that t₀ ≃ t′₀, then v ≃ v′ for all terms v, v′.
Indeed, let x₁, . . . , xₖ be the free variables of t₀, t′₀, let t = λx₁ . . . λxₖ t₀ and t′ = λx₁ . . . λxₖ t′₀. Then t ≃ t′ and t is not βη-equivalent to t′. Thus, by corollary 5.3, we have (t)t₁ . . . tₖ ≃β v and (t′)t₁ . . . tₖ ≃β v′ ; therefore v ≃ v′.
Q.E.D.
We will call Böhm transformation any function from Λ into Λ, obtained by composing “elementary” functions of the form : t ↦ (t)u₀ or t ↦ t[u₀/x] (where u₀ and x are a given term and variable). The function t ↦ (t)u₀, from Λ to Λ, will be denoted by B_{u₀}. The function t ↦ t[u₀/x] will be denoted by B_{u₀,x}. Note that every Böhm transformation F is compatible with both β- and βη-equivalence : t ≃β t′ ⇒ F(t) ≃β F(t′) and t ≃βη t′ ⇒ F(t) ≃βη F(t′).
Lemma 5.5. For every Böhm transformation F, there exist terms t₁, . . . , tₖ such that F(t) = (t)t₁ . . . tₖ for every closed term t.
The proof is immediate, by induction on the number of elementary functions of which F is the composite. Indeed, if F(t) is in the indicated form, then so are (F(t))u₀ and (F(t))[u₀/x] : the former is (t)t₁ . . . tₖu₀, and the latter (t)t′₁ . . . t′ₖ where t′ᵢ = tᵢ[u₀/x], since t is closed.
Q.E.D.
Theorem 5.6. Let x₁, . . . , xₖ be distinct variables and t, t′ be two normal non-βη-equivalent terms. Then, for all distinct integers n₁, . . . , nₖ, provided that they are large enough, there exists a Böhm transformation F such that : F(t[α_{n1}/x₁, . . . , α_{nk}/xₖ]) ≃β 0 and F(t′[α_{n1}/x₁, . . . , α_{nk}/xₖ]) ≃β 1.
Theorem 5.2 is an immediate consequence of theorem 5.6 : indeed, if t is a closed term, and F a Böhm transformation, then, by lemma 5.5, we have F(t) = (t)t₁ . . . tₙ, where t₁, . . . , tₙ depend only on F. By applying theorem 5.6, we therefore obtain (t)t₁ . . . tₙ ≃β 0, and (t′)t₁ . . . tₙ ≃β 1. We may suppose that t₁, . . . , tₙ are closed terms (in case they have free variables x₁, . . . , xₚ, simply replace tᵢ by tᵢ[a₁/x₁, . . . , aₚ/xₚ], where a₁, . . . , aₚ are fixed closed terms, for instance 0). We also deduce :
Corollary 5.7. Let ≃ be an equivalence relation on Λ, containing ≃β, such that t ≃ t′ ⇒ (t)u ≃ (t′)u and t[u/x] ≃ t′[u/x] for every term t, t′, u and every variable x. If there exist two normalizable non-βη-equivalent terms t₀, t′₀ such that t₀ ≃ t′₀, then t ≃ t′ for all terms t, t′.
By theorem 5.6 (where we take k = 0), there exists a Böhm transformation F such that F(t₀) ≃β 0, and F(t′₀) ≃β 1. Thus it follows from the assumptions about the relation ≃ that t₀ ≃ t′₀ ⇒ F(t₀) ≃ F(t′₀). Therefore 0 ≃ 1, and hence (0)t′t ≃ (1)t′t, that is t ≃ t′.
Q.E.D.
Proposition 5.8. Let x₁, . . . , xₖ be distinct variables and t, t′ be two normal non-βη-equivalent terms. Then, for all distinct integers n₁, . . . , nₖ, provided that they are large enough, the terms : t[α_{n1}/x₁, . . . , α_{nk}/xₖ] and t′[α_{n1}/x₁, . . . , α_{nk}/xₖ] are not βη-equivalent.
Immediate from theorem 5.6.
Q.E.D.
Corollary 5.9. Let t, t′ be two normalizable terms :
i) if t[αₙ/x] ≃βη t′[αₙ/x] for infinitely many integers n, then t ≃βη t′ ;
ii) if (t)αₙ ≃βη (t′)αₙ for infinitely many integers n, then t ≃βη t′.
Proof of (i) : it is the particular case k = 1 of proposition 5.8.
Proof of (ii) : let x be a variable with no occurrence in t, t′ ; by applying (i) to the terms (t)x and (t′)x, we obtain : (t)x ≃βη (t′)x, thus λx(t)x ≃βη λx(t′)x, and therefore t ≃βη t′.
Q.E.D.
The following result will be used to prove theorem 5.6 :
Lemma 5.10. Let t, u be two λ-terms. If one of the following conditions holds, then there exists a Böhm transformation F such that : F(t) ≃β 0 and F(u) ≃β 1.
i) t = (x)t₁ . . . tₚ, u = (y)u₁ . . . u_q, where x ≠ y or p ≠ q ;
ii) t = λx₁ . . . λxₘλx(x)t₁ . . . tₚ, u = λx₁ . . . λxₙλx(x)u₁ . . . u_q, where m ≠ n or p ≠ q.
Proof of (i). Case 1 : x ≠ y ; let σ₀ = λz₁ . . . λzₚ0, σ₁ = λz₁ . . . λz_q1. By β-reduction, we obtain immediately B_{σ₀,x}B_{σ₁,y}(t) ≃β 0 and B_{σ₀,x}B_{σ₁,y}(u) ≃β 1. Thus B_{σ₀,x}B_{σ₁,y} is the desired Böhm transformation.
Case 2 : x = y and p ≠ q, say p < q ; then we have : B_{α_q,x}(t) = (α_q)t′₁ . . . t′ₚ and B_{α_q,x}(u) = (α_q)u′₁ . . . u′_q (where τ′ = τ[α_q/x] for every term τ). By β-reduction, we obtain : B_{α_q,x}(t) ≃β λz_{p+1} . . . λz_qλz(z)t′₁ . . . t′ₚz_{p+1} . . . z_q and B_{α_q,x}(u) ≃β λz(z)u′₁ . . . u′_q. Then the result follows from case 1 of part (ii), treated below.
Proof of (ii). Case 1 : m ≠ n, say m < n ; take distinct variables z₁, . . . , zₙ, z not occurring in t, u. Let B = B_zB_{zₙ} . . . B_{z₁}. Then, by β-reduction, we have : B(t) ≃β (z_{m+1})t′₁ . . . t′ₚz_{m+2} . . . zₙz, and B(u) ≃β (z)u″₁ . . . u″_q (where τ′ is the term τ[z₁/x₁, . . . , zₘ/xₘ, z_{m+1}/x], and τ″ is the term τ[z₁/x₁, . . . , zₙ/xₙ, z/x]). Since z_{m+1} ≠ z, the result follows from case 1 of part (i) above.
Case 2 : m = n and p ≠ q ; let B = B_xB_{xₘ} . . . B_{x₁}. We have : B(t) = (x)t₁ . . . tₚ and B(u) = (x)u₁ . . . u_q. Since p ≠ q, the result follows from case 2 of (i).
Q.E.D.
The length lg(t) of a term t is inductively defined as follows (actually, it is the length of the expression obtained from t by erasing all the parentheses) : if t is a variable, then lg(t) = 1 ; lg((t)u) = lg(t) + lg(u) ; lg(λx t) = lg(t) + 2.
We now prove theorem 5.6 by induction on lg(t) + lg(t′). Take a variable y ≠ x₁, . . . , xₖ, with no occurrence in t, t′, and let w, w′ be the terms obtained from (t)y, (t′)y by normalization. If w ≃βη w′, then λy w ≃βη λy w′, thus λy(t)y ≃βη λy(t′)y and hence t ≃βη t′, which contradicts the hypothesis. Thus w and w′ are not βη-equivalent.
If both t, t′ start with λ, say t = λx u, t′ = λx′ u′, then : w = u[y/x], w′ = u′[y/x′] and lg(w) + lg(w′) = lg(t) + lg(t′) − 4.
If t starts with λ, say t = λx u, while t′ does not, then either t′ = (v′)u′ or t′ is a variable. Thus, w = u[y/x], w′ = (t′)y and lg(w) + lg(w′) = lg(t) + lg(t′) − 1.
Therefore, in both cases, we can apply the induction hypothesis to w, w′. Thus, given large enough distinct integers n₁, . . . , nₖ, there exists a Böhm transformation F such that : F(w[α_{n1}/x₁, . . . , α_{nk}/xₖ]) ≃β 0 and F(w′[α_{n1}/x₁, . . . , α_{nk}/xₖ]) ≃β 1. Now we have : w[α_{n1}/x₁, . . . , α_{nk}/xₖ] ≃β (t[α_{n1}/x₁, . . . , α_{nk}/xₖ])y and w′[α_{n1}/x₁, . . . , α_{nk}/xₖ] ≃β (t′[α_{n1}/x₁, . . . , α_{nk}/xₖ])y. It follows that the Böhm transformation FB_y has the required properties : FB_y(t[α_{n1}/x₁, . . . , α_{nk}/xₖ]) ≃β 0 and FB_y(t′[α_{n1}/x₁, . . . , α_{nk}/xₖ]) ≃β 1.
Now we may suppose that neither t nor t′ starts with λ (note that this happens at the first step of the induction, since we then have lg(t) = lg(t′) = 1, so t and t′ are variables). Since t, t′ are normal, we have t = (x)t₁ . . . tₚ and t′ = (y)t′₁ . . . t′_q, where x, y are variables, and t₁, . . . , tₚ, t′₁, . . . , t′_q are normal terms.
We now fix distinct integers n₁, . . . , nₖ and distinct variables x₁, . . . , xₖ. We will use the notation τ[] as an abbreviation for τ[α_{n1}/x₁, . . . , α_{nk}/xₖ], for every λ-term τ. Now, there are the following three possibilities :
1. Suppose that x, y ∉ {x₁, . . . , xₖ}. Then we have : t[] = (x)t₁[] . . . tₚ[] and t′[] = (y)t′₁[] . . . t′_q[]. If x ≠ y or p ≠ q, then, by lemma 5.10(i), there exists a Böhm transformation F such that F(t[]) ≃β 0 and F(t′[]) ≃β 1 : this is the expected result. In case x = y and p = q, take any integer n > n₁, . . . , nₖ, p. Then : B_{αₙ,x}(t[]) = (αₙ)t₁[[]] . . . tₚ[[]] and B_{αₙ,x}(t′[]) = (αₙ)t′₁[[]] . . . t′ₚ[[]] (the notation τ[[]] stands for τ[α_{n1}/x₁, . . . , α_{nk}/xₖ, αₙ/x], for every term τ). Since αₙ = λz₁ . . . λzₙλz(z)z₁ . . . zₙ, we therefore obtain, by β-reduction : B_{αₙ,x}(t[]) ≃β λz_{p+1} . . . λzₙλz(z)t₁[[]] . . . tₚ[[]]z_{p+1} . . . zₙ and B_{αₙ,x}(t′[]) ≃β λz_{p+1} . . . λzₙλz(z)t′₁[[]] . . . t′ₚ[[]]z_{p+1} . . . zₙ. Note that the terms tᵢ[[]] and t′ᵢ[[]] contain none of the variables z, z₁, . . . , zₙ. We have : B_zB_{zₙ} . . . B_{z_{p+1}}B_{αₙ,x}(t[]) ≃β (z)t₁[[]] . . . tₚ[[]]z_{p+1} . . . zₙ and B_zB_{zₙ} . . . B_{z_{p+1}}B_{αₙ,x}(t′[]) ≃β (z)t′₁[[]] . . . t′ₚ[[]]z_{p+1} . . . zₙ. Now, by hypothesis, t = (x)t₁ . . . tₚ and t′ = (x)t′₁ . . . t′ₚ, and t and t′ are not βη-equivalent. Thus, for some i (1 ≤ i ≤ p), tᵢ and t′ᵢ are not βη-equivalent. Let πᵢ = λx₁ . . . λxₙxᵢ and B = B_zB_{zₙ} . . . B_{z_{p+1}}B_{αₙ,x}. Since the variable z occurs neither in tᵢ[[]] nor in t′ᵢ[[]], we have : B_{πᵢ,z}B(t[]) ≃β tᵢ[[]] ; B_{πᵢ,z}B(t′[]) ≃β t′ᵢ[[]]. Now lg(tᵢ) + lg(t′ᵢ) < lg(t) + lg(t′). Thus, by induction hypothesis, provided that n₁, . . . , nₖ, n are large enough distinct integers, there exists a Böhm transformation, say F, such that F(tᵢ[[]]) ≃β 0 and F(t′ᵢ[[]]) ≃β 1. Therefore, F B_{πᵢ,z}B(t[]) ≃β 0 and F B_{πᵢ,z}B(t′[]) ≃β 1, which is the expected result.
2. Now suppose that x ∈ {x₁, . . . , xₖ}, for instance x = x₁, while y ∉ {x₁, . . . , xₖ}. Then we have t[] = (α_{n1})t₁[] . . . tₚ[] and t′[] = (y)t′₁[] . . . t′_q[]. For every n₁ ≥ p, we have, by β-reduction : t[] ≃β λz_{p+1} . . . λz_{n1}λz(z)t₁[] . . . tₚ[]z_{p+1} . . . z_{n1}. Therefore, if we let B = B_zB_{z_{n1}} . . . B_{z_{p+1}}, we have : B(t[]) ≃β (z)t₁[] . . . tₚ[]z_{p+1} . . . z_{n1} and B(t′[]) = (y)t′₁[] . . . t′_q[]z_{p+1} . . . z_{n1}z. Since y and z are distinct variables, lemma 5.10(i) provides a Böhm transformation F such that FB(t[]) ≃β 0 and FB(t′[]) ≃β 1, which is the expected result.
3. Finally, suppose that x, y ∈ {x₁, . . . , xₖ}. If x ≠ y, say, for instance, x = x₁, y = x₂, then : t[] = (α_{n1})t₁[] . . . tₚ[] and t′[] = (α_{n2})t′₁[] . . . t′_q[]. For all n₁ ≥ p and n₂ ≥ q, we have, by β-reduction : t[] ≃β λz_{p+1} . . . λz_{n1}λz(z)t₁[] . . . tₚ[]z_{p+1} . . . z_{n1} and t′[] ≃β λz_{q+1} . . . λz_{n2}λz(z)t′₁[] . . . t′_q[]z_{q+1} . . . z_{n2}. Since n₁ ≠ n₂ (by hypothesis), the result follows from lemma 5.10(ii).
If x = y, say, for instance, x = y = x₁, then : t[] = (α_{n1})t₁[] . . . tₚ[] and t′[] = (α_{n1})t′₁[] . . . t′_q[]. For every n₁ ≥ p, q, we have, by β-reduction : t[] ≃β λz_{p+1} . . . λz_{n1}λz(z)t₁[] . . . tₚ[]z_{p+1} . . . z_{n1} and t′[] ≃β λz_{q+1} . . . λz_{n1}λz(z)t′₁[] . . . t′_q[]z_{q+1} . . . z_{n1}. If p ≠ q, then the result follows from lemma 5.10(ii) (n₁ − p ≠ n₁ − q). If p = q, then : t[] ≃β λz_{p+1} . . . λz_{n1}λz(z)t₁[] . . . tₚ[]z_{p+1} . . . z_{n1} and t′[] ≃β λz_{p+1} . . . λz_{n1}λz(z)t′₁[] . . . t′ₚ[]z_{p+1} . . . z_{n1}. Now, by hypothesis, t = (x)t₁ . . . tₚ and t′ = (x)t′₁ . . . t′ₚ, and t and t′ are not βη-equivalent. Thus, for some i (1 ≤ i ≤ p), tᵢ and t′ᵢ are not βη-equivalent.
Let πᵢ = λx₁ . . . λx_{n1}xᵢ and B = B_zB_{z_{n1}} . . . B_{z_{p+1}}. Since the variables z, z_j occur neither in tᵢ[] nor in t′ᵢ[], we have : B_{πᵢ,z}B(t[]) ≃β tᵢ[] ; B_{πᵢ,z}B(t′[]) ≃β t′ᵢ[]. Now lg(tᵢ) + lg(t′ᵢ) < lg(t) + lg(t′). Thus, by induction hypothesis, provided that n₁, . . . , nₖ are large enough distinct integers, there exists a Böhm transformation, say F, such that F(tᵢ[]) ≃β 0 and F(t′ᵢ[]) ≃β 1. Therefore, F B_{πᵢ,z}B(t[]) ≃β 0 and F B_{πᵢ,z}B(t′[]) ≃β 1. This completes the proof.
Q.E.D.
References for chapter 5 [Bar84], [Boh68]. (The references are in the bibliography at the end of the book).
Chapter 6
Combinatory logic

1. Combinatory algebras

In this chapter, we shall deal with theories in the first order predicate calculus with equality, and we assume that the reader has some familiarity with elementary model theory.
We consider a language L 0 consisting of one binary function symbol Ap (for “ application ”). Given terms f , t , u, v, . . . , the term Ap( f , t ) will be written ( f )t or f t ; the terms (( f )t )u, ((( f )t )u)v, . . . will be respectively written ( f )t u, ( f )t uv, . . . or even f t u, f t uv, . . .
A model for this language (that is a nonempty set A, equipped with a binary function) is called an applicative structure.
Let L be the language obtained by adding to L 0 two constant symbols K , S. We shall use the following notations :
t ≡ u will mean that t and u are identical terms of L ;
M ⊨ F will mean that the closed formula F is satisfied in the model M (of L ) ;
A ⊢ F will mean that F is a consequence of the set A of formulas, in other words, that every model of A satisfies F .
Given terms t , u of L , and a variable x, t [u/x] denotes the term obtained from t by replacing every occurrence of x with u.
Consider the following axioms :
(C 0 )
(K )x y = x ; (S)x y z = ((x)z)(y)z.
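The computational content of C 0 can be illustrated with a small interpreter. The sketch below is our own illustration (not from the text): terms built from K , S and variables are encoded as nested tuples, and weak head reduction checks the two axioms, as well as the derived equation (I )x = x for I = (S)K K , which is established just below.

```python
# Hypothetical encoding, for illustration only: a term is an atom ('K', 'S',
# or a variable name) or a pair ('app', function, argument).

def app(f, *args):
    """Left-associated application: app(f, a, b) encodes ((f)a)b."""
    for a in args:
        f = ('app', f, a)
    return f

def reduce_weak(t):
    """Contract head K/S redexes until none remains (the axioms C0)."""
    while True:
        spine, h = [], t              # unwind t as  h a1 ... an
        while isinstance(h, tuple):
            spine.append(h[2])
            h = h[1]
        spine.reverse()
        if h == 'K' and len(spine) >= 2:      # (K) x y  =  x
            t = app(spine[0], *spine[2:])
        elif h == 'S' and len(spine) >= 3:    # (S) x y z  =  ((x)z)(y)z
            x, y, z = spine[0], spine[1], spine[2]
            t = app(app(x, z), app(y, z), *spine[3:])
        else:
            return t

# The two axioms of C0, on free variables a, b, c:
assert reduce_weak(app('K', 'a', 'b')) == 'a'
assert reduce_weak(app('S', 'a', 'b', 'c')) == app(app('a', 'c'), app('b', 'c'))
# The term I = (S)K K behaves as an identity: (I)x = (Kx)(Kx) = x.
I = app('S', 'K', 'K')
assert reduce_weak(app(I, 'a')) == 'a'
```

This only performs head reduction, which is all the axioms of C 0 require.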
Actually, we consider the closure of these formulas, namely, the axioms :
∀x∀y{(K )x y = x} ; ∀x∀y∀z{(S)x y z = ((x)z)(y)z}.
The term (S)K K is denoted by I . Thus C 0 ⊢ (I )x = x.
A model of this system of axioms is called a combinatory algebra. The combinatory algebra consisting of one single element is said to be trivial.
For every term t of L , and every variable x, we now define a term of L , denoted by λx t , by induction on the length of t :
• if x does not occur in t , then λx t ≡ (K )t ;
• λx x ≡ (S)K K ≡ I ;
• if t ≡ (u)v and x occurs in t , then λx t ≡ ((S)λx u)λx v.
Proposition 6.1. For every term t of L , the term λx t does not contain the variable x, and we have C 0 ⊢ ∀x{(λx t )x = t }. It follows that C 0 ⊢ (λx t )u = t [u/x], for all terms t , u of L .
It is obvious that x does not occur in λx t . The second part of the statement is proved by induction on the length of t :
If x does not occur in t , then (λx t )x ≡ (K )t x, and C 0 ⊢ (K )t x = t .
If t ≡ x, then (λx t )x ≡ (I )x, and C 0 ⊢ (I )x = x.
If t ≡ (u)v and x occurs in t , then (λx t )x ≡ (((S)λx u)λx v)x. By the second axiom of C 0 , we have C 0 ⊢ (λx t )x = ((λx u)x)(λx v)x. Now, by induction hypothesis : C 0 ⊢ (λx u)x = u and (λx v)x = v. Therefore, C 0 ⊢ (λx t )x = (u)v = t .
Q.E.D.
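The three-case definition of λx t is easy to implement. The sketch below is our own illustration, reusing a tuple encoding of terms (atoms or ('app', u, v) pairs); it builds λx t and then verifies the equation (λx t )u = t [u/x] of proposition 6.1 by normalizing under the axioms C 0 .

```python
def app(f, *args):
    """Left-associated application on tuple-encoded terms."""
    for a in args:
        f = ('app', f, a)
    return f

def occurs(x, t):
    return t == x or (isinstance(t, tuple) and (occurs(x, t[1]) or occurs(x, t[2])))

def lam(x, t):
    """The term λx t, by induction on the length of t (three cases above)."""
    if not occurs(x, t):
        return app('K', t)                       # λx t = (K)t
    if t == x:
        return app('S', 'K', 'K')                # λx x = (S)K K = I
    return app('S', lam(x, t[1]), lam(x, t[2]))  # λx (u)v = ((S)λx u)λx v

def normalize(t):
    """Full normal form under C0 (assumes t is normalizing)."""
    while True:                                  # head reduction first
        spine, h = [], t
        while isinstance(h, tuple):
            spine.append(h[2]); h = h[1]
        spine.reverse()
        if h == 'K' and len(spine) >= 2:
            t = app(spine[0], *spine[2:])
        elif h == 'S' and len(spine) >= 3:
            t = app(app(spine[0], spine[2]), app(spine[1], spine[2]), *spine[3:])
        else:
            break
    if isinstance(t, tuple):                     # then normalize the arguments
        return ('app', normalize(t[1]), normalize(t[2]))
    return t

# (λx (x)b)c reduces to (c)b, i.e. ((x)b)[c/x]:
assert normalize(app(lam('x', app('x', 'b')), 'c')) == app('c', 'b')
# λx t does not contain x when x does not occur in t:
assert lam('x', 'y') == app('K', 'y')
```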
It follows immediately that : C 0 ⊢ (λx 1 . . . λx k t )x 1 . . . x k = t for all variables x 1 , . . . , x k .
Proposition 6.2. All nontrivial combinatory algebras are infinite.
Let A be a finite combinatory algebra, and n its cardinality. For 0 ≤ i ≤ n, let a i ∈ A be the interpretation in A of the term λx 0 λx 1 . . . λx n x i . Then there exist two distinct integers i , j ≤ n such that a i = a j . Suppose, for example, that i = 0 and j = 1. We therefore have : a 0 b 0 b 1 . . . b n = a 1 b 0 b 1 . . . b n , for all b 0 , b 1 , . . . , b n ∈ A. Thus b 0 = b 1 for all b 0 , b 1 ∈ A, which means that A is trivial.
Q.E.D.
An applicative structure A is said to be combinatorially complete if, for every term t of L 0 , with variables among x 1 , . . . , x k , and parameters in A, there exists an element f ∈ A such that A ⊨ ( f )x 1 . . . x k = t , that is to say : ( f )a 1 . . . a k = t [a 1 /x 1 , . . . , a k /x k ] for all a 1 , . . . , a k ∈ A. This property is therefore expressed by the following axiom scheme :
(CC )
∃ f ∀x 1 . . . ∀x n {( f )x 1 . . . x n = t }
where t is an arbitrary term of L 0 , and n ≥ 0. Proposition 6.3. An applicative structure A is combinatorially complete if and only if A can be given a structure of combinatory algebra. In other words, A is combinatorially complete if and only if the constant symbols K and S may be interpreted in A in such a way as to satisfy C 0 .
Indeed, if A is a combinatory algebra, and t is any term with variables among x 1 , . . . , x n , then it suffices to take f = λx 1 . . . λx n t . Conversely, if A is combinatorially complete, then there exist k, s ∈ A satisfying C 0 : it is enough to apply CC , first with n = 2 and t = x 1 , then with n = 3 and t = ((x 1 )x 3 )(x 2 )x 3 . Q.E.D.
The axiom scheme CC is thus equivalent to the conjunction of two particular cases :
(CC′ ) ∃k∀x∀y{(k)x y = x} ; ∃s∀x∀y∀z{(s)x y z = ((x)z)(y)z}
Let E denote the term λxλy(x)y. By proposition 6.1, we therefore have : C 0 ⊢ (E )x y = (x)y. By definition of λ, we have λy(x)y ≡ ((S)(K )x)I , and hence : E ≡ λx((S)(K )x)I . Thus, by proposition 6.1 : C 0 ⊢ (E )x = ((S)(K )x)I .
Let t be a term containing no occurrence of the variable x. Then, by definition of λ : λx(t )x ≡ ((S)λx t )I ≡ ((S)(K )t )I . We have thus proved :
Proposition 6.4. Let t be a term and x a variable not occurring in t ; then : C 0 ⊢ λx(t )x = (E )t = ((S)(K )t )I .
We now consider the axioms :
(C 1 )
K = λxλy(K )x y ; S = λxλyλz(S)x y z.
According to proposition 6.1, the following formulas are consequences of the axioms C 0 + C 1 : (K )x = λy(K )x y ; (S)x y = λz(S)x y z ; thus, by proposition 6.4, so are the formulas :
(C′1 )
(E )(K )x = (K )x ; (E )(S)x y = (S)x y.
Proposition 6.5. The following formulas are consequences of C 0 + C′1 :
i) λx t = (E )λx t = λx(λx t )x for every term t of L ;
ii) (E )E = E ; (E )(E )x = (E )x.
i) The second identity follows from proposition 6.4, since x does not occur in λx t . On the other hand, by definition of λx t , we have either λx t ≡ (K )t , or λx t ≡ (S)K K , or λx t ≡ (S)uv for suitable terms u, v. It follows immediately that C′1 ⊢ (E )λx t = λx t .
ii) We have E = λxλy(x)y, and hence C 0 + C′1 ⊢ (E )E = E , by (i). Now, by proposition 6.4, C 0 ⊢ (E )x = λy(x)y, and therefore, by (i) again : C 0 + C′1 ⊢ (E )(E )x = (E )x.
Q.E.D.
2. Extensionality axioms

The following axiom scheme :
(WEXT)
∀x(t = u) → λx t = λx u
(where t , u are arbitrary terms of L , allowed to contain variables) is called the weak extensionality scheme. As a consequence of this axiom, we obtain (by induction on n) : ∀x 1 . . . ∀x n {t = u} → λx 1 . . . λx n t = λx 1 . . . λx n u. The weak extensionality axiom is the following formula : (Wext)
∀y∀z{∀x[(y)x = (z)x] → (E )y = (E )z}.
Proposition 6.6. WEXT and Wext are equivalent modulo C 0 + C′1 .
Indeed, let A be a model of C 0 + C′1 + WEXT, and b, c ∈ A such that : (b)x = (c)x for every x ∈ A. Applying WEXT with t ≡ (b)x and u ≡ (c)x, we obtain λx(b)x = λx(c)x. Now both λx(b)x = (E )b and λx(c)x = (E )c hold in A, since A ⊨ C 0 (proposition 6.4). Thus (E )b = (E )c.
Conversely, let A be a model of C 0 + C′1 + Wext, and t , u two terms with parameters in A, where x is the only variable ; assume that A ⊨ ∀x(t = u). Since A ⊨ C 0 , we have A ⊨ (λx t )x = t , (λx u)x = u (proposition 6.1). Thus A ⊨ ∀x{(λx t )x = (λx u)x}. By Wext, we obtain A ⊨ (E )λx t = (E )λx u, and hence A ⊨ λx t = λx u (by proposition 6.5).
Q.E.D.
We shall denote by C L (combinatory logic) the system of axioms : C 0 + C 1 + Wext (or, equivalently, C 0 + C 1 + WEXT).
Now we consider the axioms :
(C′1 )
(E )K = K ; (E )(K )x = (K )x ; (E )S = S ; (E )(S)x = (S)x ; (E )(S)x y = (S)x y.
Proposition 6.7. C L is equivalent to C 0 + C′1 + Wext.
The following formulas (in fact, their closures) are obviously consequences of C 0 + C 1 : K = λxλy(K )x y ; (K )x = λy(K )x y ; S = λxλyλz(S)x y z ; (S)x = λyλz(S)x y z ; (S)x y = λz(S)x y z.
In view of proposition 6.5, we deduce immediately that C′1 is a consequence of C 0 + C 1 , and therefore of C L.
Conversely, we have C′1 ⊢ (S)x y = (E )(S)x y, and hence : C 0 + C′1 ⊢ (S)x y = λz(S)x y z. Now we also have : C 0 ⊢ (λyλz(S)x y z)y = λz(S)x y z.
Thus C 0 + C′1 ⊢ (S)x y = (λyλz(S)x y z)y. By Wext, we obtain first : (E )(S)x = (E )λyλz(S)x y z, then (S)x = λyλz(S)x y z (by C′1 and proposition 6.5) ; thus (S)x = (λxλyλz(S)x y z)x. By applying Wext again, we conclude that : (E )S = (E )λxλyλz(S)x y z, and hence S = λxλyλz(S)x y z (by C′1 and proposition 6.5 again). The same kind of proof gives the equation K = λxλy(K )x y.
Q.E.D.
The extensionality axiom is the formula : (Ext)
∀y∀z{∀x[(y)x = (z)x] → y = z}.
As a consequence of this axiom, we obtain (by induction on n) :
(Ext n )
∀y∀z{∀x 1 . . . ∀x n [(y)x 1 . . . x n = (z)x 1 . . . x n ] → y = z}.
We now prove that, modulo C 0 , the extensionality axiom is equivalent to : Wext + (E = I ). Indeed, it is clear that Wext + (E = I ) + C 0 ⊢ Ext (since C 0 + (E = I ) ⊢ (E )x = x). Conversely, we have C 0 ⊢ (E )x y = (I )x y = (x)y. With Ext 2 , we obtain C 0 + Ext ⊢ E = I .
We shall denote by EC L (extensional combinatory logic) the system of axioms C 0 + Ext. Note that C 0 + Ext ⊢ C 1 , and thus EC L ⊢ C L ; indeed, by proposition 6.1, for every term T , we have : C 0 ⊢ (T )x 1 . . . x n = (λx 1 . . . λx n (T )x 1 . . . x n )x 1 . . . x n ; then, by Ext n , we can deduce T = λx 1 . . . λx n (T )x 1 . . . x n .
Scott-Meyer's axioms

Let A be an applicative structure, with a distinguished element e, satisfying the following axioms, known as Scott-Meyer's axioms :
i) Combinatorial completeness : ∃k∀x∀y[(k)x y = x] ; ∃s∀x∀y∀z[(s)x y z = ((x)z)(y)z] ;
ii) ∀x∀y[(e)x y = (x)y] ;
iii) Weak extensionality : ∀y∀z{∀x[(y)x = (z)x] → (e)y = (e)z}.
Theorem 6.8. Let A be an applicative structure satisfying the Scott-Meyer axioms. Then there is a unique way of assigning values in A to the symbols K , S of L so that A becomes a model of C L satisfying ∀x[(E )x = (e)x]. Moreover, in that model, we have E = (e)e.
Notice that E is a term of L , not a symbol.
Unicity : suppose that values have been assigned to K , S so that C L is satisfied. We have (E )x = (e)x, thus (E )E = (e)E (take x = E ), and hence E = (e)E (we have seen that C L ⊢ E = (E )E ). Now the above weak extensionality axiom gives : ∀x[(E )x = (e)x] → (e)E = (e)e. Therefore, E = (e)e.
Let K 1 , S 1 and K 2 , S 2 be two possible interpretations of K , S in A such that the required conditions hold, and let E 1 , E 2 be the corresponding interpretations of E (actually, we have seen that E 1 = E 2 = (e)e) ; thus (E 1 )x = (E 2 )x = (e)x and (S 1 )x y z = (S 2 )x y z = ((x)z)(y)z ; by weak extensionality, it follows that : (e)(S 1 )x y = (e)(S 2 )x y, and we therefore obtain : (E 1 )(S 1 )x y = (E 2 )(S 2 )x y. Since C L holds, the axioms of C′1 are satisfied and we have : (E i )(S i )x y = (S i )x y (i = 1, 2) ; therefore (S 1 )x y = (S 2 )x y. By weak extensionality again, it follows that (e)(S 1 )x = (e)(S 2 )x, that is : (E 1 )(S 1 )x = (E 2 )(S 2 )x, and hence (S 1 )x = (S 2 )x (by C′1 ). Using the weak extensionality once more, we obtain (e)S 1 = (e)S 2 , that is to say (E 1 )S 1 = (E 2 )S 2 , and hence S 1 = S 2 (by C′1 ). The proof of K 1 = K 2 is similar.
Existence : take k, s ∈ A such that (k)x y = x and (s)x y z = ((x)z)(y)z for all x, y, z ∈ A ; this is possible according to the first two axioms of Scott-Meyer. For every term t with parameters in A (and containing variables), we now define, inductively, a term λ′x t :
λ′x t = (e)(k)t if x does not occur in t ;
λ′x x = (e)i with i = (s)kk (thus (i )x = x for every x ∈ A) ;
λ′x t = (e)((s)λ′x u)λ′x v if t = (u)v and x occurs in t .
Notice that (e)x y = (x)y, and hence, by weak extensionality (Scott-Meyer's axioms), (e)(e)x = (e)x. It follows immediately that (e)λ′x t = λ′x t for every term t . Moreover, we have (λ′x t )x = t (by induction on t , as in proposition 6.1).
Let K = λ′xλ′y x ; S = λ′xλ′yλ′z((x)z)(y)z.
We do have (K )x y = x, (S)x y z = ((x)z)(y)z ; moreover, since (S)x y = λ′z . . ., we also have (S)x y = (e)(S)x y ; similarly, (e)(S)x = (S)x and (e)S = S. On the other hand, since (S)x y z = (s)x y z, we obtain (e)(s)x y = (e)(S)x y = (S)x y by weak extensionality ; similarly, (e)(k)x = (K )x. Therefore, we may restate the definition of λ′x t this way :
λ′x t = (K )t if x does not occur in t ;
λ′x x = I with I = (S)K K (indeed, we have (I )x = (i )x, thus (e)I = (e)i ; but (e)I = I by definition of I ) ;
λ′x t = ((S)λ′x u)λ′x v if t = (u)v and x occurs in t .
We see that this definition is the same as that of the term λx t ; thus λ′x t = λx t . Now let E = λxλy(x)y ; thus (E )x = λy . . ., and therefore (e)(E )x = (E )x ; now (E )x y = (x)y, and hence, by weak extensionality, (e)(E )x = (e)x, that is to say (E )x = (e)x.
This proves that the axiom Wext holds, as well as C 0 . Besides, we have : (E )λx t = λx t for every term t (since (e)λ′x t = λ′x t ). Since K = λxλy x and S = λxλyλz((x)z)(y)z, we may deduce, using C 0 , that : (K )x = λy x ; (S)x = λyλz((x)z)(y)z ; (S)x y = λz((x)z)(y)z. Thus (E )K = K , (E )(K )x = (K )x, (E )S = S, (E )(S)x = (S)x and (E )(S)x y = (S)x y. Thus the axioms C′1 hold, and finally our model satisfies C 0 + C′1 + Wext, that is to say C L.
Q.E.D.
3. Curry's equations

Let A be a model of C 0 + C′1 . We wish to construct an embedding of A in a model of Wext. Let k, s, e denote the interpretations in A of the symbols K , S and the closed term E of L . Define B = (e)A = {(e)a ; a ∈ A} = {a ∈ A ; (e)a = a} (indeed, (e)(e)a = (e)a). We shall define an applicative structure over B : its binary operation will be denoted by [a]b, and defined by [a]b = (s)ab (note that we do have (s)ab ∈ B since (e)(s)ab = (s)ab, by C′1 ). We define a one-to-one function j : A → B by taking j (a) = (k)a (let us note that (k)a ∈ B since (e)(k)a = (k)a, by C′1 ) : indeed, if (k)a = (k)b, then (k)ax = (k)bx for arbitrary x ∈ A, which implies a = b. Let A′ ⊂ B be the range of this function.
We want j to be an isomorphism of applicative structures from A into B . This happens if and only if : [(k)a](k)b = (k)(a)b for all a, b ∈ A. In other words, j is an isomorphism if and only if A satisfies the following axiom :
(C 2 )
((S)(K )x)(K )y = (K )(x)y ;
this will be assumed from now on.
Notice that : B is a proper extension of A′ (that is B ⊃ A′ and B ≠ A′ ) if and only if A is nontrivial (that is A has at least two elements). In that case, i ∈ B \ A′ (where i = (s)kk is the interpretation of I ). Indeed, if i ∈ A′ , then i = (k)a, thus (i )b = (k)ab, that is to say b = a, for every b ∈ A, and A is trivial. Conversely, if A contains only one element, then, obviously, A = B = A′ .
The interpretations of K , S in B are the same as in A′ , namely : (k)k and (k)s. B satisfies C 0 if and only if :
i) [[(k)k](e)a](e)b = (e)a and
ii) [[[(k)s](e)a](e)b](e)c = [[(e)a](e)c][(e)b](e)c for all a, b, c ∈ A.
(i) can be written ((s)((s)(k)k)(e)a)(e)b = (e)a. Now consider the axiom :
((S)((S)(K )K )x)y = (E )x.
It implies (i) since, by proposition 6.5, we have C 0 + C′1 ⊢ (E )(E )x = (E )x. C 3 is equivalent, modulo C 0 , to :
(C′3 )
((S)((S)(K )K )x)y = λz(x)z.
(ii) can be written : ((s)((s)((s)(k)s)(e)a)(e)b)(e)c = ((s)((s)(e)a)(e)c)((s)(e)b)(e)c. Now consider the axiom : (C 4 )
((S)((S)((S)(K )S)x)y)z = ((S)((S)x)z)((S)y)z.
At this point, we have proved the first part of :
Lemma 6.9. Let A be a combinatory algebra satisfying C 0 + C′1 + C 2 + C 3 + C 4 . Then B is an extension of A′ (a combinatory algebra, isomorphic with A) which satisfies C 0 . Moreover, if a ∈ A, then [(k)a]i = (e)a.
Indeed, we have [(k)a]i = ((s)(k)a)i = (e)a (by proposition 6.4).
Q.E.D.
Let t , u be two terms of L , and {x 1 , . . . , x n } the set of variables occurring in t or u. The formula t = u (in fact, its closure ∀x 1 . . . ∀x n {t = u}) is called an equation ; this equation is said to be closed if both t and u contain no variables (n = 0) ; the equation λx 1 . . . λx n t = λx 1 . . . λx n u will be called the λ-closure of the equation t = u.
For each axiom C i (2 ≤ i ≤ 4), let C L i denote its λ-closure, that is to say :
(C L 2 ) λxλy((S)(K )x)(K )y = λxλy(K )(x)y
(C L 3 ) λxλy((S)((S)(K )K )x)y = λxλyλz(x)z
(C L 4 ) λxλyλz((S)((S)((S)(K )S)x)y)z = λxλyλz((S)((S)x)z)((S)y)z.
Proposition 6.10. Let A be a combinatory algebra, and Q a set of closed equations such that C 0 + Q ⊢ C′1 . If A ⊨ C 0 + Q + C L 2 + C L 3 + C L 4 , then there exist an extension A 1 of A satisfying the same axioms, and an element ξ 1 ∈ A 1 such that, for all a, b ∈ A : (a)ξ 1 = (b)ξ 1 ⇒ (e)a = (e)b.
Indeed, C 0 + C L i ⊢ C i (proposition 6.1), thus A ⊨ C 0 + C′1 + C 2 + C 3 + C 4 . By lemma 6.9, there exists an extension B of A′ satisfying C 0 . Since A ⊨ C L i and A ⊨ Q, and C L i and Q are closed equations, we have B ⊨ C L i and B ⊨ Q. Now j is an isomorphism from A onto A′ , and hence there exist an extension A 1 of A and an isomorphism J from A 1 onto B extending j . Let ξ 1 = J −1 (i ) ; for every a, b ∈ A such that (a)ξ 1 = (b)ξ 1 , we have [J a]J ξ 1 = [J b]J ξ 1 , that is [(k)a]i = [(k)b]i , and therefore (e)a = (e)b, by lemma 6.9.
Q.E.D.
Theorem 6.11. Let A be a combinatory algebra, and Q a set of closed equations such that C 0 + Q ⊢ C′1 . Then there exists an extension A∗ of A satisfying C 0 + Q + Wext if and only if A ⊨ C 0 + Q + C L 2 + C L 3 + C L 4 .
First, notice that the systems of axioms C 0 + Q + Wext and C 0 + Q + WEXT are equivalent (since C 0 + Q ⊢ C′1 , and C 0 + C′1 ⊢ Wext ⇔ WEXT). We shall denote by Q this system of axioms.
The condition is necessary : it suffices to prove that Q ⊢ C L i (2 ≤ i ≤ 4). By definition of the axiom scheme WEXT, we have WEXT ⊢ C i ⇒ C L i , thus it is enough to prove : Q ⊢ C i .
We have : C 0 ⊢ (((S)(K )x)(K )y)z = ((K x)z)(K y)z = (x)y ; thus C 0 ⊢ (((S)(K )x)(K )y)z = ((K )(x)y)z. By weak extensionality, it follows that Q ⊢ (E )((S)(K )x)(K )y = (E )(K )(x)y, and then, by C′1 , that : Q ⊢ ((S)(K )x)(K )y = (K )(x)y ; therefore Q ⊢ C 2 .
The equation (C 3 ) is written ((S)((S)(K )K )x)y = (E )x. Now we have C 0 ⊢ (((S)((S)(K )K )x)y)z = ((((S)(K )K )x)z)(y)z = (((K )K z)(x)z)(y)z = ((K )(x)z)(y)z = (x)z. Hence C 0 + Wext ⊢ (E )((S)((S)(K )K )x)y = (E )x. Thus C 0 + C′1 + Wext ⊢ ((S)((S)(K )K )x)y = (E )x, that is to say Q ⊢ C 3 .
The axiom (C 4 ) is written ((S)((S)((S)(K )S)x)y)z = ((S)((S)x)z)((S)y)z. Now we have C 0 ⊢ (((S)((S)((S)(K )S)x)y)z)a = {[((S)((S)(K )S)x)y]a}(z)a = {[(((S)(K )S)x)a](y)a}(z)a = {[(((K )S)a)(x)a](y)a}(z)a = {[(S)(x)a](y)a}(z)a = (((x)a)(z)a)((y)a)(z)a. On the other hand : C 0 ⊢ (((S)((S)x)z)((S)y)z)a = (((S)x)z a)(((S)y)z a) = (((x)a)(z)a)((y)a)(z)a. Therefore, C 0 ⊢ (((S)((S)((S)(K )S)x)y)z)a = (((S)((S)x)z)((S)y)z)a. Thus C 0 + Wext ⊢ (E )((S)((S)((S)(K )S)x)y)z = (E )((S)((S)x)z)((S)y)z. It follows that C 0 + C′1 + Wext ⊢ ((S)((S)((S)(K )S)x)y)z = ((S)((S)x)z)((S)y)z ; that is to say Q ⊢ C 4 .
The condition is sufficient : let A be a model of C 0 + Q + C L 2 + C L 3 + C L 4 . By proposition 6.10, we may define an increasing sequence : A = A 0 ⊂ A 1 ⊂ . . . ⊂ A n ⊂ . . .
of combinatory algebras which satisfy the same axioms, and such that, for each n, there exists ξ n+1 ∈ A n+1 such that : if a, b ∈ A n and (a)ξ n+1 = (b)ξ n+1 , then (e)a = (e)b.
Let A∗ = ∪ n A n . Then A∗ ⊨ C 0 + Q + C L i (2 ≤ i ≤ 4) as well as the weak extensionality axiom : if a, b ∈ A∗ and (a)x = (b)x for every x ∈ A∗ , then we have a, b ∈ A n for some n ; hence (a)ξ n+1 = (b)ξ n+1 and therefore (e)a = (e)b.
Q.E.D.
Intuitively, the extension of A constructed here is obtained by adding infinitely many “ variables ”, which are the ξ n 's.
Now we consider the system of axioms :
(C L = )
C 0 +C 1 +C L 2 +C L 3 +C L 4 .
Theorem 6.12. Let A be a combinatory algebra. Then there exists an extension of A satisfying C L if and only if A satisfies C L = .
It suffices to apply theorem 6.11, where Q is taken as the system of axioms C 1 .
Q.E.D.
Corollary 6.13. The universal consequences of C L are those of C L = .
Indeed, let A be a model of C L = , and F a universal formula which is a consequence of C L (see chapter 9). We need to prove that A ⊨ F . By theorem 6.12, A can be embedded in some model B of C L. Thus B ⊨ F and, since F is universal and A is a submodel of B , we deduce that A ⊨ F . Conversely, it follows from theorem 6.12 that every model of C L is a model of C L = .
Q.E.D.
We now consider the axiom : (C L 5 )
E = I
that is to say (by definition of E ) : λx((S)(K )x)I = I .
Clearly, C 0 + C L 5 ⊢ C′1 . Moreover, C 3 is obviously equivalent, modulo C 0 + C L 5 , to :
(C″3 ) ((S)((S)(K )K )x)y = x.
Let C L″3 denote the λ-closure of C″3 , that is to say :
(C L″3 ) λxλy((S)((S)(K )K )x)y = λxλy x.
We also define the following system of axioms EC L = :
(EC L = ) C 0 + C L 2 + C L″3 + C L 4 + C L 5 .
Theorem 6.14. Let A be a combinatory algebra. Then there exists an extension of A satisfying EC L if and only if A satisfies EC L = . This follows immediately from theorem 6.11, where Q is taken as the axiom E = I. Q.E.D.
Corollary 6.15. The universal consequences of EC L are those of EC L = .
Let A be a model of L . The diagram of A, denoted by D A , is defined as the set of all formulas of the form t = u or t ≠ u which hold in A, t and u being arbitrary closed terms with parameters in A. The models of D A are those models of L which are extensions of A.
Theorem 6.16. Let A be a model of C L = , and t , u two terms with parameters in A (and variables). Then :
i) if D A + C 0 ⊢ t = u, then D A + C 0 ⊢ λx t = λx u ;
ii) if D A + C 0 ⊢ (t )x = (u)x, where x is a variable which does not occur in t , u, then D A + C 0 ⊢ (E )t = (E )u.
D A + C 0 ⊢ F means : every extension of A satisfying C 0 satisfies F .
Proof of (i) : let B be an extension of A satisfying C 0 . Then B satisfies C L = and, by theorem 6.12, there exists an extension B′ of B which satisfies C L. By hypothesis, we have D A + C 0 ⊢ t = u, and hence B′ ⊨ t = u ; by weak extensionality, it follows that : B′ ⊨ λx t = λx u ; therefore, B ⊨ λx t = λx u.
Same proof for (ii).
Q.E.D.
A similar proof yields the following theorem :
Theorem 6.17. Let A be a model of EC L = , and t , u two terms with parameters in A where x does not occur. If D A + C 0 ⊢ (t )x = (u)x, then D A + C 0 ⊢ t = u.
4. Translation of λ-calculus

We define a model M 0 of L , called “ model over λ-terms ”, as follows : the domain M 0 is the quotient set Λ/≃β ; the constant symbols K , S are respectively interpreted by the (equivalence classes of the) λ-terms λxλy x and λxλyλz((x)z)(y)z ; the function symbol Ap is interpreted by the function u, t ↦ (u)t from M 0 × M 0 to M 0 .
Lemma 6.18. M 0 is a model of C L. For every term t ∈ Λ, we have (E )t ≃β λx(t )x, where x is any variable which does not occur free in t .
Here we will only use the definition of β-equivalence, not its properties shown in chapter 1.
We first prove that M 0 ⊨ C 0 : that is to say that (K )uv ≃β u and (S)uv w ≃β ((u)w)(v)w for all u, v, w ∈ Λ, which is clear in view of the interpretations of K and S.
Now we come to the second part of the lemma : since M 0 ⊨ C 0 , we have, by proposition 6.4 : (E )t = ((S)(K )t )I , with I = (S)K K . Looking again at the interpretations of K and S in M 0 , we obtain easily : I ≃β λx x, and then ((S)(K )t )I ≃β λx(t )x, which gives the desired result.
Then we prove that M 0 ⊨ Wext : suppose that M 0 ⊨ ∀x[(t )x = (u)x], with t , u ∈ Λ/≃β . Take a variable x which does not occur in t , u ; then (t )x ≃β (u)x, and therefore λx(t )x ≃β λx(u)x. Now we have seen that λx(t )x ≃β (E )t and λx(u)x ≃β (E )u. Thus (E )t ≃β (E )u and hence M 0 ⊨ (E )t = (E )u.
Finally, we show that M 0 is a model of C′1 , in other words, of the formulas : (E )K = K ; (E )(K )x = (K )x ; (E )S = S ; (E )(S)x = (S)x ; (E )(S)x y = (S)x y. So we need to prove that (E )K ≃β K ; (E )(K )x ≃β (K )x ; (E )S ≃β S ; (E )(S)x ≃β (S)x ; (E )(S)x y ≃β (S)x y. We have seen that (E )t ≃β λz(t )z, where z does not occur in t . Thus it remains to prove that : λx(K )x ≃β K ; λy(K )x y ≃β (K )x ; λx(S)x ≃β S ; λy(S)x y ≃β (S)x ; λz(S)x y z ≃β (S)x y. Now all these equivalences are trivial, in view of the interpretations of K and S in M 0 .
Q.E.D.
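The verification that M 0 satisfies C 0 amounts to two β-computations on the λ-terms interpreting K and S. As an illustration (not part of the text), here is a minimal normal-order β-normalizer on de Bruijn-indexed λ-terms, checking (K )uv ≃β u, (S)uv w ≃β ((u)w)(v)w, and (I )u ≃β u for I = (S)K K .

```python
# Our own encoding for the sketch: ('var', n), ('app', f, a), ('lam', body).
# De Bruijn indices avoid the variable-capture bookkeeping.

def shift(t, d, c=0):
    """Shift free indices >= c by d."""
    if t[0] == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if t[0] == 'app':
        return ('app', shift(t[1], d, c), shift(t[2], d, c))
    return ('lam', shift(t[1], d, c + 1))

def subst(t, s, n=0):
    """t with s substituted for de Bruijn index n."""
    if t[0] == 'var':
        if t[1] == n:
            return s
        return ('var', t[1] - 1) if t[1] > n else t
    if t[0] == 'app':
        return ('app', subst(t[1], s, n), subst(t[2], s, n))
    return ('lam', subst(t[1], shift(s, 1), n + 1))

def normalize(t):
    """β-normal form (terminates only on normalizing terms)."""
    if t[0] == 'app':
        f = normalize(t[1])
        if f[0] == 'lam':
            return normalize(subst(f[1], t[2]))
        return ('app', f, normalize(t[2]))
    if t[0] == 'lam':
        return ('lam', normalize(t[1]))
    return t

def app(f, *args):
    for a in args:
        f = ('app', f, a)
    return f

V = lambda n: ('var', n)
K = ('lam', ('lam', V(1)))                    # λxλy x
S = ('lam', ('lam', ('lam',                   # λxλyλz ((x)z)(y)z
        app(app(V(2), V(0)), app(V(1), V(0))))))

u, v, w = V(0), V(1), V(2)                    # free variables
assert normalize(app(K, u, v)) == u                              # (K)uv ≃β u
assert normalize(app(S, u, v, w)) == app(app(u, w), app(v, w))   # (S)uvw
assert normalize(app(S, K, K, u)) == u                           # (I)u ≃β u
```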
We define similarly a model M 1 of L , over the domain M 1 = Λ/≃βη (the quotient set of Λ by the βη-equivalence relation) ; again, the constant symbols K and S are interpreted by the (equivalence classes of the) terms λxλy x and λxλyλz((x)z)(y)z, and the function symbol Ap is interpreted by the function u, t ↦ (u)t from M 1 × M 1 to M 1 .
Lemma 6.19. M 1 is a model of EC L.
We only prove that M 1 ⊨ Ext (the other axioms are checked as above). Let t , u ∈ Λ/≃βη be such that M 1 ⊨ ∀x[(t )x = (u)x]. Take a variable x not occurring in t , u : we have (t )x ≃βη (u)x, thus λx(t )x ≃βη λx(u)x, and hence t ≃βη u. Therefore M 1 ⊨ t = u.
Q.E.D.
Recall that a combinatory algebra A (that is a model of C 0 ) is trivial if it contains only one element. Actually, A is trivial if and only if it is a model of the axiom 0 = 1, where 0 ≡ λxλy y ≡ (K )I and 1 ≡ λxλy x ≡ (E )K : indeed, if A ⊨ 0 = 1, then, for all a, b ∈ A, we have A ⊨ (0)ab = (1)ab, thus A ⊨ b = a, and hence A has only one element.
The axiom 0 = 1 is equivalent to K = S : indeed, from K = S, we deduce : (K )abc = (S)abc, thus (a)c = ((a)c)(b)c for all a, b, c ∈ A. Taking a = (K )I , b = (K )d , we obtain I = d for every d ∈ A, and therefore A is trivial.
Theorem 6.20. C L and EC L have a nontrivial model, and are not equivalent theories.
We have seen that M 1 ⊨ EC L, thus both C L and EC L have a nontrivial model (to make sure that M 1 is not trivial, notice, for instance, that two distinct variables of the λ-calculus may not be βη-equivalent, according to the Church-Rosser property for βη). On the other hand, M 0 is a model of C L, but not of EC L : indeed, let ξ, ζ ∈ Λ be such that ξ is a variable of the λ-calculus and ζ = λx(ξ)x, where x ≠ ξ. Then (ξ)t ≃β (ζ)t and hence M 0 ⊨ (ξ)t = (ζ)t for every t ∈ Λ. Now ξ and ζ are not β-equivalent and therefore M 0 ⊨ ξ ≠ ζ.
Q.E.D.
For every λ-term t , we define, inductively, a term t L of the language of combinatory logic :
if t is a variable, then t L = t (by convention, we identify variables of the λ-calculus and variables of the language L ) ;
if t = (u)v, then t L = (u L )v L ;
if t = λx u, then t L = λx u L .
Notice that the symbol λ is used here in two different ways : on the one hand in the λ-terms, and on the other hand in the terms of L .
Conversely, with each term t of the language L , we associate a λ-term t Λ , defined by induction on t :
K Λ = λxλy x ; S Λ = λxλyλz((x)z)(y)z ;
if t is a variable, then t Λ = t ;
if t = (u)v, then t Λ = (u Λ )v Λ .
Clearly, for every term t of L (with or without variables), t Λ is the value of t in both models M 0 and M 1 , when each variable of L is interpreted by itself (considered as an element of Λ). Therefore :
Lemma 6.21. Let t , u be two terms of L . If C L ⊢ t = u, then t Λ ≃β u Λ ; if EC L ⊢ t = u, then t Λ ≃βη u Λ .
Lemma 6.22. For every λ-term t , t LΛ ≃β t .
The proof is by induction on t . This is obvious in case t is a variable or t = (u)v. Suppose that t = λx u ; then t L = λx u L . Therefore, C L ⊢ (t L )x = u L (proposition 6.1). Thus, by lemma 6.21, we have (t LΛ )x ≃β u LΛ and, by induction hypothesis, u LΛ ≃β u. It follows that (t LΛ )x ≃β u, and hence : λx(t LΛ )x ≃β λx u = t . Now t L = λx u L , and hence C L ⊢ (E )t L = t L (proposition 6.5). Thus, by lemma 6.21, we have (E )t LΛ ≃β t LΛ .
On the other hand, by lemma 6.18, (E )t LΛ ≃β λx(t LΛ )x. It follows, finally, that t LΛ ≃β λx(t LΛ )x ≃β t .
Q.E.D.
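The translation t ↦ t L is, on the λ-abstraction case, exactly the bracket abstraction λx of section 1. The sketch below is our own illustration (the tuple encodings are ours, not the book's): it computes t L for λ-terms given as ('var', x), ('app', u, v) or ('lam', x, u), and shows that (λxλy x) L is a term of L that is provably, but not literally, equal to K , which is what lemma 6.23 accounts for.

```python
def app(f, *args):
    """Left-associated application on terms of L."""
    for a in args:
        f = ('app', f, a)
    return f

def occurs(x, t):
    return t == x or (isinstance(t, tuple) and any(occurs(x, s) for s in t[1:]))

def lam_cl(x, t):
    """λx t on terms of L (atoms 'K', 'S', variables, or ('app', u, v))."""
    if not occurs(x, t):
        return app('K', t)
    if t == x:
        return app('S', 'K', 'K')                 # I
    return app('S', lam_cl(x, t[1]), lam_cl(x, t[2]))

def to_cl(t):
    """t_L : translate a λ-term into a term of L."""
    if t[0] == 'var':
        return t[1]                    # variables are identified
    if t[0] == 'app':
        return app(to_cl(t[1]), to_cl(t[2]))
    return lam_cl(t[1], to_cl(t[2]))   # t = λx u  gives  λx u_L

# K_Λ = λxλy x translates to ((S)(K)K)I with I = (S)K K -- not literally K;
# lemma 6.23 is what makes C L prove it equal to K.
K_lambda = ('lam', 'x', ('lam', 'y', ('var', 'x')))
assert to_cl(K_lambda) == app('S', app('K', 'K'), app('S', 'K', 'K'))
```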
Lemma 6.23. For every term t of L , C L ⊢ t ΛL = t .
The proof is by induction on t . This is immediate whenever t is a variable or t = (u)v. It remains to examine the cases where t = K or t = S.
If t = K , then K Λ = λxλy x (λ-term), thus K ΛL = λxλy x (term of L ). Now C L ⊢ (K )x y = x (axioms C 0 ), and hence (by weak extensionality) : C L ⊢ λxλy(K )x y = λxλy x = K ΛL . Since C L ⊢ K = λxλy(K )x y (axioms C 1 ), it follows that C L ⊢ K = K ΛL .
If t = S, then S ΛL = λxλyλz((x)z)(y)z (term of L ). Now C L ⊢ (S)x y z = ((x)z)(y)z, thus, by weak extensionality : C L ⊢ λxλyλz(S)x y z = λxλyλz((x)z)(y)z = S ΛL . On the other hand : C L ⊢ S = λxλyλz(S)x y z (axioms C 1 ), and therefore C L ⊢ S = S ΛL .
Q.E.D.
Lemma 6.24. Let t , u ∈ Λ and v = u[t /x]. Then C L ⊢ v L = u L [t L /x].
The proof is by induction on u. This is immediate whenever u is a variable or u = (u 1 )u 2 . Suppose that u = λy u′ ; then, we have v = λy v′ , where v′ = u′ [t /x]. Thus, by induction hypothesis, C L ⊢ v′ L = u′ L [t L /x]. Now v L = λy v′ L and hence : C L ⊢ (v L )y = u′ L [t L /x] (proposition 6.1).
But we also have u L = λy u′ L , and therefore C L ⊢ (u L )y = u′ L . It follows that C L ⊢ (u L [t L /x])y = u′ L [t L /x], and hence C L ⊢ (u L [t L /x])y = (v L )y. By weak extensionality, we obtain C L ⊢ (E )v L = (E )u L [t L /x]. Now v L = λy v′ L , u L = λy u′ L , and therefore : C L ⊢ (E )v L = v L and C L ⊢ (E )u L = u L (proposition 6.5) ; thus C L ⊢ (E )u L [t L /x] = u L [t L /x]. Finally, we have C L ⊢ v L = u L [t L /x].
Q.E.D.
Theorem 6.25. Let t , u be two λ-terms. Then :
i) t ≃β u if and only if C L ⊢ t L = u L .
ii) t ≃βη u if and only if EC L ⊢ t L = u L .
This theorem means that the β- (resp. the βη-) equivalence is represented by the notion of consequence in C L (resp. EC L).
Proof of (i) : if C L ⊢ t L = u L , then t LΛ ≃β u LΛ by lemma 6.21, thus t ≃β u by lemma 6.22.
Conversely, suppose that t ≃β u. To prove that C L ⊢ t L = u L , we may suppose that t β0 u (that is to say : u is obtained from t by contracting one redex). The proof is then by induction on t ; t may not be a variable (there is no redex in a variable).
If t = λx t′ , then u = λx u′ with t′ β0 u′ . Thus C L ⊢ t′ L = u′ L (induction hypothesis) and, by weak extensionality, we have C L ⊢ λx t′ L = λx u′ L , that is to say C L ⊢ t L = u L .
If t = (t′ )t″ , then there are three possible cases for u :
u = (u′ )t″ , with t′ β0 u′ ; then C L ⊢ t′ L = u′ L (induction hypothesis), and therefore C L ⊢ (t′ L )t″ L = (u′ L )t″ L , that is to say C L ⊢ t L = u L .
u = (t′ )u″ , with t″ β0 u″ ; same proof.
t = (λx t′ )t″ and u = t′ [t″ /x]. By lemma 6.24, C L ⊢ u L = t′ L [t″ L /x] ; on the other hand, we have t L = (λx t′ L )t″ L and hence C L ⊢ t L = t′ L [t″ L /x] (proposition 6.1). Thus C L ⊢ t L = u L .
Proof of (ii) : if EC L ⊢ t L = u L , then t LΛ ≃βη u LΛ by lemma 6.21, thus t ≃βη u by lemma 6.22.
Conversely, suppose that t ≃βη u. To prove that EC L ⊢ t L = u L , we may suppose that t β0 u or t η0 u. If t β0 u, we obtain the desired result in view of (i). If t η0 u, the proof proceeds by induction on t (which may not be a variable) ; if t = (t′ )t″ , then u = (u′ )t″ or u = (t′ )u″ , with t′ η0 u′ or t″ η0 u″ . Thus the result follows from the induction hypothesis. If t = λx t′ , there are two possible cases for u :
u = λx u′ , with t′ η0 u′ . By induction hypothesis, EC L ⊢ t′ L = u′ L ; it follows, by weak extensionality, that EC L ⊢ λx t′ L = λx u′ L , that is to say EC L ⊢ t L = u L .
t = λx(u)x, x having no occurrence in u ; then t L = λx(u L )x, and therefore, by proposition 6.1, EC L ⊢ (t L )x = (u L )x. Using extensionality (since x does not occur free in t L , u L ), we conclude that EC L ⊢ t L = u L .
Q.E.D.
There is a “ canonical ” method for constructing a model of C L (resp. EC L) : let T be the set of all terms of L (with variables). We define on T an equivalence relation ∼ 0 (resp. ∼ 1 ) by taking : t ∼ 0 u ⇔ C L ⊢ t = u (resp. t ∼ 1 u ⇔ EC L ⊢ t = u). Then we have a model N 0 of C L (resp. a model N 1 of EC L) over the domain T /∼ 0 (resp. T /∼ 1 ), where the symbols K , S, Ap have obvious interpretations (take the canonical definition on the set of terms and then pass to the quotient set).
We now prove, for example, that N 0 ⊨ C L :
For those axioms of C L which are equations, the proof is immediate : for instance, the axiom (K )x y = x holds since, for all terms t , u of L , we have
Lambdacalculus, types and models
CL ⊢ (K)t u = t, and thus, by definition of N0, N0 ⊨ (K)t u = t.
It remains to check the weak extensionality axiom. Therefore, let t, u ∈ N0 be such that N0 ⊨ (t)x = (u)x for every x ∈ N0. Take x as a variable which does not occur in t, u. Then, by definition of N0: CL ⊢ (t)x = (u)x, thus CL ⊢ (E)t = (E)u, and hence N0 ⊨ (E)t = (E)u, which is the desired conclusion.
Proposition 6.26. M0 and N0 (resp. M1 and N1) are isomorphic models.
Consider the mapping t ↦ t_L of Λ into T; we know from theorem 6.25 that: t ≃β u ⇔ CL ⊢ t_L = u_L; that is to say: M0 ⊨ t = u ⇔ N0 ⊨ t_L = u_L. Therefore, this mapping induces an isomorphism from M0 into N0.
Now consider the mapping t ↦ t_Λ of T into Λ. By lemma 6.21, we have: N0 ⊨ t = u ⇒ M0 ⊨ t_Λ = u_Λ. Therefore, this mapping induces a homomorphism from N0 into M0. According to lemmas 6.22 and 6.23, these are inverse homomorphisms. The proof is similar for M1 and N1.
Q.E.D.
Theorem 6.27. Let t, t′ be two normalizable closed λterms which are not βηequivalent. Then ECL ⊢ t_L = t′_L ↔ 0 = 1.
In other words, the theory ECL + t_L = t′_L has no other model than the trivial one.
The proof is as follows:
We have seen that ECL ⊢ 0 = 1 → ∀x∀y{x = y}; thus ECL ⊢ 0 = 1 → t_L = t′_L.
Conversely, since t and t′ are normalizable closed terms which are not βηequivalent, in view of Böhm’s theorem (theorem 5.2), there exist closed terms t_1, . . . , t_n ∈ Λ such that (t)t_1 . . . t_n ≃βη 0 and (t′)t_1 . . . t_n ≃βη 1. It then follows from theorem 6.25 that:
ECL ⊢ (t_L)(t_1)_L . . . (t_n)_L = 0 and ECL ⊢ (t′_L)(t_1)_L . . . (t_n)_L = 1. Therefore:
ECL ⊢ t_L = t′_L → 0 = 1.
Q.E.D.
References for chapter 6 [Bar84], [Cur58], [Hin86]. (The references are in the bibliography at the end of the book).
Chapter 7
Models of lambdacalculus
1. Functional models
Given a set D, let F(D) denote the set of all functions from D^N into D which depend only on a finite number of coordinates; for every i ≥ 0, the i-th coordinate function will be denoted by x i+1. Therefore, any member of F(D) may be denoted by f(x 1 , . . . , x n ), for every large enough integer n.
For any two functions f(x 1 , . . . , x n ), g(x 1 , . . . , x p ) in F(D), we will denote the function f(x 1 , . . . , x i−1 , g(x 1 , . . . , x p ), x i+1 , . . . , x n ) ∈ F(D) by f[g/x i ]. Clearly, if f does not depend on the coordinate x i , then f[g/x i ] = f.
Let us consider a subset F of D^D and two functions Φ : D → F and Ψ : F → D. For all a, b ∈ D, define (a)b to be Φ(a)(b) (so D is an applicative structure). For every f ∈ F, Ψ(f) will also be denoted by λx f(x).
Let f, g ∈ F(D), f = f(x 1 , . . . , x n ), and g = g(x 1 , . . . , x n ). We define (f)g ∈ F(D), by taking [(f)g](a 1 , . . . , a n ) = (f(a 1 , . . . , a n ))g(a 1 , . . . , a n ), for all a 1 , . . . , a n ∈ D.
We now consider a subset F ∞ of F(D) such that:
0. If f ∈ F ∞, f = f(x 1 , . . . , x n ), and a 1 , . . . , a i−1 , a i+1 , . . . , a n ∈ D, then f(a 1 , . . . , a i−1 , x, a i+1 , . . . , a n ) ∈ F (1 ≤ i ≤ n).
For each f ∈ F ∞, f = f(x 1 , . . . , x n ), and for each coordinate x i , let us define λx i f ∈ F(D) to be the function g(x 1 , . . . , x i−1 , x i+1 , . . . , x n ) such that:
g(a 1 , . . . , a i−1 , a i+1 , . . . , a n ) = λx f(a 1 , . . . , a i−1 , x, a i+1 , . . . , a n )
for all a 1 , . . . , a i−1 , a i+1 , . . . , a n ∈ D. Thus λx i f does not depend on the coordinate x i . We now suppose that the following conditions hold:
1. Every coordinate function x i is in F ∞ ;
2. If f, g ∈ F ∞, then (f)g ∈ F ∞.
3. If f ∈ F ∞, then λx i f ∈ F ∞ for every i.
Sets D, F, F ∞, and functions Φ and Ψ satisfying conditions 0, 1, 2, 3 form what we will call a functional model of λcalculus.
Lemma 7.1. Let f, h ∈ F ∞, x, y be two distinct coordinate functions, and g = λy h. Then λy h[f/x] = g[f/x] provided that f does not depend on y. In particular, λy h[z/x] = g[z/x] for every coordinate z ≠ y.
Let f = f(x 1 , . . . , x n , x), h = h(x 1 , . . . , x n , x, y); for all a 1 , . . . , a n , b ∈ D, we have λy h(a 1 , . . . , a n , b, y) = g(a 1 , . . . , a n , b). In particular, if b = f(a 1 , . . . , a n , a), this gives:
λy h(a 1 , . . . , a n , f(a 1 , . . . , a n , a), y) = g(a 1 , . . . , a n , f(a 1 , . . . , a n , a))
which is the desired result.
Q.E.D.
Lemma 7.2. Let f ∈ F ∞ , and x, y be two distinct coordinates. If f does not depend on y, then λy f [y/x] = λx f . If f = f [x 1 , . . . , x n , x], then f [y/x] = f [x 1 , . . . , x n , y], which gives the result. Q.E.D.
We now define a mapping of the set L of λterms into F ∞, denoted by t ↦ ||t||. We assume that the variables of the λcalculus are x 1 , . . . , x n , . . . The definition is by induction on t:
• if t is the variable x i , then ||t|| is the coordinate function x i ;
• if t = (u)v, then ||t|| = (||u||)||v|| ;
• if t = λx u, then ||t|| = λx||u||.
Clearly, if the free variables of t are among x 1 , . . . , x k , then the function ||t|| ∈ F ∞ depends only on the coordinates x 1 , . . . , x k .
Lemma 7.3. Let t be a λterm, and f = ||t||; then ||t[z/x]|| = f[z/x] for all variables z except a finite number.
From now on, we will use the expression: “for almost all variables z” as an abbreviation for: “for all variables z except a finite number”.
The proof is by induction on t; the result is immediate if t is a variable, or t = (u)v, or t = λx u. Suppose t = λy u, where y ≠ x, and let g = ||u||; then f = λy g. Now ||t[z/x]|| = ||λy u[z/x]|| = λy||u[z/x]|| = λy g[z/x] for almost all variables z, by induction hypothesis. By lemma 7.1, we have λy g[z/x] = f[z/x] for almost all z; this completes the proof.
Q.E.D.
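The three inductive clauses defining t ↦ ||t|| can be mimicked concretely. The following Python sketch is entirely my own illustration (not the book's construction): environments play the role of points of D^N, and Python closures stand in for elements of F; it reproduces only the shape of the induction, not a genuine βmodel.

```python
# Terms: ('var', name) | ('app', t, u) | ('lam', name, body).
# interp mirrors the three clauses defining ||t||: a variable denotes a
# coordinate (environment lookup), an application uses Phi (here: a Python
# call), an abstraction uses Psi (here: building a closure).
def interp(t, env):
    tag = t[0]
    if tag == 'var':
        return env[t[1]]
    if tag == 'app':
        return interp(t[1], env)(interp(t[2], env))
    if tag == 'lam':
        x, body = t[1], t[2]
        return lambda d: interp(body, {**env, x: d})
    raise ValueError(tag)

# (λx x) applied to the "coordinate" y: the value of the free variable y.
term = ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
print(interp(term, {'y': 42}))  # 42
```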
Proposition 7.4. Let t, t′ be two λterms. If t ≡ t′ (t is αequivalent to t′), then ||t|| = ||t′||.
Proof by induction on t; the result is immediate if t is a variable, or t = (u)v.
If t = λx u, then t′ = λx′ u′ with u[z/x] ≡ u′[z/x′] for almost all variables z. Hence, by induction hypothesis, ||u[z/x]|| = ||u′[z/x′]||. Let g = ||u||, g′ = ||u′||; then ||t|| = λx g and ||t′|| = λx′ g′. By lemma 7.3, we have ||u[z/x]|| = g[z/x] and ||u′[z/x′]|| = g′[z/x′] for almost all variables z. Thus g[z/x] = g′[z/x′], and therefore: λz g[z/x] = λz g′[z/x′] for almost all variables z. Hence, by lemma 7.2, λx g = λx′ g′, that is ||t|| = ||t′||.
Q.E.D.
Therefore, we may consider t ↦ ||t|| as a mapping of Λ into F ∞.
Proposition 7.5. Let t, u ∈ Λ, and f = ||t||, g = ||u||. Then ||u[t/x]|| = g[f/x].
Proof by induction on u; this is immediate whenever u is a variable or u = (v)w. If u = λy v, then take y not free in t (thus f does not depend on the coordinate y), and let ||v|| = h. Then ||u[t/x]|| = ||λy v[t/x]|| = λy||v[t/x]|| = λy h[f/x] (by induction hypothesis). Now, by definition of ||u||, we have g = λy h. Therefore, by lemma 7.1, λy h[f/x] = g[f/x].
Q.E.D.
Now consider the following assumption:
(β) Φ◦Ψ is the identity function on F; in other words:
(λx f(x))a = f(a) for all a ∈ D and f ∈ F.
Under this assumption, f ↦ λx f is obviously a oneone mapping of F into D. Any functional model satisfying (β) will be called a functional βmodel.
Lemma 7.6. In any βmodel, we have (λx g)f = g[f/x], for every coordinate x and all f, g ∈ F ∞.
Let f = f[x 1 , . . . , x n , x], g = g[x 1 , . . . , x n , x] and λx g = g′[x 1 , . . . , x n ]. By (β), we have (g′[a 1 , . . . , a n ])b = g[a 1 , . . . , a n , b], for all a 1 , . . . , a n , b ∈ D. Thus, by taking b = f[a 1 , . . . , a n , a], we obtain:
(g′[a 1 , . . . , a n ])f[a 1 , . . . , a n , a] = g[a 1 , . . . , a n , f[a 1 , . . . , a n , a]]
which yields the result.
Q.E.D.
The following proposition explains the name “βmodel”.
Proposition 7.7. In any βmodel, if t, t′ ∈ Λ and t ≃β t′, then ||t|| = ||t′||.
We may suppose that t β0 t′ (t′ is obtained from t by one single βreduction). The proof is by induction on t; t is not a variable (if it were, no βreduction could be made on it).
If t = λx u, then t′ = λx u′, with u β0 u′; by induction hypothesis, ||u|| = ||u′||, thus λx||u|| = λx||u′||, that is to say ||t|| = ||t′||.
If t = (u)v, then there are three possible cases for t′:
t′ = (u′)v with u β0 u′; then ||u|| = ||u′||, by induction hypothesis, and therefore (||u||)||v|| = (||u′||)||v||, that is ||t|| = ||t′||.
t′ = (u)v′ with v β0 v′; same proof.
t = (λx v)u and t′ = v[u/x]; let f = ||u||, g = ||v||; then ||t|| = (λx g)f and ||t′|| = g[f/x] (proposition 7.5). Thus ||t|| = ||t′|| by lemma 7.6.
Q.E.D.
Proposition 7.8. Every βmodel is a model of the ScottMeyer axioms (and hence it provides a model of CL, see chapter 6, pages 98–99).
We define a model of the ScottMeyer axioms, where the domain is D, Ap is the function (a, b) ↦ (a)b from D × D to D, e = λxλy(x)y, k = λxλy x, and s = λxλyλz((x)z)(y)z. Indeed, it is obvious from condition (β) that (k)xy = x, (s)xyz = ((x)z)(y)z and (e)xy = (x)y.
In order to check the weak extensionality axiom, suppose that (a)x = (b)x for all x ∈ D; define f[x, y] ∈ F ∞ by taking f[x, y] = (x)y (conditions 1, 2 of the definition of functional models). By definition of F, both functions x ↦ (a)x and x ↦ (b)x are in F; now they are assumed to be equal, and hence λx(a)x = λx(b)x. Moreover, by definition of e, according to condition (β), we have (e)a = λx(a)x, (e)b = λx(b)x. Thus (e)a = (e)b.
Q.E.D.
A βmodel is called trivial if it has only one element. A nontrivial βmodel is necessarily infinite, since it is a model of the ScottMeyer axioms, and hence a combinatory algebra (cf. proposition 6.2). Remark. All functions of F ∞ used in the proof of proposition 7.8 have at most three arguments. Therefore, a model of the ScottMeyer axioms can be obtained whenever the following elements are given : • an applicative structure D ; thus we have a function a, b 7→ (a)b from D × D to D. • a set F 3 of functions from D × D × D to D, such that : the three coordinate functions are in F 3 ; whenever f , g ∈ F 3 , then ( f )g ∈ F 3 ; • a function f 7→ λx f from F to D such that (λx f )a = f (a) for all f ∈ F and a ∈ D ; here F is defined as the set of functions from D to D obtained by replacing, in every function of F 3 , two of the three variables by arbitrary elements of D ; • it is assumed that, whenever f (x 1 , x 2 , x 3 ) ∈ F 3 , then λx i f ∈ F 3 (i = 1, 2, 3).
Consider a βmodel (D, F, F ∞); we may define another model of the ScottMeyer axioms, over the domain F ∞, where Ap is the function f, g ↦ (f)g, and e = λxλy(x)y, k = λxλy x, s = λxλyλz((x)z)(y)z. Indeed, by lemma 7.6, we have: (k)f g = f, (s)f g h = ((f)h)(g)h, (e)f g = (f)g, for all f, g, h ∈ F ∞.
We now check the weak extensionality axiom: suppose that (f)h = (g)h for all h ∈ F ∞; take h as any coordinate function x, on which f and g do not depend. Then we have (f)x = (g)x, thus λx(f)x = λx(g)x. It follows that (e)f = (e)g because, from the definition of e and lemma 7.6, we have (e)f = λx(f)x and (e)g = λx(g)x.
Proposition 7.9. Let (D, F, F ∞) be a βmodel; then the following conditions are equivalent:
i) the extensionality axiom is satisfied in the model D;
ii) f ↦ λx f is a mapping of F onto D (thus it is a oneone correspondence);
iii) λx(a)x = a for every a ∈ D;
iv) Ψ◦Φ is the identity function on D.
If these conditions hold, then the βmodel under consideration is said to be extensional.
(iii) ⇒ (ii) is obvious.
(i) ⇒ (iii): for every b ∈ D, we have (a)b = (a′)b, where a′ = λx(a)x (by condition (β)). Therefore, a = a′ by extensionality.
(ii) ⇒ (i): let a, b ∈ D be such that (a)c = (b)c for every c ∈ D; by hypothesis, there exist f, g ∈ F such that a = λx f, b = λx g. Therefore (λx f)c = (λx g)c, and hence f(c) = g(c) (by (β)) for every c ∈ D. Thus f = g, and therefore λx f = λx g and a = b.
Finally, (ii) ⇔ (iv): indeed, condition (ii) means that Ψ is onto; since we know that Φ◦Ψ is the identity function on F, we see that Ψ◦Φ is the identity function on D.
Q.E.D.
Remark. Conversely, every model D of the ScottMeyer axioms can be obtained from a functional βmodel: take F as the set of functions of the form x ↦ (a)x, where a ∈ D, and F ∞ as the set of functions of the form t[x 1 , . . . , x k ], where t is a term of L written with the indicated variables. For all a 2 , . . . , a k , there exists a ∈ D such that t[x, a 2 , . . . , a k ] = (a)x (combinatory completeness of D). Thus condition 0 of the definition of functional models is satisfied. Clearly, conditions 1 and 2 also hold.
Let f ∈ F be such that f(x) = (a)x; define λx f(x) = (e)a. This is a correct definition: indeed, if f(x) = (a′)x, then (e)a = (e)a′, by weak extensionality. Condition (β) is satisfied: (λx f(x))c = ((e)a)c = (a)c = f(c).
Finally, we check that condition 3 is satisfied : let f ∈ F ∞ be defined by some term t [x, x 1 , . . . , x k ] of L ; consider the term u = λx t (here, and only here, λx is taken in the sense of chapter 6), and let g ∈ F ∞ be the corresponding function. Then we have (u)x = t in D, and hence (g )x = f . Thus (g (a 1 , . . . , a k ))c = f (c, a 1 , . . . , a k ) for all a 1 , . . . , a k , c ∈ D ; therefore, by definition : λx f (x, a 1 , . . . , a k ) = (e)g (a 1 , . . . , a k ). Thus we have λx f (x, x 1 , . . . , x k ) = (e)g (x 1 , . . . , x k ) and this function is defined by the term (e)u, so it is in F ∞ .
2. Spaces of continuous increasing functions
We will say that an ordered set D is σcomplete if every increasing sequence a n (n ∈ N) of elements of D has a least upper bound. This least upper bound will be denoted by sup_n a n .
Let D, D′ be two σcomplete ordered sets, and f : D → D′ an increasing function. We will say that f is σcontinuous increasing (σc.i.) if, for every increasing sequence (a n ) in D, we have f(sup_n a n ) = sup_n f(a n ).
Let D, D′, E be σcomplete ordered sets. We may define a structure of σcomplete ordered set on the cartesian product D × D′, by putting: (a, b) ≤ (a′, b′) ⇔ a ≤ a′ and b ≤ b′.
A function f : D × D′ → E is σcontinuous increasing if and only if it is separately σcontinuous increasing (that is to say: for all a ∈ D and a′ ∈ D′, f(x, a′) and f(a, x′) are σc.i. functions). The proof is immediate.
Let D, D′ be two σcomplete ordered sets. We may define a structure of σcomplete ordered set on the set C(D, D′) of all σc.i. functions from D to D′, by putting: f ≤ g ⇔ f(a) ≤ g(a) for every a ∈ D.
If f n (n ∈ N) is an increasing sequence in C(D, D′), its least upper bound is the function f : D → D′ defined by f(a) = sup_n f n (a). Indeed, f is clearly increasing; we show that it is also σcontinuous: let a k (k ∈ N) be an increasing sequence in D, and a = sup_k a k . Then f(a) = sup_n f n (a) = sup_n sup_k f n (a k ) = sup_{n,k} f n (a k ) = sup_k sup_n f n (a k ) = sup_k f(a k ).
The next proposition provides a very useful method for constructing functional βmodels (and therefore models of combinatory logic).
Proposition 7.10. The following data define a functional model of λcalculus:
a σcomplete ordered set D ;
a σc.i. function Φ : D → C (D, D) ; a σc.i. function Ψ : C (D, D) → D. This model is a βmodel if and only if Φ◦Ψ = I d (on C (D, D)). This βmodel is extensional if and only if we have also : Ψ◦Φ = I d (on D). In this case, for all a, b ∈ D, a ≤ b if and only if (a)c ≤ (b)c for every c ∈ D. For all a, b ∈ D, define (a)b = Φ(a)(b) ; then D is an applicative structure, and the function (a, b) 7→ (a)b from D × D to D is σc.i. (obviously, it is separately σc.i.). Let F = C (D, D) and take F ∞ as the set of all σc.i. functions from D N to D which depend only on a finite number of coordinates. For every f ∈ F , we put, by definition, λx f (x) = Ψ( f ). It remains to check conditions 1, 2, 3 of the definition of functional models. It is obvious that each coordinate x i is in F ∞ . If f , g ∈ F ∞ , then ( f )g is σc.i. (since (a, b) 7→ (a)b is σc.i.) and depends only on a finite number of coordinates ; thus ( f )g ∈ F ∞ . Finally, let f (x, x 1 , . . . , x k ) ∈ F ∞ . Then (a 1 , . . . , a k ) 7→ f (x, a 1 , . . . , a k ) is a σc.i. function from D k to F . Hence (a 1 , . . . , a k ) 7→ λx f (x, a 1 , . . . , a k ) is σc.i. from D k to D, which proves that λx f ∈ F ∞ . The model obtained above is a βmodel if and only if Φ◦Ψ = I d on C (D, D) (by definition of βmodels). This βmodel is extensional if and only if we have, also : Ψ◦Φ = I d on D (according to proposition 7.9.iv). Finally, if (a)c ≤ (b)c for every c ∈ D, then Φ(a) ≤ Φ(b), thus Ψ(Φ(a)) ≤ Ψ(Φ(b)), since Ψ is increasing, and therefore a ≤ b. Q.E.D.
3. Spaces of initial segments
Let D be a countable preordered set (recall that a preorder is a reflexive and transitive binary relation), the preorder on D being denoted by ≤. A subset a of D will be called an initial segment if, for all α ∈ a and β ≤ α, we have β ∈ a.
Let a ⊂ D; the least initial segment containing a is denoted by ā; it is the set of lower bounds of the elements of a. We will denote by S(D) the space of initial segments of D; the inclusion relation makes of S(D) a σcomplete ordered set.
The set of finite subsets of D will be denoted by D∗. On D∗, we define a preorder, still denoted by ≤, by putting: a ≤ b ⇔ ā ⊂ b̄ ⇔ every member of a is a lower bound of an element of b.
Consider two countable preordered sets D and E; let D = S(D), E = S(E).
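To make the closure operation a ↦ ā concrete, here is a small Python sketch (my own illustration; the divisibility preorder on {1, …, 12} is just an arbitrary example) computing the least initial segment containing a given set:

```python
def down(a, D, leq):
    # Least initial segment containing a: all beta in D lying below
    # some alpha in a for the preorder leq.
    return {b for b in D if any(leq(b, alpha) for alpha in a)}

D = set(range(1, 13))
divides = lambda b, a: a % b == 0   # b <= a  iff  b divides a
print(down({6}, D, divides))        # {1, 2, 3, 6}
```

The same function, applied to a finite a ∈ D∗, computes the ā used in the preorder a ≤ b ⇔ ā ⊂ b̄.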
For every f ∈ C(D, E), we define the trace of f, denoted by tr(f), which is a subset of D∗×E:
tr(f) = {(a, α) ∈ D∗×E ; α ∈ f(ā)}.
Proposition 7.11. The function tr is an isomorphism of ordered sets from C(D, E) onto the space S(D∗×E) of initial segments of D∗×E with the product preorder: (a, α) ≤ (b, β) ⇔ a ≥ b and α ≤ β. For every X ∈ S(D∗×E), we have X = tr(f), where f ∈ C(D, E) is defined by:
f(u) = {β ∈ E ; (∃a ∈ D∗)(a ⊂ u and (a, β) ∈ X)}.
Let f ∈ C(D, E); then tr(f) is an initial segment of D∗×E: indeed, if (b, β) ∈ tr(f) and (a, α) ≤ (b, β), then β ∈ f(b̄), ā ⊃ b̄ and α ≤ β. Thus α ∈ f(b̄) (since f(b̄) is an initial segment of E) and, since f is increasing, we have f(b̄) ⊂ f(ā), and therefore α ∈ f(ā).
Let f, g ∈ C(D, E); if f ≤ g, then tr(f) ⊂ tr(g): indeed, if (a, α) ∈ tr(f), then α ∈ f(ā), and hence α ∈ g(ā), since f(ā) ⊂ g(ā).
Conversely, we prove that tr(f) ⊂ tr(g) ⇒ f ≤ g: first, let a be a finite subset of D; if α ∈ f(ā), then α ∈ g(ā) (since tr(f) ⊂ tr(g)), and hence f(ā) ⊂ g(ā).
Now let a be an initial segment of D; since D is countable, we have, for instance, a = {α 0 , . . . , α n , . . .}. Let a n = {α 0 , . . . , α n } ∈ D∗; ā_n is an increasing sequence, the union of which is a. From what has just been proved, we deduce that f(ā_n) ⊂ g(ā_n). Since both f and g are σc.i., we therefore have: f(a) = ∪_n f(ā_n) ⊂ ∪_n g(ā_n) = g(a).
Thus, tr is an isomorphism of ordered sets from C(D, E) into S(D∗×E). It remains to prove that its image is the whole set S(D∗×E). Let X ∈ S(D∗×E); we define f : D → E by taking f(u) = {β ∈ E ; ∃a ∈ D∗, a ⊂ u, (a, β) ∈ X} for every u ∈ D. Indeed, f(u) is an initial segment of E: if β′ ≤ β ∈ f(u), then there exists a ∈ D∗ such that a ⊂ u and (a, β) ∈ X. We have (a, β′) ≤ (a, β) in D∗×E, thus (a, β′) ∈ X, and hence β′ ∈ f(u).
Obviously, f is increasing; it is also σcontinuous: indeed, let u n be an increasing sequence in D, and u = ∪_n u n . We have f(u n ) ⊂ f(u) for all n, thus ∪_n f(u n ) ⊂ f(u). Conversely, if β ∈ f(u), then there exists a ∈ D∗ such that a ⊂ u and (a, β) ∈ X. Since a is finite, we have a ⊂ u n for some n, and therefore β ∈ f(u n ). Thus f(u) ⊂ ∪_n f(u n ).
Finally, we prove that tr(f) = X: indeed, if (a, β) ∈ X, then, by definition of f, we have β ∈ f(ā) (since a ⊂ ā); thus (a, β) ∈ tr(f). Conversely, if (a, β) ∈ tr(f), then β ∈ f(ā), and hence, by definition of f, there exists a′ ∈ D∗, a′ ⊂ ā, such that (a′, β) ∈ X. Since a′ ⊂ ā, we have a′ ≤ a, thus (a, β) ≤ (a′, β), and hence (a, β) ∈ X, since X is an initial segment.
Q.E.D.
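The trace correspondence of proposition 7.11 can be checked mechanically on a finite example. The sketch below is my own (it takes the trivial preorder, so ā = a and every subset is an initial segment): it computes tr(f) over a finite universe and rebuilds f from its trace.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def trace(f, D):
    # tr(f) = {(a, alpha) : alpha in f(a)}, a ranging over finite subsets of D
    return {(a, alpha) for a in powerset(D) for alpha in f(a)}

def from_trace(X):
    # inverse of tr: f(u) = {beta : exists a ⊆ u with (a, beta) in X}
    return lambda u: {beta for (a, beta) in X if a <= u}

D = {0, 1, 2}
f = lambda u: set(u)               # the identity, a sigma-c.i. function
g = from_trace(trace(f, D))
assert all(g(u) == f(u) for u in powerset(D))
```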
We now consider a countable set D, and a function i : D∗×D → D. If a = {α 1 , . . . , α n } ∈ D∗ and α ∈ D, then i(a, α) will be denoted by a → α, or {α 1 , . . . , α n } → α. We assume that a preorder is given on D; we denote it by ≤ (as well as its extension to D∗, defined above). Let D = S(D).
We wish to define two σc.i. functions Φ : D → C(D, D) and Ψ : C(D, D) → D. This will be done as follows: there is a natural way of associating with the function i : D∗×D → D two functions on the power sets, denoted by i and i⁻¹:
i : P(D∗×D) → P(D) and i⁻¹ : P(D) → P(D∗×D).
Let s : P(D) → S(D) and s′ : P(D∗×D) → S(D∗×D) be the functions defined by: s(X) (resp. s′(X)) = X̄ = the least initial segment containing X (X is any subset of D (resp. D∗×D), and X̄ is the set of lower bounds of elements of X). Thus we may define:
ϕ = s′◦i⁻¹ : S(D) → S(D∗×D) and ψ = s◦i : S(D∗×D) → S(D).
Now, by proposition 7.11, tr is an isomorphism of ordered sets from C(D, D) onto S(D∗×D). Let tr⁻¹ : S(D∗×D) → C(D, D) be the inverse function. Then, we may define:
Φ = tr⁻¹◦ϕ : S(D) → C(D, D) and Ψ = ψ◦tr : C(D, D) → S(D).
Since i, i⁻¹, s, s′ are σc.i. functions, Φ and Ψ are also σc.i. Thus, by proposition 7.10, (D, Φ, Ψ) defines a functional model of λcalculus.
Lemma 7.12.
1. u ⊃ Ψ◦Φ(u) (≡ λx(u)x) for every u ∈ D if and only if, for all α, β ∈ D and a, b ∈ D∗: b ≥ a and β ≤ α ⇒ (b → β) ≤ (a → α).
2. u ⊂ Ψ◦Φ(u) for every u ∈ D if and only if, for every γ ∈ D, there exist α, β ∈ D and a, b ∈ D∗ such that: b ≥ a, β ≤ α and (a → α) ≤ γ ≤ (b → β). In particular, if i is onto, then u ⊂ Ψ◦Φ(u) for every u ∈ D.
Let u ∈ D; then Ψ◦Φ(u) = ψ◦ϕ(u) = s◦i◦s′◦i⁻¹(u); now
s′◦i⁻¹(u) = {(b, β) ∈ D∗×D ; (∃(a, α) ∈ D∗×D) (b, β) ≤ (a, α), i(a, α) ∈ u}.
Hence Ψ◦Φ(u) = {γ ∈ D ; (∃(a, α), (b, β) ∈ D∗×D) γ ≤ i(b, β), (b, β) ≤ (a, α), i(a, α) ∈ u}.
1.
Suppose that (b, β) ≤ (a, α) ⇒ i (b, β) ≤ i (a, α) (i is a homomorphism with respect to ≤) ; then it is immediate that Ψ◦Φ(u) ⊂ u for every u ∈ D. Conversely, suppose that Ψ◦Φ(u) ⊂ u for every u ∈ D, and let α, β ∈ D and a, b ∈ D ∗ be such that (b, β) ≤ (a, α). Take u as the set of lower bounds of i (a, α), and let γ = i (b, β). It follows immediately that γ ∈ Ψ◦Φ(u), thus γ ∈ u, and therefore i (b, β) ≤ i (a, α).
2. Suppose that, for every γ ∈ D, there exist α, β ∈ D, a, b ∈ D ∗ such that : (b, β) ≤ (a, α) and i (a, α) ≤ γ ≤ i (b, β). If γ ∈ u, then i (a, α) ∈ u since u is an initial segment, thus γ ∈ Ψ◦Φ(u). Conversely, suppose u ⊂ Ψ◦Φ(u) for every u ∈ D. Let γ ∈ D, and take u as the set of lower bounds of γ. Then γ ∈ Ψ◦Φ(u), and hence there exist α, β ∈ D and a, b ∈ D ∗ such that : γ ≤ i (b, β) ; (b, β) ≤ (a, α) ; i (a, α) ∈ u. Therefore, i (a, α) ≤ γ. Q.E.D.
We may give explicit definitions of Ψ and Φ: let f ∈ C(D, D); then Ψ(f) = s◦i(tr(f)), that is:
Ψ(f) = {β ∈ D ; (∃α ∈ D)(∃a ∈ D∗) β ≤ i(a, α) and α ∈ f(ā)}.
Now let u, v ∈ D; then tr(Φ(u)) = ϕ(u); thus, by proposition 7.11 (where we take X = ϕ(u)):
Φ(u)(v) = {β ∈ D ; ∃b ∈ D∗, b ⊂ v, (b, β) ∈ ϕ(u)}, that is to say
Φ(u)(v) = {β ∈ D ; ∃a, b ∈ D∗, ∃α ∈ D, b ⊂ v, (b, β) ≤ (a, α), i(a, α) ∈ u}.
Now condition (b, β) ≤ (a, α) may be written b̄ ⊃ ā and β ≤ α. Since v is an initial segment and b ⊂ v, we have b̄ ⊂ v, and hence a ⊂ v. Finally:
Φ(u)(v) = {β ∈ D ; (∃α ∈ D)(∃a ∈ D∗) a ⊂ v, β ≤ α, i(a, α) ∈ u}.
The model defined by (D, Φ, Ψ) is a functional βmodel if and only if Φ◦Ψ is the identity function on C(D, D), or, equivalently, ϕ◦ψ is the identity function on S(D∗×D) (since tr is an isomorphism). Now, if X ∈ S(D∗×D), then ψ(X) = {β ; (∃(a, α) ∈ X) β ≤ i(a, α)}. Thus:
ϕ◦ψ(X) = {(c, γ) ; (∃(a, α), (b, β) ∈ D∗×D) (c, γ) ≤ (b, β), i(b, β) ≤ i(a, α) and (a, α) ∈ X}.
Clearly, X ⊂ ϕ◦ψ(X); ϕ◦ψ is the identity function if and only if, for every initial segment X of D∗×D:
(c, γ) ≤ (b, β), i(b, β) ≤ i(a, α), and (a, α) ∈ X ⇒ (c, γ) ∈ X.
By taking (c, γ) = (b, β), and X as the set of lower bounds of (a, α), we see that this condition can be written: i(b, β) ≤ i(a, α) ⇒ (b, β) ≤ (a, α), or, equivalently:
(b → β) ≤ (a → α) ⇒ b ≥ a and β ≤ α.
Let us notice that, if D ≠ ∅, the βmodel (D, Φ, Ψ) is nontrivial: indeed, it has at least two elements, namely ∅ and D.
The model (D, Φ, Ψ) is extensional if and only if we have, also, Ψ◦Φ(u) = u for every u ∈ D. By applying lemma 7.12, we obtain the following conditions:
i(b, β) ≤ i(a, α) ⇔ (b, β) ≤ (a, α) for all α, β ∈ D and a, b ∈ D∗;
for every γ ∈ D, there exist α, β ∈ D and a, b ∈ D∗ such that i(a, α) ≤ γ ≤ i(b, β) and (b, β) ≤ (a, α). Now, by the previous condition, we therefore have i(b, β) ≤ i(a, α), and hence γ ≤ i(a, α) ≤ γ.
So we have proved:
Theorem 7.13. Let D be a countable preordered set, and i a function from D∗×D into D. Define Φ : D → C(D, D) and Ψ : C(D, D) → D as follows:
Φ(u)(v) (also denoted by (u)v) = {β ∈ D ; (∃α ≥ β)(∃a ∈ D∗) a ⊂ v and (a → α) ∈ u};
Ψ(f) (also denoted by λx f(x)) = {β ∈ D ; (∃α ∈ D)(∃a ∈ D∗) β ≤ (a → α) and α ∈ f(ā)}.
Then (D, Φ, Ψ) defines a functional model of λcalculus, which is a βmodel (necessarily nontrivial) if and only if:
(b → β) ≤ (a → α) ⇒ b ≥ a and β ≤ α, for all α, β ∈ D and a, b ∈ D∗.
(D, Φ, Ψ) is an extensional βmodel if and only if:
1. (b → β) ≤ (a → α) ⇔ b ≥ a and β ≤ α, for all α, β ∈ D and a, b ∈ D∗.
2. For every γ ∈ D, there exist α ∈ D and a ∈ D∗ such that γ ≤ (a → α) ≤ γ.
In particular, if i is onto, and if condition 1 is satisfied, then (D, Φ, Ψ) is an extensional βmodel.
Nonextensional models (P(ω) and Engeler’s model)
Here we take D as any countable set with the trivial preorder: α ≤ β ⇔ α = β. The induced preorder on D∗ is: a ≤ b ⇔ a ⊂ b. We have ā = a for every a ∈ D∗. Any subset of D is an initial segment, thus D = P(D). We take i as any oneone function from D∗×D to D. Clearly, the following condition holds:
(b → β) ≤ (a → α) ⇒ b ≥ a and β ≤ α.
We therefore have a βmodel of λcalculus. Note that, in this case, the definitions of Φ and Ψ are:
(u)v = {α ∈ D ; ∃a ∈ D∗, a ⊂ v and (a → α) ∈ u} for all u, v ∈ D;
λx f(x) = {a → α ; α ∈ D, a ∈ D∗ and α ∈ f(a)} for every f ∈ C(D, D).
By lemma 7.12(1), this model does not satisfy the condition u ⊃ Ψ◦Φ(u), so it cannot be extensional; indeed, this condition can be written: b ≥ a and β ≤ α ⇒ (b → β) ≤ (a → α), or equivalently: b ⊃ a and α = β ⇒ b = a and α = β, which obviously does not hold.
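These two definitions can be tried out directly in Python. The sketch below is my own finite illustration: pairs a → α are represented as tuples (frozenset, α), and λx f(x) is cut down to the finite subsets of a fixed universe, which is enough to observe (λx f(x))v = f(v) for a continuous f on that universe.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def app(u, v):
    # (u)v = {alpha : exists a ⊆ v with (a -> alpha) in u}
    return {alpha for (a, alpha) in u if a <= v}

def lam(f, universe):
    # finite cut of  λx f(x) = {a -> alpha : alpha in f(a)}
    return {(a, alpha) for a in powerset(universe) for alpha in f(a)}

U = frozenset({0, 1, 2})
identity = lam(lambda v: set(v), U)
assert app(identity, frozenset({0, 2})) == {0, 2}   # beta: (λx x)v = v
const7 = lam(lambda v: {7}, U)
assert app(const7, frozenset()) == {7}              # beta: (λx 7)v = {7}
```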
We obtain Plotkin and Scott’s model P(ω) by taking D = N; i is the “standard” onetoone function from D∗×D onto D defined by:
i(e, n) = (m + n)(m + n + 1)/2 + n, where m = Σ_{k∈e} 2^k.
Engeler’s model D_A is obtained as follows: let A be either a finite or a countable set, and D be the least set containing A such that α ∈ D, a ∈ D∗ ⇒ (a, α) ∈ D (it is assumed that none of the members of A are ordered pairs). The oneone function i : D∗×D → D is defined by taking i(a, α) = (a, α).
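The coding used in P(ω) is easy to implement and test (my own sketch): a finite set e is coded in binary as m = Σ_{k∈e} 2^k, and the pair (m, n) is then fed to the Cantor pairing function, which makes i a bijection from D∗×D onto N.

```python
def i(e, n):
    # code the finite set e in binary, then Cantor-pair the result with n
    m = sum(2 ** k for k in e)
    return (m + n) * (m + n + 1) // 2 + n

# i is one-to-one on a small sample of D* x D
seen = {}
for e in [(), (0,), (1,), (0, 1), (2,)]:
    for n in range(6):
        v = i(frozenset(e), n)
        assert v not in seen
        seen[v] = (e, n)
```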
Extensional models
Theorem 7.14. Let D be a countable set, i a onetoone mapping of D∗×D onto D, and ≤0 a preorder on D such that: (b → β) ≤0 (a → α) ⇒ a ≤0 b and β ≤0 α. Then, there exists a preorder on D, which we denote by ≤ (as well as its extension to D∗), with the following properties:
i) β ≤0 α ⇒ β ≤ α;
ii) (b → β) ≤ (a → α) ⇔ a ≤ b and β ≤ α.
Remark. In view of theorem 7.13, we therefore obtain a nontrivial extensional βmodel. We have the following definitions for the functions Φ and Ψ in this βmodel (u, v range over D, while f ranges over C(D, D)):
Φ(u)v ≡ (u)v = {α ∈ D ; ∃a ∈ D∗, a ⊂ v and (a → α) ∈ u};
Ψ(f) ≡ λx f(x) = {a → α ; α ∈ D, a ∈ D∗, α ∈ f(ā)}.
Indeed, if β ≤ (a → α) and α ∈ f(ā), then β = a′ → α′, with a ≤ a′ (thus ā ⊂ ā′) and α′ ≤ α. Hence α′ ∈ f(ā), and finally α′ ∈ f(ā′).
Proof of the theorem : let R be a preorder on D ; the corresponding preorder on D ∗ will be denoted by R ∗ . Thus, by definition, for all a, b ∈ D ∗ : a R ∗ b ⇔ (∀α ∈ a)(∃β ∈ b)α R β. Consider the following condition, relative to the preorder R : (C ) a R ∗ b and β R α ⇒ (b → β)R(a → α) for all a, b ∈ D ∗ and α, β ∈ D. The intersection S of any set R of preorders which satisfy condition (C ) still satisfies (C ) : indeed, if a S ∗ b and β S α, then, clearly, a R ∗ b and β R α for every R ∈ R. Hence, (b → β)R(a → α), and therefore (b → β)S(a → α). This allows us to define the least preorder R 0 on D which contains ≤0 and satisfies condition (C ) (R 0 is the intersection of all preorders which satisfy these conditions ; there exists at least one such preorder, namely D×D). Now, since i is onetoone, we can define a binary relation S 0 on D, by putting :
(b → β) S 0 (a → α) ⇔ a R 0∗ b and β R 0 α.
Obviously, S 0 is a preorder, because i is oneone, and both R 0 and R 0∗ are preorders. Now S 0 ⊂ R 0 , since R 0 satisfies condition (C). It follows immediately that S 0∗ ⊂ R 0∗ .
Let a, b ∈ D∗, α, β ∈ D be such that a S 0∗ b and β S 0 α; then we have a R 0∗ b and β R 0 α, and hence, by definition of S 0 : (b → β) S 0 (a → α). Thus S 0 satisfies condition (C).
Moreover, S 0 contains ≤0 : indeed, if β ≤0 α, then α = (a′ → α′) and β = (b′ → β′) (since i is onto); by the hypothesis on the preorder ≤0 , we have a′ ≤0 b′ and β′ ≤0 α′, and thus (b′ → β′) S 0 (a′ → α′) by definition of S 0 .
By the minimality of R 0 , it follows that R 0 ⊂ S 0 , and therefore R 0 = S 0 . Thus, by definition of S 0 :
(b → β) R 0 (a → α) ⇔ a R 0∗ b and β R 0 α.
So R 0 satisfies conditions (i) and (ii) of the theorem, and can be taken as the desired preorder ≤.
Proposition 7.15. For all α, β ∈ D, we have β ≤ α if and only if there exist k ≥ 0, a 1 , . . . , a k , b 1 , . . . , b k ∈ D∗ and α0 , β0 ∈ D, such that a i ≤ b i (1 ≤ i ≤ k), β0 ≤0 α0 and α = a 1 , . . . , a k → α0 , β = b 1 , . . . , b k → β0 .
The notation a 1 , a 2 , . . . , a k → α0 stands for a 1 → (a 2 → . . . → (a k → α0 ) . . .).
Remark. In case k = 0, we understand that the condition means β ≤0 α.
Proof of the proposition: we still use the notation R 0 for the preorder ≤. We define a binary relation R on D by:
β R α ⇔ there exist k ≥ 0, a 1 , . . . , a k , b 1 , . . . , b k ∈ D∗ and α0 , β0 ∈ D, such that a i R 0∗ b i (1 ≤ i ≤ k), β0 ≤0 α0 and α = a 1 , . . . , a k → α0 , β = b 1 , . . . , b k → β0 .
We first prove that R is a preorder. Let α, β, γ ∈ D be such that β R α and γ R β. Thus we have:
α = a 1 , . . . , a k → α0 , β = b 1 , . . . , b k → β0 , with a i R 0∗ b i , β0 ≤0 α0 , and
β = b′1 , . . . , b′l → β′0 , γ = c 1 , . . . , c l → γ0 , with b′j R 0∗ c j , γ0 ≤0 β′0 .
If l ≥ k, then (using both expressions for β, and the fact that i is oneone):
b 1 = b′1 , . . . , b k = b′k and β0 = b′k+1 , . . . , b′l → β′0 .
Since β0 ≤0 α0 , we have, by the hypothesis on ≤0 and the fact that i is onto:
α0 = a′k+1 , . . . , a′l → α′0 with β′0 ≤0 α′0 , and a′i ≤0 b′i (k + 1 ≤ i ≤ l); therefore γ0 ≤0 α′0 and a′i R 0∗ c i for k + 1 ≤ i ≤ l.
Thus α = a 1 , . . . , a k , a′k+1 , . . . , a′l → α′0 and γ = c 1 , . . . , c k , c k+1 , . . . , c l → γ0 .
Now b′i R 0∗ c i (1 ≤ i ≤ l) and a i R 0∗ b i , and thus a i R 0∗ c i for 1 ≤ i ≤ k (since b′i = b i ). It follows that γ R α. The proof is similar in case k ≥ l.
We now prove that R ⊂ R 0 ; if β R α, then we have:
α = a 1 , . . . , a k → α0 , β = b 1 , . . . , b k → β0 , with a i R 0∗ b i (1 ≤ i ≤ k) and β0 ≤0 α0 .
We prove β R0 α by induction on k: this is obvious when k = 0. Assume the result for k − 1; then β″ R0 α″, where α″ = a2, ..., ak → α0 and β″ = b2, ..., bk → β0. Now α = a1 → α″, β = b1 → β″, and a1 R0* b1, β″ R0 α″; thus β R0 α.
Finally, we prove that R satisfies condition (C): let a, b ∈ D*, α, β ∈ D be such that a R* b and β R α. Since R ⊂ R0, it follows that a R0* b. Now β R α, and therefore, by definition of R: α = a1, ..., ak → α0, β = b1, ..., bk → β0, with ai R0* bi (1 ≤ i ≤ k) and β0 ≤0 α0. Thus: a → α = a, a1, ..., ak → α0 and b → β = b, b1, ..., bk → β0. Now a R0* b, and hence (b → β) R (a → α).
Since R ⊂ R0 and R satisfies (C), we see that R0 = R: this is the expected conclusion.
Q.E.D.
Models over a set of atoms (Scott's model D∞)
Let A be a finite or countable nonempty set, the elements of which will be called atoms. It is convenient to assume that no element of A is an ordered pair. We define, inductively, a set D of formulas, and a one-to-one function i : D* × D → D (i(a, α) will also be denoted by a → α):
• Every atom α is a formula. Let α be a formula and a a finite set of formulas; then:
• If α ∈ A (α is an atom) and a = ∅, then we take ∅ → α = i(∅, α) = α.
• Otherwise, the ordered pair (a, α) is a formula, and we take: a → α = i(a, α) = (a, α).
It follows that the atoms are the only formulas which are not ordered pairs. Clearly, i is onto; it is also one-to-one: if a → α = b → β, then either the formula a → α is an atom, and then a = b = ∅ and α = β, or it is not an atom, and then (a, α) = (b, β).
Every formula α can be written in the form α = a1, ..., ak → α0, where k ≥ 0, α0 ∈ A, ai ∈ D*. This expression is unique if we impose ak ≠ ∅, or k = 0. Thus the other possible expressions for α are: α = a1, ..., ak, ∅, ..., ∅ → α0.
The rank of a formula α, denoted by rk(α), is now defined by induction: rk(α) = 0 whenever α is an atom; rk(a → α) = 1 + sup(rk(α), sup{rk(ξ) ; ξ ∈ a}) if a ≠ ∅ or α is not an atom.
We consider a preorder on A, denoted by ≤. We extend it to the whole set D by defining β ≤ α by induction on rk(α) + rk(β), as follows:
If α, β ∈ A, then β ≤ α is already defined.
If rk(α) + rk(β) ≥ 1, then we write α = a → α0, β = b → β0, and we put: β ≤ α ⇔ β0 ≤ α0 and b ≥ a (every element of a is smaller than some element of b).
Note that rk(α0) + rk(β0) < rk(α) + rk(β); also b ≥ a is already defined: indeed, if α′ ∈ a and β′ ∈ b, then rk(α′) + rk(β′) < rk(α) + rk(β).
From this definition of the preorder ≤ on D, it follows that: b → β ≤ a → α ⇔ b ≥ a and β ≤ α; this shows that we have defined an extensional β-model of λ-calculus (theorem 7.13).
Remark. This model could be obtained by using theorem 7.14: define α ≤0 β, for α, β ∈ D, by: α = β or (α, β ∈ A and α ≤ β). It is easy to check that ≤0 is a preorder and that (b → β) ≤0 (a → α) ⇒ a ≤0 b and β ≤0 α.
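The inductive construction of formulas and of their ranks can be sketched concretely. The following Python fragment is only an illustration of the definitions above, with invented names (`arrow`, `rank`); atoms are represented by strings and a non-atomic formula a → α by an ordered pair:

```python
# Hypothetical sketch of the set D of formulas over a set A of atoms:
# atoms are strings, a non-atomic formula a -> alpha is the pair (a, alpha)
# with a a frozenset of formulas, and i(emptyset, atom) collapses to the
# atom itself, exactly as in the definition above.

def arrow(a, alpha):
    """i(a, alpha), also written a -> alpha."""
    if not a and isinstance(alpha, str):      # a = empty set, alpha an atom
        return alpha                          # then (empty -> alpha) = alpha
    return (frozenset(a), alpha)

def rank(alpha):
    """rk(alpha) = 0 on atoms, else 1 + sup of the ranks of the components."""
    if isinstance(alpha, str):
        return 0
    a, alpha0 = alpha
    return 1 + max([rank(alpha0)] + [rank(xi) for xi in a])
```

For instance, `arrow(set(), 'o')` is just the atom `'o'`, while `arrow({'o'}, 'o')` is a formula of rank 1; the collapse clause is what makes every formula end in an atom.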
4. Applications
In this section, we use the models over a set A of atoms defined page 124, taking the trivial preorder on A (α ≤ β ⇔ α = β). In that case, the atoms are the maximal elements; among the upper bounds of a given formula a1, ..., ak → α0 (α0 ∈ A), there is one and only one atom, which is α0. Let α, β ∈ D; then α and β are not ≤-comparable unless there is an atom greater than both α and β.
If α = a1, ..., ak → α0 with α0 ∈ A, k ≥ 0, ak ≠ ∅, then α′ ≤ α if and only if α′ = a′1, ..., a′l → α0, with l ≥ k and a1 ≤ a′1, ..., ak ≤ a′k.
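With the trivial preorder on atoms, the recursive extension of ≤ to all formulas can be written down directly. A hedged sketch (atoms as strings, a non-atomic formula a → α0 as a pair of a frozenset of formulas and a formula; the name `leq` is illustrative):

```python
# Illustrative recursion for the preorder on formulas, with the trivial
# preorder on the atoms (alpha <= beta iff alpha == beta on atoms).

def leq(beta, alpha):
    """beta <= alpha on formulas."""
    if isinstance(alpha, str) and isinstance(beta, str):
        return alpha == beta                  # trivial preorder on atoms
    # view an atom as (empty -> atom), then compare componentwise:
    a, alpha0 = alpha if isinstance(alpha, tuple) else (frozenset(), alpha)
    b, beta0 = beta if isinstance(beta, tuple) else (frozenset(), beta)
    # b >= a : every element of a is smaller than some element of b
    return leq(beta0, alpha0) and \
        all(any(leq(x, y) for y in b) for x in a)
```

In particular, for a formula φ = {o} → o one checks that `leq(phi, 'o')` holds while `leq('o', phi)` does not, in line with the remark that the atom α0 is the unique atom above a1, ..., ak → α0.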
i) Embeddings of applicative structures
Theorem 7.16. Every applicative structure may be embedded in a model of ECL (extensional combinatory logic, see page 99).
Let A be an applicative structure (that is to say, a set together with a binary function). We will assume that A is countable (the results below may be extended to the case where A is uncountable, by means of the compactness theorem of predicate calculus). We consider the functional β-model constructed as above (page 124), with A as the set of atoms. We define j : A → D and J : A → S(D) by taking, for every α ∈ A:
j(α) = {o} → α, where o is some fixed element of A;
J(α) = {δ ∈ D ; (∃k ≥ 0)(∃α1, ..., αk ∈ A) δ ≤ {j(α1)}, ..., {j(αk)} → j(αα1 ... αk)}.
Note that, if α, α1, ..., αk ∈ A, then αα1 ... αk ∈ A (A is an applicative structure). For every α ∈ A, J(α) is clearly an initial segment of D.
We have seen that D = S(D) is a model of ECL. Now we prove that J is the desired embedding of A into this model.
J is one-to-one: indeed, we have j(α) ∈ J(α) for every α ∈ A (take k = 0 in the definition of J(α)). Therefore, if J(α) = J(α′), then j(α′) ∈ J(α), that is: {o} → α′ ≤ {j(α1)}, ..., {j(αk)}, {o} → αα1 ... αk. Now, since α′ and αα1 ... αk are atoms, we have necessarily k = 0 and α′ = α.
(J(α))J(α′) ⊂ J(αα′): let ξ ∈ (J(α))J(α′). By theorem 7.13, there exists d ⊂ J(α′) such that d → ξ ∈ J(α), that is: d → ξ ≤ {j(α1)}, ..., {j(αk)} → j(αα1 ... αk). If k = 0, then d → ξ ≤ {o} → α; then o ∈ d̄, which is impossible because o ∉ J(α′). If k ≥ 1, then j(α1) ∈ d̄, thus j(α1) ∈ J(α′), hence α1 = α′ (see above). Thus ξ ≤ {j(α2)}, ..., {j(αk)} → j(αα′α2 ... αk), and therefore ξ ∈ J(αα′).
J(αα′) ⊂ (J(α))J(α′): if ξ ∈ J(αα′), then ξ ≤ {j(α1)}, ..., {j(αk)} → j(αα′α1 ... αk). Let d = {j(α′)}; then d ⊂ J(α′). Moreover: d → ξ ≤ {j(α′)}, {j(α1)}, ..., {j(αk)} → j(αα′α1 ... αk). Therefore d → ξ ∈ J(α), and it follows that ξ ∈ (J(α))J(α′).
Q.E.D.
ii) Extensional combinatory logic with couple
Let L be the language of combinatory logic (see chapter 6), with additional constant symbols c, p1, p2. The term (c)xy is called the couple (or ordered pair) of x and y; the term (p1)x (resp. (p2)x) is called the first (resp. the second) projection of x. We denote by ECLC (for extensional combinatory logic with couple) the following system of axioms, which is an extension of ECL (extensional combinatory logic, see page 99):
ECL; (p1)(c)xy = x; (p2)(c)xy = y; ((c)(p1)x)(p2)x = x;
(p1)xy = (p1)(x)y; (p2)xy = (p2)(x)y.
The first three axioms mean that x (resp. y) is the first (resp. the second) projection of the couple (c)xy, and that each x is identical to the couple formed by p1x, p2x. The last two axioms mean that, for every x, the function defined by p1x (resp. p2x) is p1∘x (resp. p2∘x).
As a consequence of these axioms, we have: (c)xyz = ((c)(x)z)(y)z. Indeed, according to the third axiom, it is sufficient to prove both: (p1)(c)xyz = (p1)((c)(x)z)(y)z and (p2)(c)xyz = (p2)((c)(x)z)(y)z. Now we have: (p1)(c)xyz = ((p1)(c)xy)z (4th axiom) = (x)z (1st axiom)
= (p1)((c)(x)z)(y)z (1st axiom). Same proof for p2.
We also deduce: p1c = 1, p2c = 0, (c)p1p2 = I. Indeed, (p1c)xy = ((p1)(c)x)y (4th axiom) = (p1)((c)x)y (4th axiom) = x (1st axiom), and hence p1c = 1 by extensionality. Moreover, (cp1p2)x = ((c)(p1)x)(p2)x (see above) = x (3rd axiom), thus cp1p2 = I by extensionality.
Theorem 7.17. ECLC has a non-trivial model (that is, a model of cardinality > 1).
Consider an infinite countable set of atoms A, with the trivial preorder. Let A = A1 ∪ A2 be some partition of A in two infinite subsets. Let Di (i = 1, 2) be the set of lower bounds in D of the elements of Ai. Then D = D1 ∪ D2 is a partition of D in two initial segments.
Let ϕ1 : A → A1, ϕ2 : A → A2 be two one-to-one mappings; they can be extended to isomorphisms of ordered sets from D onto D1, D2: whenever α = a1, ..., ak → α0 (α0 ∈ A), take ϕ1(α) = a1, ..., ak → ϕ1(α0) and ϕ2(α) = a1, ..., ak → ϕ2(α0).
Let D = S(D); the function ϕ1⁻¹ : P(D) → P(D) maps S(D) into S(D), since ϕ1 is an isomorphism from D onto D1. Now this function is clearly σ-c.i., so there exists p1 ∈ D such that (p1)u = ϕ1⁻¹(u) for every u ∈ D. Similarly, there is a p2 ∈ D such that (p2)u = ϕ2⁻¹(u). Also, we may define c ∈ D such that (c)uv = ϕ1(u) ∪ ϕ2(v) for all u, v ∈ D: indeed, since ϕ1 and ϕ2 are isomorphisms of ordered sets, ϕ1(u) ∪ ϕ2(v) is an initial segment of D whenever u, v ∈ D. Thus this function maps D × D into D, and it is σ-c.i.: this yields the existence of c.
We therefore have: (p1)(c)uv = ϕ1⁻¹(ϕ1(u) ∪ ϕ2(v)) = u, and similarly (p2)(c)uv = v. Also, ((c)(p1)u)(p2)u = ϕ1(ϕ1⁻¹u) ∪ ϕ2(ϕ2⁻¹u) = (u ∩ D1) ∪ (u ∩ D2) = u. Thus the first three axioms of ECLC are satisfied in the model under consideration.
Moreover, we have α ∈ (p1u)v ⇔ (∃a ⊂ v)(a → α) ∈ p1u (theorem 7.13); now, by definition of p1u, we have α ∈ (p1u)v ⇔ (∃a ⊂ v) ϕ1(a → α) ∈ u; on the other hand, ϕ1(a → α) = a → ϕ1(α) by definition of ϕ1, and hence: α ∈ (p1u)v ⇔ (∃a ⊂ v) a → ϕ1(α) ∈ u; therefore we obtain α ∈ (p1u)v ⇔ ϕ1(α) ∈ (u)v, i.e. α ∈ (p1u)v ⇔ α ∈ ϕ1⁻¹((u)v), and finally (p1)uv = (p1)(u)v. This proves the last two axioms.
Q.E.D.
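The first couple axioms can also be illustrated outside the model, with the standard Church encoding of pairs in the untyped λ-calculus. This is only an analogy: the c, p1, p2 of the theorem are constructed inside the model, and surjective pairing ((c)(p1)x)(p2)x = x does not hold for the Church encoding on arbitrary terms, only pointwise on terms that are couples.

```python
# Church encoding of couples (an illustration of the couple axioms;
# not the model-theoretic c, p1, p2 built in the proof above).
c  = lambda x: lambda y: lambda f: f(x)(y)   # (c)xy, the couple of x and y
p1 = lambda q: q(lambda x: lambda y: x)      # first projection
p2 = lambda q: q(lambda x: lambda y: y)      # second projection
```

Here `p1(c(x)(y))` evaluates to `x` and `p2(c(x)(y))` to `y`; and for a couple q, the terms `c(p1(q))(p2(q))` and `q` agree on every argument, which is the pointwise form of the third axiom.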
We now give a set of equational formulas, denoted by ECLC=, which axiomatizes the universal consequences of ECLC:
ECL= (a set of equations which axiomatize the universal consequences of ECL, see chapter 6, page 104);
λxλy(p1)xy = λxλy x; λxλy(p2)xy = λxλy y; λx((c)(p1)x)(p2)x = λx x;
λxλy(p1)xy = λxλy(p1)(x)y; λxλy(p2)xy = λxλy(p2)(x)y.
Clearly, these formulas are universal consequences of ECLC. Conversely, let M be a model of these formulas: since M satisfies ECL=, it can be embedded in a model of ECL, which satisfies the last five axioms (these are equations involving closed terms: since they hold in M, they also hold in any extension of M). Thus M is embedded in a model of ECLC, and therefore it satisfies all universal consequences of ECLC.
Theorem 7.18. ECLC is not equivalent to a system of universal axioms.
It follows that neither CL nor ECL is equivalent to a system of universal axioms, since ECLC is obtained by adding universal axioms either to CL or to ECL.
Proof: it suffices to exhibit a submodel of the above model of ECLC in which the extensionality axiom fails.
With each formula α ∈ D, we associate a value |α| ∈ {0, 1}, defined by induction on the rank of α, as follows: if α is an atom, then |α| = 0; if rk(α) ≥ 1, say α = a → β, then we define |a| = inf{|γ| ; γ ∈ a} (note that |γ| is already defined, since rk(γ) < rk(α); also, if a = ∅, then |a| = 1). Then we take |α| = |a| → |β|, where ε → ε′ is defined in the usual way for ε, ε′ ∈ {0, 1} (|β| is already defined, since rk(β) < rk(α)).
For every subset u of D (in particular for u ∈ D), we define |u| = inf{|α| ; α ∈ u}.
Lemma 7.19. If α, β ∈ D and α ≤ β, then |α| ≥ |β|.
The proof is by induction on rk(α) + rk(β). If α, β are atoms, then α ≤ β ⇒ α = β. Otherwise, we have α = a → α0, β = b → β0. Since α ≤ β, we have a ≥ b and α0 ≤ β0. Suppose |α| < |β|, that is |α| = 0 and |β| = 1; thus |a| = 1 and |α0| = 0. Since a ≥ b, every element of b is smaller than some element of a; therefore |b| = 1 (if b = ∅, this is obvious; if b ≠ ∅, it follows from the induction hypothesis).
Since |β| = |b → β0| = 1, it follows that |β0| = 1; since α0 ≤ β0, we have, by the induction hypothesis, |α0| ≥ |β0|, and hence |α0| = 1, which is a contradiction.
Q.E.D.
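The valuation α ↦ |α| is a simple recursion on the rank. The following sketch is illustrative only (atoms as strings, a non-atomic formula a → β as a pair of a frozenset of formulas and a formula):

```python
# |alpha| in {0, 1}: 0 on atoms; |a -> beta| = (|a| -> |beta|), where
# |a| = inf of the values of the elements of a (1 when a is empty) and
# -> on {0, 1} is classical implication. Representation is illustrative.

def val(alpha):
    if isinstance(alpha, str):                        # atoms have value 0
        return 0
    a, beta = alpha
    va = min([val(x) for x in a], default=1)          # |a|, with |empty| = 1
    return 1 if (va == 0 or val(beta) == 1) else 0    # |a| -> |beta|
```

For an atom o, the formula {o} → o gets value (0 → 0) = 1; a formula whose premise already has value 1 and whose conclusion is an atom gets value (1 → 0) = 0.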
Lemma 7.20. Let u ∈ D. Then |u| = 1 if and only if |(u)v| = 1 for every v ∈ D such that |v| = 1.
Let u, v ∈ D be such that |u| = |v| = 1; we prove that |(u)v| = 1: if α ∈ (u)v, then a ⊂ v and a → α ∈ u for some a; thus |a| = |a → α| = 1, and therefore |α| = 1, by definition of |a → α|.
Conversely, suppose that |u| = 0; then there exists α ∈ u such that |α| = 0. Since i is onto, we have α = b → β for some b ∈ D* and β ∈ D. Thus |b| = 1 and |β| = 0. Let v ∈ D be the set of lower bounds of the elements of b. By lemma 7.19, we have |v| = 1; now β ∈ (u)v, since b ⊂ v and b → β ∈ u. Since |β| = 0, we have |(u)v| = 0.
Q.E.D.
Lemma 7.21. Let u ∈ D and k ∈ N. Then |u| = 1 if and only if |(u)v1 ... vk| = 1 for all v1, ..., vk ∈ D such that |v1| = ... = |vk| = 1.
This follows immediately from lemma 7.20, by induction on k.
Q.E.D.
Lemma 7.22. |K| = |S| = |p1| = |p2| = |c| = 1.
The considered model satisfies ECL, and therefore the axiom (K)xy = x. Thus (K)uv = u for all u, v ∈ D. Hence |u| = |v| = 1 ⇒ |(K)uv| = 1; therefore |K| = 1, by lemma 7.21. Similarly, we have (S)uvw = ((u)w)(v)w for all u, v, w ∈ D. If |u| = |v| = |w| = 1, then |((u)w)(v)w| = 1 by lemma 7.20, and hence |(S)uvw| = 1. Therefore |S| = 1 (lemma 7.21).
Note that, for every formula α ∈ D, we have |α| = |ϕ1(α)| = |ϕ2(α)|: this is immediate from the definition of ϕ1, ϕ2, by induction on rk(α). Now, by definition of p1, we have α ∈ (p1)u ⇔ ϕ1(α) ∈ u, for every u ∈ D. Therefore, if |u| = 1, then |α| = 1 for every α ∈ (p1)u, and hence |(p1)u| = 1. It follows that |p1| = 1 (lemma 7.21). Similarly, |p2| = 1.
Finally, for every formula α ∈ D and all u, v ∈ D, we have α ∈ (c)uv ⇔ α ∈ ϕ1(u) or α ∈ ϕ2(v). If |u| = |v| = 1, then |ϕ1(u)| = |ϕ2(v)| = 1, and hence |α| = 1 for every α ∈ (c)uv; thus |(c)uv| = 1, and therefore |c| = 1 by lemma 7.21.
Q.E.D.
It follows that |t| = 1 for every closed term t.
Let D0 = {α ∈ D ; |α| = 1}; by lemma 7.19, D0 is an initial segment of D. Then we define 𝒟0 = {u ∈ D ; |u| = 1}; so 𝒟0 is the set of initial segments of D0. By lemma 7.20, 𝒟0 is closed under Ap; by lemma 7.22, it contains K, S, p1, p2, c. Thus it is a submodel of D = S(D). We will see that 𝒟0 is the desired submodel.
We define a mapping ϕ : D → D by taking ϕ(u) = u ∩ D0 for every u ∈ D. Clearly, ϕ is σ-c.i.; let f = λx ϕ(x) ∈ D; therefore (f)u = u ∩ D0 for every u ∈ D.
If I = λx x, then (f)u = (I)u = u for every u ∈ 𝒟0. By lemma 7.20, it follows that |f| = |I| = 1, and hence f, I ∈ 𝒟0. Now D ∈ S(D) (the whole set D is an initial segment), and (f)D = D0 ≠ D = (I)D (indeed, D0 contains no atom). Thus f ≠ I, and therefore 𝒟0 does not satisfy the extensionality axiom.
Q.E.D.
In fact, 𝒟0 does not even satisfy the formula ∀a(∀x(ax = x) → a = I). Therefore, we have proved the following strengthening of theorem 7.18:
Theorem 7.23. The set of universal consequences of ECLC (and also, a fortiori, of ECL) does not imply the formula ∀a(∀x(ax = x) → a = I).
Recall that the set of universal consequences of ECLC (resp. ECL) is equivalent to the set of equations ECLC= (resp. ECL=) given above, page 127 (resp. in chapter 6, page 104).
5. Retractions
Let D = S(D) be a β-model of λ-calculus. Given f, g ∈ D, we define: f∘g = λx(f)(g)x ∈ D. Clearly, ∘ is an associative binary operation on D. An element ε ∈ D will be called a retraction if ε∘ε = ε. The image of ε, which will be called a retract and denoted by Im(ε), is the set: {u ∈ D ; (ε)u = u}.
Remark. Since S(D) is a complete lattice and Im(ε) is the set of fixed points of ε (considered as a σ-c.i. function from D to D), we see that every retract is a subset of S(D) which is a complete lattice; this follows from a theorem due to Tarski, which states that the set of fixed points of a monotone function on a complete lattice is a complete lattice [Tar55].
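In set-theoretic terms, a retraction is just an idempotent map, and the retract is its set of fixed points, which coincides with its image. A tiny finite illustration (the map and the domain are invented for the example; they stand in for a σ-c.i. ε on S(D)):

```python
# An idempotent map eps on the subsets of {0, 1, 2}: its image (the
# retract) coincides with its set of fixed points, as for retractions.
eps = lambda u: u & {0, 2}          # keep the even elements; eps o eps = eps

subsets = [frozenset(s) for s in
           [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]
image = {frozenset(eps(u)) for u in subsets}
fixed = {u for u in subsets if eps(u) == u}
```

Enumerating `subsets` confirms both idempotence and the equality `image == fixed`, which is the finite shadow of Im(ε) = {u ; (ε)u = u}.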
For every retraction ε, the retract Im(ε) is a σ-complete subspace of D: let un (n ∈ N) be an increasing sequence in Im(ε), and u = ∪n un; then u ∈ Im(ε) (indeed, we have (ε)u = u, since ε defines a σ-c.i. function on D). Moreover, it is easy to prove that, if εn (n ∈ N) is an increasing sequence of retractions, then ε = ∪n εn is also a retraction (indeed, (f, g) ↦ f∘g is a σ-c.i. function on D × D).
Proposition 7.24. If ε, ε′ are retractions, then so are ε × ε′ = λxλf((f)(ε)(x)1)(ε′)(x)0 and ε;ε′ = λyλx(ε′)(y)(ε)x = λy ε′∘y∘ε.
Indeed, we have (ε × ε′)(ε × ε′)u = λf[(f)(ε)(ε × ε′)u1](ε′)(ε × ε′)u0. Now (ε × ε′)u1 = (ε)(u)1 and (ε × ε′)u0 = (ε′)(u)0; thus (ε × ε′)(ε × ε′)u = (ε × ε′)u
for every u ∈ D. Therefore, (ε × ε′)∘(ε × ε′) = λx(ε × ε′)(ε × ε′)x = λx(ε × ε′)x. Now λx(ε × ε′)x = ε × ε′, by definition of ε × ε′.
On the other hand, we have, for every v ∈ D: (ε;ε′)(ε;ε′)v = λx(ε′)((ε;ε′)v)(ε)x; now, for every u ∈ D: ((ε;ε′)v)(ε)u = (ε′)(v)(ε)(ε)u = (ε′)(v)(ε)u. Thus (ε;ε′)(ε;ε′)v = λx(ε′)(ε′)(v)(ε)x = λx(ε′)(v)(ε)x = (ε;ε′)v for every v ∈ D. It follows that: (ε;ε′)∘(ε;ε′) = λy(ε;ε′)(ε;ε′)y = λy(ε;ε′)y. Now, by definition of ε;ε′, we have λy(ε;ε′)y = ε;ε′.
Q.E.D.
The retract Im(ε × ε′) is the set of all "couples" λf(f)aa′ such that a ∈ Im(ε) and a′ ∈ Im(ε′).
Proposition 7.25. The retract Im(ε;ε′) is canonically isomorphic with the space C(Im(ε), Im(ε′)) of σ-c.i. functions from Im(ε) to Im(ε′).
We now define two σ-c.i. functions: F : Im(ε;ε′) → C(Im(ε), Im(ε′)) and G : C(Im(ε), Im(ε′)) → Im(ε;ε′).
Whenever a ∈ Im(ε;ε′), F(a) is the σ-c.i. function defined on Im(ε) by: F(a)(u) = au. We do have au ∈ Im(ε′), since a = (ε;ε′)a = ε′∘a∘ε and hence au = (ε′)(a)(ε)u. Clearly, F is σ-c.i.
Whenever ϕ ∈ C(Im(ε), Im(ε′)), we define ψ ∈ C(D, D) by taking ψ(x) = ϕ(εx). Then we put aϕ = λx ψ(x) and G(ϕ) = ε′∘aϕ∘ε. Thus G(ϕ) = (ε;ε′)aϕ, and hence G(ϕ) ∈ Im(ε;ε′). Moreover, G is σ-c.i., since it is obtained by composing σ-c.i. functions.
We now prove that F and G are isomorphisms, each of them being the inverse of the other.
G(F(a)) = a for every a ∈ Im(ε;ε′): let F(a) = ϕ; then G(F(a)) = ε′∘aϕ∘ε. Now aϕ = λx ϕ(εx) = λx(a)(ε)x; on the other hand, a = ε′∘a∘ε, since a ∈ Im(ε;ε′); thus (a)(ε)x = (ε′)(a)(ε)x. It follows that aϕ = λx(ε′)(a)(ε)x = ε′∘a∘ε = a. Therefore G(F(a)) = ε′∘a∘ε = a.
F(G(ϕ)) = ϕ for every ϕ ∈ C(Im(ε), Im(ε′)): let u ∈ Im(ε). We have G(ϕ) = ε′∘aϕ∘ε, thus: F(G(ϕ))(u) = (ε′∘aϕ∘ε)u = (ε′)(aϕ)(ε)u = (ε′)(aϕ)u, since (ε)u = u. Now (aϕ)u = ϕ(εu) (by definition of aϕ) = ϕ(u), and (ε′)(aϕ)u = (ε′)ϕ(u) = ϕ(u), since ϕ(u) ∈ Im(ε′). Thus F(G(ϕ))(u) = ϕ(u) for every u ∈ Im(ε), and therefore F(G(ϕ)) = ϕ.
Q.E.D.
Extensional β-model constructed from a retraction
Let ε be a retraction ≠ ∅, such that ε = ε;ε; take 𝒟0 = Im(ε) and F0 = C(𝒟0, 𝒟0). We shall define an extensional β-model by applying proposition 7.10. We first notice that a, b ∈ 𝒟0 ⇒ (a)b ∈ 𝒟0. Indeed, (ε)ab = (ε;ε)ab = (ε)(a)(ε)b; since (ε)a = a and (ε)b = b, it follows that ab = (ε)(a)b, and hence ab ∈ 𝒟0.
We define F : 𝒟0 → F0 and G : F0 → 𝒟0 as in the proof of proposition 7.25, with ε = ε′ = ε;ε′. We have 𝒟0 = Im(ε;ε) and F0 = C(Im(ε), Im(ε)). Thus F(a)(b) = (a)b for all a, b ∈ 𝒟0, and G(ϕ) = ε∘aϕ∘ε = (ε)aϕ, where aϕ = λx ϕ(εx). We have seen that F∘G is the identity function on C(𝒟0, 𝒟0) and that G∘F is the identity function on 𝒟0. Thus, by proposition 7.10, we have an extensional β-model of λ-calculus.
In order to obtain a retraction ε with the required properties, it is enough to have a retraction ε0 ≠ ∅ such that ε0 ⊂ (ε0;ε0). Indeed, if F = λz(z;z) = λzλyλx(z)(y)(z)x, then ε0 ⊂ (F)ε0; then we define a sequence εn of retractions by taking εn+1 = εn;εn = (F)εn. This is an increasing sequence (easy proof, by induction on n). Let ε = ∪n εn; then ε is a retraction ≠ ∅, and ε;ε = (F)ε = ∪n(F)εn = ∪n εn+1 = ε.
Example. Obviously, I = λx x is a retraction; we have I;I = λyλx(I)(y)(I)x, that is I;I = λyλx(y)x. Consider a non-extensional model D = S(D) (so that I ≠ I;I), in which the mapping i : D* × D → D is onto (for instance, the model P(ω) defined above, page 121). Then, by lemma 7.12(2), we have u ⊂ λx(u)x for every u ∈ D. Thus λy y ≤ λyλx(y)x (since ϕ ≤ ψ ⇒ λyϕ(y) ≤ λyψ(y) whenever ϕ, ψ ∈ C(D, D)). Therefore I ≤ I;I; this provides a retraction ε ≥ I such that ε = ε;ε. Thus Im(ε) is an extensional β-model of λ-calculus.
Models over a set of atoms
We consider an extensional model D = S(D) constructed over a set A of atoms (see page 124). Let ε0 be the initial segment of D generated by the set {{α} → α ; α ∈ A}. If β ∈ D and u ∈ S(D), then: β ∈ (ε0)u ⇔ (∃b ⊂ u) b → β ∈ ε0 ⇔ (∃b ⊂ u, α ∈ A) β ≤ α ∈ b̄ ⇔ (∃α ∈ A ∩ u) β ≤ α. It follows that (ε0)u is the initial segment of D generated by A ∩ u.
Let α ∈ A; then α ∈ (ε0)(ε0)u ⇔ α ∈ (ε0)u ⇔ α ∈ u. It follows that (ε0)(ε0)u = (ε0)u, and hence ε0 is a retraction. The retract Im(ε0) is the set of all initial segments of D generated by the subsets of A; this is a complete lattice, which is isomorphic with the power set P(A).
Let ε1 = ε0;ε0; we wish to prove that ε0 ⊂ ε1. To do so, it suffices to show that {α} → α ∈ ε1 for every α ∈ A. Let a = {α}; then α ∈ (ε0)ā (since {α} → α ∈ ε0) and α ∈ (ā)∅ (since α = ∅ → α); thus α ∈ (ā)(ε0)∅. Finally, we have α ∈ (ε0)(ā)(ε0)∅; now since ε1 = λyλx(ε0)(y)(ε0)x, we conclude that {α}, ∅ → α ∈ ε1, that is to say {α} → α ∈ ε1.
Now, consider the increasing sequence εn of retractions and the retraction ε = ∪n εn defined above. We therefore have ε = ε;ε. Clearly, (ε0)u ⊂ u for every u ∈ S(D), thus ε0 ≤ I = λx x (case of extensional models in proposition 7.10). We prove, by induction on n, that εn ≤ I for every n ∈ N: indeed, by the induction hypothesis, εn ≤ I; thus εn+1 = εn;εn ≤ I;I = λyλx(y)x = I, since D is an extensional model. Therefore εn+1 ≤ I. It follows that ε ≤ λx x.
a = (a); (since α = ; → α) ; thus a = (a)(²0 );. Finally, we have α ∈ (²0 )(a)(²0 ); ; now since ²1 = λyλx(²0 )(y)(²0 )x, we conclude that {α}, ; → α ∈ ²1 , that is to say {α} → α ∈ ²1 . Now, consider the increasing sequence ²n of retractions and the retraction : ² = ∪n ²n defined above. We therefore have ² = ²;². Clearly, (²0 )u ⊂ u for every u ∈ D, thus ²0 ≤ I = λx x (case of extensional models in proposition 7.10). We prove, by induction on n, that ²n ≤ I for every n ∈ N : indeed, by induction hypothesis, ²n ≤ I ; thus ²n+1 = ²n ;²n ≤ I ; I = λyλx(y)x = I since D is an extensional model. Therefore ²n+1 ≤ I . It follows that ² ≤ λx x. Lemma 7.26. i) If α ∈ D and r k(α) ≤ n, then ({α} → α) ∈ ²n ; ii) ² = λx x. i) The proof is by induction on n. If r k(α) = 0, then α ∈ A, and hence {α} → α ∈ ²0 . Now let α ∈ D be such that r k(α) = n + 1 ; we may write α = b → β, and put a = {α}. We have b = {β1 , . . . , βk } ; by induction hypothesis, {βi } → βi ∈ ²n for 1 ≤ i ≤ k ; it follows that (²n )b¯ ⊃ b, and hence (²n )b¯ ⊃ b¯ ; since ²n ≤ λx x, we ¯ Now, clearly, β ∈ (a) ¯ since b → β ∈ a. By induction hypothesis, ¯ b, have (²n )b¯ = b. ¯ ¯ Therefore : ¯ b = (²n )(a)(² ¯ n )b. {β} → β ∈ ²n , thus β ∈ (²n )(a) (a, b → β) ∈ λyλx(²n )(y)(²n )x = ²n ;²n = ²n+1 . Now a, b → β = {α} → α ; this completes the inductive proof. ii) Since λx x is the initial segment of D generated by the elements of the form {α} → α, where α ∈ D, we have ² ⊃ λx x, and therefore ² = λx x. Q.E.D.
Lemma 7.27. εn∘εm = (εn+1)εm = εp, where p = inf(m, n).
If n ≥ m, then (εn)(εm)u ≥ (εm)(εm)u = (εm)u, since εn ≥ εm. Now εn ≤ λx x, so we have (εn)(εm)u = (εm)u. Thus, by extensionality, εn∘εm = εm. The case n ≤ m is similar.
Now εn+1 = εn;εn, and hence (εn+1)(εm)u = (εn)(εm)(εn)u; we have just seen that the latter is equal to (εm)u if n ≥ m, and to (εn)u if n ≤ m. The result follows, by extensionality.
Q.E.D.
Let 𝒟n = Im(εn) ⊂ S(D). By lemma 7.27, we have m ≥ n ⇒ (εm)(εn)u = (εn)u. Thus 𝒟n is an increasing sequence of σ-complete ordered sets (since they are retracts). 𝒟0 is isomorphic with P(A), and 𝒟n+1 is isomorphic with C(𝒟n, 𝒟n). For every u ∈ S(D), let un = (εn)u. Then (un) is an increasing sequence, un ∈ 𝒟n, and supn un = u (we have supn εn = λx x by lemma 7.26).
Thus we have a structure in the model which is similar to that of Scott's model D∞ (see [Bar84], [Hin86]). Now let Dn = {α ∈ D ; rk(α) ≤ n}; then D0 = A, (Dn) is an increasing sequence, and ∪n Dn = D. The next proposition describes the structure of the spaces 𝒟n.
Proposition 7.28. i) D0 = A; Dn+1 (with the preorder induced by D) is isomorphic with Dn* × Dn;
ii) εn is the initial segment of D generated by {{α} → α ; rk(α) ≤ n};
iii) 𝒟n is the set of all initial segments generated by the subsets of Dn; it is isomorphic with S(Dn).
Proof of (i): if b → β and c → γ have rank ≤ n + 1, then b, c ∈ Dn* and β, γ ∈ Dn; moreover, (b → β) ≤ (c → γ) ⇔ b ≥ c and β ≤ γ ⇔ (b, β) ≤ (c, γ) in Dn* × Dn.
We prove (ii) by induction on n. This is obvious when n = 0, by definition of ε0. For all β ∈ D, u ∈ S(D), we have: β ∈ (εn)u ⇔ ∃b ⊂ u, b → β ∈ εn. By induction hypothesis, it follows that: β ∈ (εn)u ⇔ ∃b ⊂ u, ∃α ∈ Dn, b → β ≤ {α} → α ⇔ ∃α ∈ Dn, β ≤ α, α ∈ u (indeed, b → β ≤ {α} → α ⇔ β ≤ α and α ∈ b̄). Therefore, (εn)u is the initial segment generated by Dn ∩ u (which proves part (iii) of the proposition).
Now let β be an arbitrary element of εn+1; we are looking for some α ∈ Dn+1 such that β ≤ {α} → α. We may write β = b, c → γ. Since εn+1 = λyλx(εn)(y)(εn)x, we have γ ∈ (εn)(b̄)(εn)c̄. Let d0 = (εn)c̄, the initial segment generated by Dn ∩ c̄. Then γ ∈ (εn)(b̄)d0, hence γ ≤ δ for some δ ∈ Dn ∩ (b̄)d0. Therefore, there exists d″ ⊂ d0 such that d″ → δ ∈ b̄. Now d″ is finite and d″ ⊂ d0; thus there exists some finite d such that d ⊂ Dn ∩ c̄ and d″ ⊂ d̄. Since γ ≤ δ and d ⊂ c̄, we have c → γ ≤ d → δ; now d → δ ≤ d″ → δ, and hence d → δ ∈ b̄. It follows that b, c → γ ≤ {d → δ}, d → δ. Take α = d → δ; then α ∈ Dn+1 (since d ⊂ Dn and δ ∈ Dn), and b, c → γ ≤ {α} → α. This yields the result, since β = b, c → γ.
Q.E.D.
6. Qualitative domains and stable functions
Let E be a countable set. A subset 𝒟 of P(E) is called a qualitative domain if:
i) for every increasing sequence un ∈ 𝒟 (n ∈ N), we have ∪n un ∈ 𝒟;
ii) if u ∈ 𝒟 and v ⊂ u, then v ∈ 𝒟.
Let 𝒟0 be the set of finite elements of 𝒟. Thus every element of 𝒟 is the union of an increasing sequence of elements of 𝒟0.
We define the web D of 𝒟 to be the union of all elements of 𝒟: D is the least subset of E such that 𝒟 ⊂ P(D). We also have D = {α ∈ E ; {α} ∈ 𝒟}.
Let 𝒟, 𝒟′ be two qualitative domains, and D, D′ their webs. Then 𝒟 × 𝒟′ is a qualitative domain (up to isomorphism), with web D ⊕ D′ (the disjoint union of D and D′, which can be represented by (D × {0}) ∪ (D′ × {1})).
Let 𝒟, 𝒟′ be two qualitative domains. A σ-c.i. function f : 𝒟 → 𝒟′ is said to be stable if and only if: for all u, v, w ∈ 𝒟 such that u, v ⊂ w, we have f(u ∩ v) = f(u) ∩ f(v). We will denote by S(𝒟, 𝒟′) the set of all stable functions from 𝒟 to 𝒟′. Note that a σ-c.i. function f is stable if and only if: u ∪ v ∈ 𝒟 ⇒ f(u ∩ v) ⊃ f(u) ∩ f(v).
Let 𝒟1, ..., 𝒟k, 𝒟 be qualitative domains, and f : 𝒟1 × ... × 𝒟k → 𝒟 a σ-c.i. function. Then f is stable (with respect to the above definition of the qualitative domain 𝒟1 × ... × 𝒟k) if and only if: u1 ∪ v1 ∈ 𝒟1, ..., uk ∪ vk ∈ 𝒟k ⇒ f(u1 ∩ v1, ..., uk ∩ vk) = f(u1, ..., uk) ∩ f(v1, ..., vk). Clearly, every projection function pi : 𝒟1 × ... × 𝒟k → 𝒟i, defined by pi(u1, ..., uk) = ui, is stable.
Proposition 7.29. i) Let fi : 𝒟 → 𝒟i (1 ≤ i ≤ k) be stable functions. Then the function f : 𝒟 → 𝒟1 × ... × 𝒟k, defined by f(u) = (f1(u), ..., fk(u)) for every u ∈ 𝒟, is stable. ii) If f : 𝒟 → 𝒟′ and g : 𝒟′ → 𝒟″ are stable, then so is g∘f.
i) Immediate, by definition of the qualitative domain 𝒟1 × ... × 𝒟k.
ii) If u ∪ v ∈ 𝒟, then f(u ∩ v) = f(u) ∩ f(v); now f(u), f(v) ⊂ f(u ∪ v), and hence g(f(u) ∩ f(v)) = g(f(u)) ∩ g(f(v)). Therefore, g(f(u ∩ v)) = g(f(u)) ∩ g(f(v)).
Q.E.D.
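On a finite domain the stability condition can be checked exhaustively. The classical example separating σ-c.i. from stable is "parallel or": it is monotone but detects 0 or 1 without a least witness. The code below is an illustration with invented names; on this domain every pair u, v is compatible, since the full web belongs to it.

```python
# All subsets of the web {0, 1} form a (finite) qualitative domain.
# por is monotone but not stable; the strict variant is stable.
points = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
por    = lambda u: {9} if (0 in u or 1 in u) else set()    # "parallel or"
strict = lambda u: {9} if (0 in u and 1 in u) else set()   # needs both

def is_stable(f):
    """Check f(u & v) == f(u) & f(v) on all pairs (all compatible here)."""
    return all(f(u & v) == f(u) & f(v) for u in points for v in points)
```

Taking u = {0} and v = {1} shows the failure for `por`: f(u ∩ v) = ∅ while f(u) ∩ f(v) = {9}.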
It follows from this proposition that any composite function obtained from stable functions of several variables is also stable.
Proposition 7.30. Let 𝒟, 𝒟′ be qualitative domains, D, D′ their webs, and f : 𝒟 → 𝒟′ a σ-c.i. function. Then the following conditions are equivalent:
i) f is stable.
ii) If u ∈ 𝒟, α ∈ D′ and α ∈ f(u), then the set {v ⊂ u ; α ∈ f(v)} has a least element v0.
iii) If u ∈ 𝒟, a is a finite subset of D′ and a ⊂ f(u), then {v ⊂ u ; a ⊂ f(v)} has a least element v0.
Moreover, if f is stable, then this least element v0 is a finite set.
It is obvious that (iii) ⇒ (ii). We now prove that (i) ⇒ (iii): let f : 𝒟 → 𝒟′ be a stable function, u ∈ 𝒟, and a a finite subset of f(u). Then there exists a finite subset v of u such that a ⊂ f(v): indeed, u is the union of an increasing sequence (un) of finite sets, and a ⊂ f(u) = ∪n f(un), thus a ⊂ f(un) for some n. On the other hand, if v, w ⊂ u and a ⊂ f(v), f(w), then a ⊂ f(v) ∩ f(w) = f(v ∩ w). Therefore, the least element v0 is the intersection of all finite subsets v ⊂ u such that a ⊂ f(v).
Proof of (ii) ⇒ (i): let f : 𝒟 → 𝒟′ be a σ-c.i. function satisfying condition (ii), and α, u, v be such that u ∪ v ∈ 𝒟 and α ∈ f(u) ∩ f(v). Let v0 be the least element of {w ⊂ u ∪ v ; α ∈ f(w)}. Since u and v are members of this set, we have v0 ⊂ u, v, thus v0 ⊂ u ∩ v. Since α ∈ f(v0), we have α ∈ f(u ∩ v), and therefore f(u) ∩ f(v) ⊂ f(u ∩ v).
Q.E.D.
Let 𝒟, 𝒟′ be qualitative domains, D, D′ their webs, and f : 𝒟 → 𝒟′ a stable function. The trace of f, denoted by tr(f), is a subset of 𝒟0 × D′, defined as follows:
tr(f) = {(a, α) ∈ 𝒟0 × D′ ; α ∈ f(a) and α ∉ f(a′) for every a′ ⊂ a, a′ ≠ a}.
If u ∈ 𝒟 and α ∈ D′, then α ∈ f(u) ⇔ there exists a ∈ 𝒟0 such that a ⊂ u and (a, α) ∈ tr(f). Therefore, a stable function is completely determined by its trace.
We define a binary relation ≺ on S(𝒟, 𝒟′) by putting, for any two stable functions f, g : 𝒟 → 𝒟′: f ≺ g ⇔ f(u) = f(v) ∩ g(u) for all u, v ∈ 𝒟 such that u ⊂ v. This relation is seen to be an order on S(𝒟, 𝒟′), known as the Berry order. Thus, if f ≺ g, then f(u) ⊂ g(u) for every u ∈ 𝒟.
Proposition 7.31. Let f, g be two stable functions from 𝒟 to 𝒟′. Then: f ≺ g ⇔ tr(f) ⊂ tr(g).
Suppose that f ≺ g and (a, α) ∈ tr(f). Then α ∈ f(a) ⊂ g(a), and hence α ∈ g(a). Thus there exists a′ ⊂ a such that (a′, α) ∈ tr(g). Now f(a′) = f(a) ∩ g(a′), so α ∈ f(a′), and hence a′ = a, by definition of tr(f). Thus (a, α) ∈ tr(g), and therefore tr(f) ⊂ tr(g).
Now suppose that tr(f) ⊂ tr(g), and let u, v ∈ 𝒟, u ⊂ v. If α ∈ f(u), then there exists a ⊂ u such that (a, α) ∈ tr(f). Thus (a, α) ∈ tr(g) and α ∈ g(a) ⊂ g(u). Therefore α ∈ f(v) ∩ g(u). Conversely, if α ∈ f(v) ∩ g(u), then there exist a ⊂ u, b ⊂ v, such that: (a, α) ∈ tr(g), (b, α) ∈ tr(f). Thus (a, α), (b, α) ∈ tr(g) and a ∪ b ⊂ v ∈ 𝒟. It follows that a = b, hence (a, α) ∈ tr(f), α ∈ f(a), and therefore α ∈ f(u).
Q.E.D.
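On a finite web, the trace can be computed directly from its definition by enumerating subsets. A sketch with invented names (`trace`, `g`); the minimality clause is exactly the "α ∉ f(a′) for every proper a′ ⊂ a" condition above:

```python
from itertools import chain, combinations

def trace(f, web):
    """tr(f): the pairs (a, alpha) with alpha in f(a) and a minimal such a."""
    finite = [frozenset(s) for s in
              chain.from_iterable(combinations(sorted(web), k)
                                  for k in range(len(web) + 1))]
    return {(a, alpha) for a in finite for alpha in f(a)
            if not any(alpha in f(b) for b in finite if b < a)}

# a stable map detecting 0: its trace records only the minimal witness {0}
g = lambda u: {9} if 0 in u else set()
```

Here `trace(g, {0, 1})` contains the single pair ({0}, 9): the larger witness {0, 1} is discarded by minimality, which is how a stable function is recovered from its trace.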
Proposition 7.32. Let us consider two qualitative domains 𝒟, 𝒟′, and their webs D, D′. Then the set of all traces of stable functions from 𝒟 to 𝒟′ is a qualitative domain with web 𝒟0 × D′.
Let fn (n ∈ N) be a sequence of stable functions such that tr(fn) ⊂ tr(fn+1), and therefore fn ≺ fn+1. Define f : 𝒟 → 𝒟′ by taking f(u) = ∪n fn(u) for every u ∈ 𝒟 (note that fn(u) is an increasing sequence in 𝒟′). Then f is stable: indeed, if u ∪ v ∈ 𝒟, then f(u ∩ v) = ∪n fn(u ∩ v) = ∪n (fn(u) ∩ fn(v)) = f(u) ∩ f(v). Moreover, fn ≺ f: if u ⊂ v, then fn(u) = fn(v) ∩ fp(u) for every p ≥ n, thus fn(u) = fn(v) ∩ ∪p fp(u) = fn(v) ∩ f(u). Therefore ∪n tr(fn) ⊂ tr(f). Conversely, if (a, α) ∈ tr(f), then α ∈ f(a), and therefore there exists an integer n such that α ∈ fn(a). Thus (a′, α) ∈ tr(fn) for some a′ ⊂ a. Since tr(fn) ⊂ tr(f), we have (a′, α) ∈ tr(f), and hence a = a′. Thus (a, α) ∈ tr(fn), and therefore tr(f) ⊂ ∪n tr(fn). Finally, tr(f) = ∪n tr(fn).
Now let f ∈ S(𝒟, 𝒟′) and X ⊂ tr(f). We prove that X is the trace of some stable function g, which we define by putting: α ∈ g(u) ⇔ there exists a ⊂ u such that (a, α) ∈ X. Using proposition 7.30(ii), we prove that g is stable: let α ∈ g(u); then (a, α) ∈ X for some a ⊂ u. If v ⊂ u and α ∈ g(v), then (b, α) ∈ X for some b ⊂ v. Now (a, α), (b, α) ∈ tr(f), and a, b ⊂ u, thus a = b. Hence a ⊂ v, and a is the least element of the set {v ⊂ u ; α ∈ g(v)}.
We have X = tr(g): indeed, if (a, α) ∈ tr(g), then α ∈ g(a), thus (b, α) ∈ X for some b ⊂ a. So α ∈ g(b), and hence a = b, by definition of tr(g). Therefore (a, α) ∈ X. Conversely, if (a, α) ∈ X, then α ∈ g(a), thus (b, α) ∈ tr(g) for some b ⊂ a. It follows that (b, α) ∈ X (see above). Hence (a, α), (b, α) ∈ tr(f), and therefore a = b, and (a, α) ∈ tr(g).
Q.E.D.
In view of the previous proposition, the space S (D, D 0 ) of all stable functions from D to D 0 , equipped with the order ≺, may be identified with a qualitative domain with web D0 × D 0 (note that D0 × D 0 is countable). Proposition 7.33. Let us consider two qualitative domains D, D 0 , and their webs D, D 0 . Then the function Eval : S (D, D 0 ) × D → D 0 , defined by Eval( f , u) = f (u), is stable. Let u, v ∈ D, such that u ∪ v ∈ D, and f , g , h ∈ S (D, D 0 ) such that : tr( f )∪tr(g ) = tr(h). We need to prove f (u)∩g (v) ⊂ k(u ∩v), where k ∈ S (D, D 0 ) is defined by tr(k) = tr( f ) ∩ tr(g ). Let α ∈ f (u), g (v). Then there exist a ⊂ u, b ⊂ v such that : (a, α) ∈ tr( f ), (b, α) ∈ tr(g ). Thus (a, α), (b, α) ∈ tr(h), and a, b ⊂ u ∪ v. It follows that a = b ⊂ u ∩ v and (a, α) ∈ tr( f ) ∩ tr(g ) = tr(k). Thus α ∈ k(u ∩ v). Q.E.D.
Proposition 7.34. Let D, D′, D″ be qualitative domains, and f : D × D′ → D″ a stable function.
Then the function Cur f : D → S(D′, D″), defined by Cur f(u)(u′) = f(u, u′) for all u ∈ D, u′ ∈ D′, is also stable.
Remark. The operation f ↦ Cur f is sometimes called “curryfication”.
We first prove that, if u ⊂ v, then Cur f (u) ≺ Cur f (v) : let u 0 , v 0 ∈ D 0 be such that u 0 ⊂ v 0 ; since f is stable, we have : f (u, v 0 ) ∩ f (v, u 0 ) = f (u ∩ v, u 0 ∩ v 0 ) = f (u, u 0 ). In other words : Cur f (u)(v 0 )∩ Cur f (v)(u 0 ) = Cur f (u)(u 0 ), which is the desired property. Thus Cur f is an increasing function ; it is also σcontinuous : let u n (n ∈ N) be an increasing sequence in D, and u = ∪n u n . We need to prove that Cur f (u)(u 0 ) = ∪n Cur f (u n )(u 0 ) for every u 0 ∈ D 0 , i.e. : f (u, u 0 ) = ∪n f (u n , u 0 ), which is clear, since f is σcontinuous. Finally, we show that Cur f is stable : let u, v ∈ D be such that u ∪ v ∈ D. We have to prove tr(Cur f (u ∩ v)) ⊃ tr(Cur f (u)) ∩ tr(Cur f (v)). Let (a, α) ∈ tr(Cur f (u)) ∩ tr(Cur f (v)) ; we have : α ∈ Cur f (u)(a) = f (u, a) and α ∈ f (v, a). Since f is stable, α ∈ f (u ∩ v, a) = Cur f (u ∩ v)(a). Thus there exists b ⊂ a such that (b, α) ∈ tr(Cur f (u ∩ v)) ⊂ tr(Cur f (u)). Since (a, α) ∈ tr(Cur f (u)), we have b = a, and therefore (a, α) ∈ tr(Cur f (u ∩ v)). Q.E.D.
The next proposition provides a new method for constructing βmodels : Proposition 7.35. Let D be a qualitative domain, D its web ; let : Φ : S (D, D) → D, Ψ : D → S (D, D) two stable functions. Then D is a functional model of λcalculus. D is a βmodel provided that Φ◦Ψ is the identity function on D ; in that case, the βmodel is extensional if and only if Ψ◦Φ is the identity function on S (D, D). In order to define the functional model, we take F = S (D, D), and we take F ∞ as the set of those stable functions from D N to D which depend only on a finite number of coordinates. Remark. More precisely, let f : D N → D be a function which depends only on a finite number of coordinates. Thus, we may consider f as a function from D n to D for some integer n ; we say that f ∈ F ∞ if, and only if this function is stable.
We put (u)v = Φ(u)(v) for all u, v ∈ D, and λx f (x) = Ψ( f ) for every f ∈ F . Let Ap : D × D → D be defined by Ap(u, v) = (u)v ; it is a stable function : indeed, we have Ap(u, v) = Eval(Φ(u), v) (composition of stable functions Eval and Φ). We now check conditions 1, 2, 3 of the definition of functional models of λcalculus : (1) Every coordinate function x i is in F ∞ : already seen, page 135.
(2) If f , g ∈ F ∞ , then ( f )g ∈ F ∞ : Indeed ( f )g is stable, since ( f )g = Ap( f , g ) is given by composition of stable functions Ap, f , g . (3) If f (x 1 , . . . , x n ) ∈ F ∞ , then λx i f ∈ F ∞ : For simpler notations, we suppose i = n and we put : g (x 1 , . . . , x n−1 ) = λx n f (x 1 , . . . , x n−1 ). We need to prove that g is stable. Now, if u 1 , . . . , u n−1 ∈ D, then g (u 1 , . . . , u n−1 ) = Ψ(Cur f (u 1 , . . . , u n−1 )) (we consider f as a stable function from D n−1 × D to D). Thus g is stable, since it is obtained by composing the stable functions Ψ and Cur f . Q.E.D.
Coherence spaces

A coherence space D is a finite or countable nonempty set, equipped with a coherence relation denoted by ³ (a reflexive and symmetric binary relation) ; α ³ β should be read : “α is coherent with β”.
If D, D′ are two coherence spaces, then we can make of the product set D × D′ a coherence space, by putting : (α, α′) ³ (β, β′) ⇔ α ³ β and α′ ³ β′.
An antichain of D is a subset A of D such that α, β ∈ A, α ³ β ⇒ α = β. The set of all antichains (resp. of all finite antichains) of D is denoted by A(D) (resp. A₀(D)).
The space D = A(D) is a qualitative domain, with web D, called the qualitative domain associated with the coherence space D. The set D₀ = A₀(D) of all finite antichains of D is a coherence space, the coherence relation being : a ³ b ⇔ a ∪ b ∈ A₀(D), for all a, b ∈ A₀(D).
Let D, D′ be two coherence spaces, and D, D′ the associated qualitative domains. It follows from the above properties that D₀ × D′ can be considered as a coherence space.
A qualitative domain D, with web D, is associated with a coherence space if and only if it satisfies the following property :
For every u ⊂ D, if every two-element subset of u is in D, then u is in D.
Indeed, if this property holds, then we may define a coherence relation on D by putting : α ³ α′ ⇔ α = α′ or {α, α′} ∉ D, for all α, α′ ∈ D ; then it can be seen easily that D = A(D).
Proposition 7.36. Let D, D′ be two coherence spaces, D = A(D), D′ = A(D′) the corresponding qualitative domains. Then a subset X of the coherence space D₀ × D′ is an antichain if and only if it is the trace of some stable function from D to D′.
Let X be an antichain in D₀ × D′. We define f : D → D′ by taking : α ∈ f(u) ⇔ there exists a ⊂ u such that (a, α) ∈ X (for all u ∈ D, α ∈ D′).
Then f (u) is an antichain of D 0 : if α, β ∈ f (u) and α ³ β, then there exist a, b ⊂ u such that (a, α), (b, β) ∈ X . Thus (a, α) ³ (b, β), and, since X is an antichain, we have α = β (and a = b). The function f is obviously σc.i. We now prove that f is stable : if u ∪ v ∈ D and α ∈ f (u)∩ f (v), then there exist a ⊂ u, b ⊂ v such that (a, α), (b, α) ∈ X . Now (a, α) ³ (b, α) since a ∪ b ∈ D0 . It follows that a = b, and hence a ⊂ u ∩ v, and α ∈ f (u ∩ v) by definition of f . Thus f (u) ∩ f (v) ⊂ f (u ∩ v). Finally, X is the trace of f : if (a, α) ∈ tr( f ), then α ∈ f (a), and hence (b, α) ∈ X for some b ⊂ a. Therefore, α ∈ f (b), by definition of f , so b = a by definition of tr( f ). Thus (a, α) ∈ X . Conversely, if (a, α) ∈ X , then α ∈ f (a), and hence (b, α) ∈ tr( f ) for some b ⊂ a. Then (b, α) ∈ X , as proved above. Since (a, α) ³ (b, α) and X is an antichain, it follows that a = b, and therefore (a, α) ∈ tr( f ). Now let f : D → D 0 be a stable function. It remains to prove that tr( f ) is an antichain in D0 × D 0 . If (a, α) ³ (b, β) and both are in tr( f ), then a ∪ b ∈ D, and α ³ β. Now α ∈ f (a), β ∈ f (b), and hence α, β ∈ f (a ∪ b). Since f (a ∪ b) is an antichain in D 0 , we have α = β. Therefore, (a, α), (b, α) ∈ tr( f ) and a ∪ b ∈ D. It then follows from the definition of tr( f ) that a = b. Q.E.D.
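The antichain condition, and the two-element-subset characterization of the domains A(D) given above, are easy to enumerate on a tiny web. In the Python sketch below, the web {0, 1, 2} and its coherence relation are invented for illustration:

```python
from itertools import combinations

# A toy coherence space: web {0, 1, 2}; 0 and 1 are coherent (beyond the
# mandatory reflexive pairs), 2 is incoherent with everything else.
web = [0, 1, 2]
coherent = {(0, 1), (1, 0)} | {(x, x) for x in web}

def is_antichain(a):
    # alpha, beta in a and alpha coherent with beta imply alpha = beta
    return all(x == y or (x, y) not in coherent for x in a for y in a)

A = [frozenset(c) for r in range(len(web) + 1)
     for c in combinations(web, r) if is_antichain(c)]

assert frozenset({0, 1}) not in A            # coherent and distinct: excluded
assert frozenset({0, 2}) in A and frozenset({1, 2}) in A
# Characterization from the text: u is in A(D) as soon as every
# two-element subset of u is.
for c in (set(c) for r in range(4) for c in combinations(web, r)):
    pairs_ok = all(is_antichain(p) for p in combinations(sorted(c), 2))
    assert (frozenset(c) in A) == pairs_ok
print(len(A))  # 6
```

The six antichains are ∅, the three singletons, {0, 2} and {1, 2}; the set {0, 1, 2} is rejected precisely because its two-element subset {0, 1} is.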
Therefore, for any two coherence spaces D, D 0 , the space of all stable functions from A (D) to A (D 0 ) may be identified with A (D0 × D 0 ), where D0 = A0 (D). Proposition 7.37. Let D be a coherence space, D = A (D) the corresponding qualitative domain, and D0 = A0 (D). Let i be an isomorphism of coherence spaces from D0 × D onto D. Then, with the following definitions, D is an extensional βmodel : (u)v = {α ∈ D; (∃a ⊂ v)i (a, α) ∈ u} for all u, v ∈ D ; λx f (x) = {i (a, α); a ∈ A0 (D), α ∈ f (a) and α ∉ f (a 0 ) for every a 0 ⊂ a, a 0 6= a} for every f ∈ S (D, D). Define Φ : D → S (D, D) by taking, for every u ∈ D, tr(Φ(u)) = i −1 (u) = {(a, α) ; i (a, α) ∈ u} which is an antichain in D0 × D 0 , and therefore the trace of some stable function from D to D. Thus Φ is an isomorphism of qualitative domains from D onto S (D, D). Now, define Ψ : S (D, D) → D by taking Ψ( f ) = i (tr( f )) = {i (a, α) ; (a, α) ∈ tr( f )} which is, indeed, an antichain in D (an isomorphism of coherence spaces takes antichains to antichains). Then Φ and Ψ are inverse isomorphisms, so they are stable ; thus, it follows from proposition 7.35 that D is an extensional βmodel of λcalculus. For all u, v ∈ D, we have (u)v = Φ(u)(v) = {α ∈ D ; (a, α) ∈ tr(Φ(u)) for some a ⊂ v} = {α ∈ D ; i (a, α) ∈ u for some a ⊂ v}. Q.E.D.
Models over a set of atoms

Let A be a finite or countable nonempty set ; the elements of A will be called atoms. We are going to repeat (see page 124) the construction of the set of “formulas” over A, already used in the definition of Scott’s model. Here it will be denoted by ∆, D being used to denote the coherence space which will be defined afterwards.
So we suppose that none of the atoms are ordered pairs, and we give an inductive definition of ∆ and of the one-to-one function i : ∆* × ∆ → ∆ (i(a, α) will be denoted by a → α) :
• every atom is a formula ;
• whenever a is a finite set of formulas and α is a formula, if a ≠ ∅ or α ∉ A, then (a, α) is a formula and we take a → α = i(a, α) = (a, α) ;
• if α ∈ A, then we take ∅ → α = i(∅, α) = α.
As above (page 124), we define the rank of a formula α ∈ ∆, which is denoted by rk(α). Let ∆ₙ be the set of all formulas of rank ≤ n.
We now consider a coherence relation, denoted by ³, on A = ∆₀. Let D₀ be the coherence space thus obtained. We define, by induction on n, a coherence space Dₙ ⊂ ∆ₙ : if α ∈ ∆ₙ, then α ∈ Dₙ ⇔ there exist β ∈ Dₙ₋₁ and b ∈ A₀(Dₙ₋₁) such that α = (b → β). Thus the restriction of i to A₀(Dₙ₋₁) × Dₙ₋₁ is a one-to-one mapping of A₀(Dₙ₋₁) × Dₙ₋₁ onto Dₙ. We define the coherence relation on Dₙ in such a way as to make of this mapping an isomorphism of coherence spaces.
Now we prove, by induction on n, that Dₙ is a coherence subspace of Dₙ₊₁. If n = 0, then A ⊂ D₁, since α ∈ A ⇒ α = (∅ → α). If α, β ∈ A, then α ³ β holds in D₀ if and only if (∅ → α) ³ (∅ → β) holds in D₁. Thus D₀ is a coherence subspace of D₁. Assume that Dₙ₋₁ is a coherence subspace of Dₙ. Then A₀(Dₙ₋₁) × Dₙ₋₁ is a coherence subspace of A₀(Dₙ) × Dₙ. Since i is an isomorphism from A₀(Dₙ) × Dₙ onto Dₙ₊₁, and also from A₀(Dₙ₋₁) × Dₙ₋₁ onto Dₙ, it follows that Dₙ is a coherence subspace of Dₙ₊₁.
Now we may define a coherence space D as the union of the D n ’s ; i is therefore an isomorphism of coherence spaces from A0 (D) × D onto D. We will call D the coherence space constructed over the set of atoms (A, ³). If the coherence relation on A is taken as the least one (α ³ β ⇔ α = β), then D is called the coherence space constructed over A. The qualitative domain D = A (D) associated with D is therefore an extensional βmodel of λcalculus.
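The inductive construction of the spaces Dₙ can be carried out mechanically for a small instance. The Python sketch below uses one atom '*' with the least coherence relation (all representation choices — tuples for a → α, closures for coherence relations — are ours); it builds D₀, D₁, D₂ and checks the subspace inclusions:

```python
from itertools import combinations

STAR = '*'   # a single atom; we take the least coherence relation on it

def arrow(b, beta):
    # i(b, beta); the one identification from the text: i(∅, alpha) = alpha
    # when alpha is an atom.
    return beta if not b and beta == STAR else (frozenset(b), beta)

def antichains(D, coh):
    elems = sorted(D, key=repr)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)
            if all(x == y or not coh(x, y) for x in c for y in c)]

def step(D, coh):
    """From (D_n, its coherence) to (D_{n+1}, its coherence), transported
    along the isomorphism i from A0(D_n) x D_n. coh1 is only defined on
    the elements of D_{n+1}."""
    decomp = {arrow(b, beta): (b, beta)
              for b in antichains(D, coh) for beta in D}
    def coh1(x, y):
        (b, beta), (c, gamma) = decomp[x], decomp[y]
        u = b | c   # b coherent with c iff b ∪ c is again an antichain
        return (all(p == q or not coh(p, q) for p in u for q in u)
                and coh(beta, gamma))
    return set(decomp), coh1

D0, coh0 = {STAR}, (lambda x, y: x == y)   # least coherence on one atom
D1, coh1 = step(D0, coh0)
D2, coh2 = step(D1, coh1)
assert D0 < D1 < D2                # each D_n is a subspace of the next
print(len(D0), len(D1), len(D2))   # 1 2 6
```

Already at rank 1 the two elements * = (∅ → *) and ({*} → *) turn out to be coherent, so D₁ has three antichains and D₂ has six elements.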
Universal retractions

Let D be a β-model of λ-calculus. Recall that by a retraction in D, we mean an element ε such that ε∘ε = ε. The image of ε is called the retract associated with ε.
The coherence models constructed above have a universal retraction : this means that the set of all retractions of the model is a retract. This final section is devoted to the proof of :
Theorem 7.38. Let ρ be a constant symbol added to the language of combinatory logic, and UR be the set of formulas :
ρ∘ρ = ρ ; ∀x[(ρx)∘(ρx) = ρx] ; ∀x[x∘x = x → ρx = x].
Then the system of axioms ECL + UR has a non-trivial model.
We shall prove that this system of axioms is indeed satisfied in the model D = A(D), where D is the coherence space constructed over a set of atoms. This result is due to S. Berardi [Bera91]. The proof below is Amadio’s [Ama95]. See also [Berl92].
The first lemma is about a simple combinatorial property of any function f : X → X with finite range. The notation f^n will stand for f∘ . . . ∘f (f occurring n times) ; f^0 = Id.
Lemma 7.39. Let f : X → X be a function with finite range. Then there is one and only one retraction in {f^n ; n ≥ 1}.
Uniqueness : if both f^m and f^n are retractions, then (f^m)^n = f^m (since n ≥ 1), and (f^n)^m = f^n (since m ≥ 1) ; both equal f^mn, thus f^m = f^n.
Existence : let X_n be the image of f^n. X_n (n ≥ 1) is a decreasing sequence of finite sets, thus there exists an integer k ≥ 1 such that X_n = X_k for all n ≥ k. Let f_k be the restriction of f to X_k. Then f_k is a permutation of X_k, and hence (f_k)^N is the identity function on X_k if N = (card X_k)!. It follows that f^N is the identity on X_k, thus so is f^Nk. Now the image of f^Nk = (f_k)^N is X_k, and therefore f^Nk is a retraction from X onto X_k.
Q.E.D.
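Lemma 7.39 lends itself to direct computation; in this Python sketch the finite set X and the map f (a 3-cycle plus one tail point) are invented examples:

```python
def iterate(f, n, X):
    """Compute f^n pointwise on the finite set X (f given as a dict)."""
    g = {x: x for x in X}
    for _ in range(n):
        g = {x: f[g[x]] for x in X}
    return g

def retraction_power(f, X):
    """The least n >= 1 with f^n o f^n = f^n; it exists by lemma 7.39
    whenever f has finite range (here X itself is finite)."""
    n = 1
    while True:
        g = iterate(f, n, X)
        if all(g[g[x]] == g[x] for x in X):
            return n, g
        n += 1

# A 3-cycle 0 -> 1 -> 2 -> 0 plus a tail point 3 -> 0: neither f nor f^2
# is idempotent, but f^3 is (it is the identity on the cycle {0, 1, 2}).
X = [0, 1, 2, 3]
f = {0: 1, 1: 2, 2: 0, 3: 0}
n, r = retraction_power(f, X)
assert n == 3 and all(r[r[x]] == r[x] for x in X)
# Uniqueness: any power of f that is a retraction coincides with f^3.
for m in range(1, 13):
    g = iterate(f, m, X)
    if all(g[g[x]] == g[x] for x in X):
        assert g == r
```

The idempotent powers found in the scan are exactly f^3, f^6, f^9, f^12, and they are all equal, as the uniqueness part of the lemma predicts.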
Let D₀ be the set of all finite elements of D (finite antichains of D). If f ∈ D₀, then {f u ; u ∈ D} is a finite set : indeed, if we put K_f = {α ∈ D ; there exists a ∈ D₀ such that (a → α) ∈ f}, then K_f is clearly a finite subset of D and, for every u ∈ D, f u is an antichain of K_f.
By the previous lemma, we may associate with each f ∈ D₀ a retraction ρ₀(f) : D → D, with finite range. Since ρ₀(f) = f^n for some n ≥ 1, we have ρ₀(f) ∈ D₀, and therefore ρ₀ : D₀ → D₀.
ρ₀ is an increasing function : let f, g ∈ D₀, f ⊂ g ; then ρ₀(f) = f^m, ρ₀(g) = g^n. Now f^m = (f^m)^n ⊂ (g^m)^n = (g^n)^m = g^n, since both f^m and g^n are retractions.
Now we may define ρ : D → D by taking ρ(u) = ∪ᵢ ρ₀(uᵢ), where uᵢ is any increasing sequence in D₀ such that u = ∪ᵢ uᵢ. In order to verify the soundness of this definition, let u′ᵢ be any other such sequence ; then we have uᵢ ⊂ u′ⱼ for a suitable j (since uᵢ is finite), thus ρ₀(uᵢ) ⊂ ∪ⱼ ρ₀(u′ⱼ), and hence ∪ᵢ ρ₀(uᵢ) ⊂ ∪ⱼ ρ₀(u′ⱼ). We also have the inverse inclusion, since uᵢ and u′ⱼ play symmetric parts.
Obviously, ρ : D → D is an increasing function ; moreover, it is σ-continuous : indeed, if uᵢ (i ∈ ℕ) is an increasing sequence in D, and u = ∪ᵢ uᵢ, then we may take an increasing sequence vᵢ of finite sets such that vᵢ ⊂ uᵢ and u = ∪ᵢ vᵢ. Then we have ρ(u) = ∪ᵢ ρ₀(vᵢ), and hence ρ(u) ⊂ ∪ᵢ ρ₀(uᵢ). Since ρ is increasing, we obtain immediately the inverse inclusion.
Finally, ρ is a stable function from D to D : indeed, consider first f, g ∈ D₀ such that f ∪ g ∈ D₀. We have ρ₀(f) = f^m, ρ₀(g) = g^n and ρ₀(f ∩ g) = (f ∩ g)^p. Since f^m, g^n and (f ∩ g)^p are retractions, and x ↦ x^(mnp) is a stable function (all functions represented by a λ-term are stable), we obtain : (f ∩ g)^p = (f ∩ g)^(mnp) = f^(mnp) ∩ g^(mnp). Now f^(mnp) = f^m and g^(mnp) = g^n, thus (f ∩ g)^p = f^m ∩ g^n, that is to say ρ₀(f ∩ g) = ρ₀(f) ∩ ρ₀(g).
Now, let u, v ∈ D be such that u ∪ v ∈ D. Let uᵢ, vᵢ ∈ D₀ be two increasing sequences such that u = ∪ᵢ uᵢ, v = ∪ᵢ vᵢ. Then we have ρ(u ∩ v) = ∪ᵢ ρ₀(uᵢ ∩ vᵢ) = ∪ᵢ [ρ₀(uᵢ) ∩ ρ₀(vᵢ)] (according to the property which was previously proved) = ∪ᵢ ρ₀(uᵢ) ∩ ∪ᵢ ρ₀(vᵢ) = ρ(u) ∩ ρ(v). Therefore, ρ ∈ D.
Now we will see that ρ is a universal retraction.
Lemma 7.40. ρ∘ρ = ρ ; (ρu)∘(ρu) = ρu for every u ∈ D.
It can be seen easily that ρ₀∘ρ₀ = ρ₀ : if f ∈ D₀, then ρ₀(f) = f^m for the least m ≥ 1 such that f^m is a retraction. Thus ρ₀(f^m) = f^m. Now let u ∈ D ; we have u = ∪ᵢ uᵢ, where uᵢ is an increasing sequence in D₀.
Therefore : ρ(u) = ∪ᵢ ρ₀(uᵢ) = ∪ᵢ ρ₀(ρ₀(uᵢ)) = ρ∘ρ(u), since ρ₀(uᵢ) is an increasing sequence in D₀ whose union is ρ(u).
The proof of (ρu)∘(ρu) = ρu is immediate, since (ρ₀uᵢ)∘(ρ₀uᵢ) = ρ₀uᵢ, and (x, y) ↦ x∘y is a σ-c.i. function from D × D to D.
Q.E.D.
We will now prove that r∘r = r ⇒ ρr = r, that is to say that the image of ρ contains all the retractions of D.
Let r be a retraction of D, and rᵢ ∈ D₀ an increasing sequence such that r = ∪ᵢ rᵢ. We have ρ(r) ⊂ r : indeed, ρ₀(rᵢ) = rᵢ^(kᵢ) ⊂ r^(kᵢ) = r, and thus ρ(r) = ∪ᵢ ρ₀(rᵢ) ⊂ r. So it remains to prove that r ⊂ ρ(r).
Lemma 7.41. Let a, u, r ∈ D be such that r = r∘r, a ⊂ r u and a is finite. Then there exists a finite c ∈ D such that a ⊂ r c, c ⊂ r c, c ⊂ r u.
Since r = r∘r, we have a ⊂ r r u. According to proposition 7.30(iii), there exists a least finite c such that a ⊂ r c and c ⊂ r u. Now, if we put d = r c, we have r d = r r c = r c, thus a ⊂ r d ; on the other hand, c ⊂ r u, thus r c ⊂ r r u, that is d ⊂ r u. Since c is the least element satisfying these properties, we have c ⊂ d, thus c ⊂ r c.
Q.E.D.
Lemma 7.42. Let a, r ∈ D be such that r∘r = r, a ⊂ r a and a is finite. Then r a = ρ(r)a.
We have a ⊂ r a = ∪ᵢ rᵢ a, thus, for some i₀, a ⊂ rᵢ a holds for every i ≥ i₀. By applying rᵢ on both sides of this inclusion, we obtain :
a ⊂ rᵢ a ⊂ rᵢ^2 a ⊂ . . . ⊂ rᵢ^n a ⊂ . . .
Now ρ₀(rᵢ) = rᵢ^(kᵢ) for some kᵢ ≥ 1 ; thus rᵢ a ⊂ ρ₀(rᵢ)a for every i ≥ i₀. It suffices to take the limits to obtain r a ⊂ ρ(r)a. The inverse inclusion is immediate, since ρ(r) ⊂ r.
Q.E.D.
Now we are able to complete the proof of theorem 7.38. Take u ∈ D and a ∈ D₀ such that a ⊂ r u. By lemma 7.41, there exists c ∈ D₀ such that a ⊂ r c, c ⊂ r c and c ⊂ r u. By lemma 7.42, we have r c = ρ(r)c and hence a ⊂ ρ(r)c. Since c is finite and contained in r u and r c, there exists i ≥ 1 such that c ⊂ rᵢ u, c ⊂ rᵢ c. By applying rᵢ on both sides, we obtain c ⊂ rᵢ c ⊂ rᵢ^2 c ⊂ . . . ⊂ rᵢ^n c ⊂ . . . Now ρ₀(rᵢ) = rᵢ^(kᵢ) for some kᵢ ≥ 1. Since c ⊂ rᵢ u, we have rᵢ^(kᵢ−1) c ⊂ rᵢ^(kᵢ) u = ρ(rᵢ)u ⊂ ρ(r)u ; as c ⊂ rᵢ^(kᵢ−1) c, it follows that c ⊂ ρ(r)u. Since a ⊂ ρ(r)c and ρ(r) is a retraction, we have a ⊂ ρ(r)∘ρ(r)u = ρ(r)u. Now a is an arbitrary finite subset of r u, and hence we obtain r u ⊂ ρ(r)u. The inverse inclusion ρ(r)u ⊂ r u follows from ρ(r) ⊂ r. Finally, ρ(r)u = r u, thus ρ(r) = r, since u is an arbitrary element of D and D is extensional.
References for chapter 7 [Ama95], [Bar84], [Bera91], [Berl92], [Berr78], [Cop84], [Gir86], [Gir89], [Eng81], [Hin86], [Lon83], [Mey82], [Plo74], [Plo78], [Sco73], [Sco76], [Sco80], [Sco82], [Sto77], [Tar55]. (The references are in the bibliography at the end of the book).
Chapter 8
System F

1. Definition of system F types

In this chapter, we deal with the second order propositional calculus, i.e. the set of formulas built up with :
• a countable set of variables X, Y, . . . (called type variables or propositional variables) ;
• the connective → and the quantifier ∀.
Remark. We observe that the second order propositional calculus is exactly the same as the set L of λ-terms defined in chapter 1 (page 7), with simply a change of notation : → instead of application, ∀ instead of λ. Indeed, we could define inductively an isomorphism as follows (denoting by t_A the λ-term associated with the formula A) : if X is a type variable, then t_X is X itself, considered as a λ-variable ; if A, B are formulas, then t_{A→B} is (t_A)t_B and t_{∀X A} is λX t_A. For instance, the λ-term which corresponds to the formula :
∀X ∀Y (X, Y → X) → ∀Z (Z → Z)
would be (λX λY (X)(Y)X)λZ (Z)Z.
In fact, we are not interested in the λ-term associated with a formula. We simply observe that this isomorphism allows us to define, for the second order propositional calculus, all the notions defined in chapter 1 for the set L of λ-terms : simple substitution, α-equivalence, . . .
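The translation of the remark is a one-line recursion, and the example can be reproduced mechanically. A Python sketch (the tuple encoding of formulas and the string rendering are our own conventions; recall the book's abbreviation A, B → C for A → (B → C)):

```python
# Formulas as tuples: ('var', X), ('arrow', A, B), ('forall', X, A).
def to_term(A):
    """The lambda-term t_A of the remark: an arrow becomes an application,
    a quantifier becomes an abstraction, a variable stays itself."""
    tag = A[0]
    if tag == 'var':
        return A[1]
    if tag == 'arrow':
        return '(' + to_term(A[1]) + ')' + to_term(A[2])
    return 'λ' + A[1] + ' ' + to_term(A[2])    # forall

# The example from the text: ∀X∀Y(X → (Y → X)) → ∀Z(Z → Z).
V = lambda x: ('var', x)
F = ('arrow',
     ('forall', 'X', ('forall', 'Y',
        ('arrow', V('X'), ('arrow', V('Y'), V('X'))))),
     ('forall', 'Z', ('arrow', V('Z'), V('Z'))))

print(to_term(F))  # (λX λY (X)(Y)X)λZ (Z)Z
```

The output agrees with the term given in the text, modulo spacing.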
Thus, let F, A₁, . . . , Aₖ be formulas and X₁, . . . , Xₖ distinct variables. The formula F⟨A₁/X₁, . . . , Aₖ/Xₖ⟩, obtained by simple substitution, is defined as in chapter 1 (page 8), and has exactly the same properties.
We similarly define the α-equivalence of formulas, denoted by F ≡ G, by induction on F :
• if X is a propositional variable, then X ≡ G if and only if G = X ;
• if F = A → B, then F ≡ G if and only if G = A′ → B′, where A ≡ A′ and B ≡ B′ ;
• if F = ∀X A, then F ≡ G if and only if G = ∀Y B and A⟨Z/X⟩ ≡ B⟨Z/Y⟩ for all variables Z but a finite number.
We shall identify α-equivalent formulas. As in chapter 1, this allows the definition of substitution : we define the formula F[A₁/X₁, . . . , Aₖ/Xₖ] as F⟨A₁/X₁, . . . , Aₖ/Xₖ⟩, provided that we choose a representative of F no bound variable of which occurs free in A₁, . . . , Aₖ. All the lemmas about substitution in chapter 1 still hold.
The types of system F are, by definition, the equivalence classes of formulas relative to α-equivalence.
2. Typing rules for system F

We wish to build typings of the form Γ ⊢F t : A, where Γ is a context, that is an expression of the form x₁ : A₁, . . . , xₖ : Aₖ, where x₁, . . . , xₖ are distinct variables, A₁, . . . , Aₖ, A are types of system F, and t is a λ-term. The typing rules are the following :
1. If x is a variable not declared in Γ, then Γ, x : A ⊢F x : A ;
2. If Γ, x : A ⊢F t : B, then Γ ⊢F λx t : A → B ;
3. If Γ ⊢F t : A and Γ ⊢F u : A → B, then Γ ⊢F (u)t : B ;
4. If Γ ⊢F t : ∀X A, then Γ ⊢F t : A[F/X] for every type F ;
5. If Γ ⊢F t : A, then Γ ⊢F t : ∀X A for every variable X such that no type in Γ contains a free occurrence of X.
From now on, throughout this chapter, the notation Γ ⊢ t : A will stand for Γ ⊢F t : A. Obviously, if Γ ⊢ t : A, then all the free variables of t are declared in the context Γ.
Proposition 8.1. If Γ ⊢ t : A and Γ ⊂ Γ′, then Γ′ ⊢ t : A.
Same proof as proposition 3.3.
Q.E.D.
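Rule 4 is instantiation of a universal type by substitution. The Python sketch below (same illustrative tuple encoding of types as before; the identity type and the chosen F are our own example, though ∀X(X → X) does type λx x) shows one application of the rule:

```python
# Types of system F as tuples: ('var', X), ('arrow', A, B), ('forall', X, A).
def subst(A, X, B):
    """A[B/X]; we assume the representative of A has been chosen so that
    no bound variable of A occurs free in B (as in the text)."""
    tag = A[0]
    if tag == 'var':
        return B if A[1] == X else A
    if tag == 'arrow':
        return ('arrow', subst(A[1], X, B), subst(A[2], X, B))
    if A[1] == X:          # X is bound below this quantifier
        return A
    return ('forall', A[1], subst(A[2], X, B))

# Rule 4: from Γ ⊢ t : ∀X(X → X) we may derive Γ ⊢ t : F → F for any type F.
ident = ('forall', 'X', ('arrow', ('var', 'X'), ('var', 'X')))
F = ('arrow', ('var', 'Y'), ('var', 'Y'))
inst = subst(ident[2], 'X', F)
assert inst == ('arrow', F, F)
```

Taking other types for F yields the whole family of instances of the identity type, exactly as lemma 8.3 below iterates the rule.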
Proposition 8.2. Let Γ be a context, and x₁, . . . , xₖ variables which are not declared in Γ. If Γ ⊢ tᵢ : Aᵢ (1 ≤ i ≤ k) and Γ, x₁ : A₁, . . . , xₖ : Aₖ ⊢ u : B, then Γ ⊢ u[t₁/x₁, . . . , tₖ/xₖ] : B.
In particular : if x₁, . . . , xₖ do not occur free in u, and if Γ, x₁ : A₁, . . . , xₖ : Aₖ ⊢ u : B, then Γ ⊢ u : B.
The proof is by induction on the number of rules used to obtain the typing Γ, x₁ : A₁, . . . , xₖ : Aₖ ⊢ u : B. Consider the last one :
If it is rule 1, 2 or 3, the proof is the same as that of proposition 4.1.
If it is rule 4, then B ≡ A[F/X], and the previous step was Γ, x₁ : A₁, . . . , xₖ : Aₖ ⊢ u : ∀X A. By induction hypothesis, we get Γ ⊢ u[t₁/x₁, . . . , tₖ/xₖ] : ∀X A, and therefore, by rule 4 : Γ ⊢ u[t₁/x₁, . . . , tₖ/xₖ] : A[F/X].
If it is rule 5, then B ≡ ∀X A, and Γ, x₁ : A₁, . . . , xₖ : Aₖ ⊢ u : A is a previous typing such that X does not occur free in Γ, A₁, . . . , Aₖ. By induction hypothesis, we get Γ ⊢ u[t₁/x₁, . . . , tₖ/xₖ] : A, and therefore, by rule 5, Γ ⊢ u[t₁/x₁, . . . , tₖ/xₖ] : ∀X A.
Q.E.D.
Lemma 8.3. If Γ ⊢ t : ∀X₁ . . . ∀Xₖ A, then Γ ⊢ t : A[B₁/X₁, . . . , Bₖ/Xₖ].
Indeed, suppose that X₁, . . . , Xₖ have no occurrence in B₁, . . . , Bₖ (this is possible by taking a suitable representative of ∀X₁ . . . ∀Xₖ A). By rule 4, we get Γ ⊢ t : A[B₁/X₁] . . . [Bₖ/Xₖ]. Now A[B₁/X₁] . . . [Bₖ/Xₖ] ≡ A[B₁/X₁, . . . , Bₖ/Xₖ] by lemma 1.13.
Q.E.D.
The part played by the quantifier ∀ in system F is similar to that of the connective ∧ in system D. The next proposition is the analogue of lemma 3.22 :
Proposition 8.4. If Γ, x : F[A₁/X₁, . . . , Aₖ/Xₖ] ⊢ t : B, then Γ, x : ∀X₁ . . . ∀Xₖ F ⊢ t : B.
The proof is by induction on the number of rules used to obtain Γ, x : F[A₁/X₁, . . . , Aₖ/Xₖ] ⊢ t : B. Consider the last one ; the only non-trivial case is that of rule 1, when t is the variable x. Then B ≡ F[A₁/X₁, . . . , Aₖ/Xₖ] and the result follows from lemma 8.3.
Q.E.D.
Notation. Let Γ be the context x 1 :A 1 , . . . , x n :A n . We define Γ[B 1 /X 1 , . . . , B k /X k ] as the context x 1 : A 1 [B 1 /X 1 , . . . , B k /X k ], . . . , x n : A n [B 1 /X 1 , . . . , B k /X k ]. Proposition 8.5. If Γ ` t : A, then Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A[B 1 /X 1 , . . . , B k /X k ]. By induction on the length of the proof of Γ ` t : A ; we also prove that the length of the proof of Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A[B 1 /X 1 , . . . , B k /X k ] is the same as that of Γ ` t : A. Consider the last rule used. The result is obvious whenever it is rule 1, 2 or 3. If it is rule 4, then A ≡ A 0 [C /Y ] and we have a previous typing of the form Γ ` t : ∀Y A 0 . By induction hypothesis, we have : Γ[B 1 /X 1 , . . . , B k /X k ] ` t : ∀Y A 0 [B 1 /X 1 , . . . , B k /X k ] (Y 6= X 1 , . . . , X k and Y does
not occur free in B 1 , . . . , B k ). Moreover, the length of the proof of this typing is the same as that of Γ ` t : ∀Y A 0 . Thus, by rule 4, we have : Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A 0 [B 1 /X 1 , . . . , B k /X k ][C 0 /Y ] for any formula C 0 . Since Y does not occur free in B 1 , . . . , B k , by lemma1.13, this is equivalent to : Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A 0 [B 1 /X 1 , . . . , B k /X k ,C 0 /Y ]. Now take C 0 ≡ C [B 1 /X 1 , . . . , B k /X k ]. Again by lemma 1.13, we have : A 0 [B 1 /X 1 , . . . , B k /X k ,C 0 /Y ] ≡ A 0 [C /Y ][B 1 /X 1 , . . . , B k /X k ] ≡ A[B 1 /X 1 , . . . , B k /X k ]. Hence Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A[B 1 /X 1 , . . . , B k /X k ], and we obtain a proof of the same length as that of Γ ` t : A. If it is rule 5, we have Γ ` t : A 0 as a previous typing, and A ≡ ∀Y A 0 , where Y does not occur free in Γ. Take a variable Z 6= X , which does not occur in Γ, A 0 , B 1 , . . . , B k . By induction hypothesis, we have : Γ[Z /Y ] ` t : A 00 , where A 00 ≡ A 0 [Z /Y ]. In other words, Γ ` t : A 00 (since Y does not occur in Γ). Moreover, the length of the proof is the same, so we may use the induction hypothesis, and obtain : Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A 00 [B 1 /X 1 , . . . , B k /X k ]. Since Z does not occur in Γ, B 1 , . . . , B k , it does not occur in [B 1 /X 1 , . . . , B k /X k ] ; therefore, by rule 5 : Γ[B 1 /X 1 , . . . , B k /X k ] ` t : ∀Z A 00 [B 1 /X 1 , . . . , B k /X k ]. 00 Now ∀Z A ≡ ∀Y A 0 (lemma 1.10) ≡ A ; hence : ∀Z A 00 [B 1 /X 1 , . . . , B k /X k ] ≡ A[B 1 /X 1 , . . . , B k /X k ], and therefore : Γ[B 1 /X 1 , . . . , B k /X k ] ` t : A[B 1 /X 1 , . . . , B k /X k ]. Q.E.D.
By an open formula, we mean a formula of which the first symbol is different from ∀ ; so it is either a type variable or a formula of the form B → C . For every formula A, we denote by A 0 the unique open formula such that : A ≡ ∀X 1 . . . ∀X n A 0 (n ∈ N). 0 This formula A will be called the interior of A. Let Γ be a context (resp. F be a formula), X 1 , . . . , X k type variables with no free occurrence in Γ (resp. F ), and A a formula. Any formula of the form A[B 1 /X 1 , . . . , B k /X k ] will be called a Γinstance of A (resp. F instance of A). Therefore : If A ≡ ∀X 1 . . . ∀X k A 0 , then any formula of the form A 0 [B 1 /X 1 , . . . , B k /X k ] is an Ainstance of A 0 . The next lemma is the analogue of lemma 4.2. Lemma 8.6. Suppose that Γ ` t : A, where A is an open formula. i) if t is a variable x, then Γ contains a declaration x : B such that A is a B instance of B 0 .
ii) if t = λx u, then A ≡ (B → C ), and Γ, x : B ` u : C . iii) if t = (u)v, then Γ ` u : C → B , Γ ` v : C , where B is such that A is a Γinstance of B 0 . In the proof of Γ ` t : A, consider the first step at which one obtains Γ ` t : B , for some formula B such that A is a Γinstance of B 0 (this happens at least once, for example with B = A). Examine the typing rule (page 146) used at that step. It is not rule 4 : indeed, if it were, we would have obtained at the previous step Γ ` t : ∀XC , with B = C [U /X ]. We may suppose that X does not occur in Γ. We have C = ∀X 1 . . . ∀X n C 0 , where C 0 is an open formula ; thus C 0 is either a variable or a formula of the form F → G. If C 0 = X , then every formula (therefore particularly A) is a Γinstance of C 0 ; this contradicts the definition of B . If C 0 is a variable Y 6= X , then B = C [U /X ] = C , so B 0 = C 0 , and A is a Γinstance of C 0 ; again, this contradicts the definition of B . If C 0 = F → G, then B = ∀X 1 . . . ∀X n C 0 [U /X ]. Now C 0 [U /X ] = F 0 → G 0 is an open formula. Thus B 0 = C 0 [U /X ]. Since A is a Γinstance of B 0 , we have, by lemma 1.13 : A = B 0 [U1 /Z1 , . . . ,Uk /Zk ] = C 0 [U /X ][U1 /Z1 , . . . ,Uk /Zk ] = C 0 [U1 /Z1 , . . . ,Uk /Zk ,U 0 /X ] 0 where U = U [U1 /Z1 , . . . ,Uk /Zk ]. Now, by hypothesis, Z1 , . . . , Zk are variables which do not occur in Γ, and neither does X . Thus A is a Γinstance of C 0 , contradicting the definition of B . It is not rule 5 : suppose it were ; then B = ∀X C , and therefore B 0 = C 0 . Hence Γ ` t : C at the previous step, and A is a Γinstance of C 0 ; this contradicts the definition of B . Now we can prove the lemma : In case (i), the rule applied at that step needs to be rule 1, since t is a variable x. Therefore Γ contains the declaration x : B , and A is a Γinstance of B 0 . Since the formula B = ∀X 1 . . . ∀X k B 0 appears in the context Γ, the free variables of B 0 which do not occur free in Γ are X 1 , . . . , X k . 
Thus A is a B instance of B 0 . In case (ii), the rule applied is rule 2. Thus : B = (C → D), and Γ, x : C ` u : D. Now B is an open formula, so A is a Γinstance of B 0 = B . Hence, we have A = C 0 → D 0 , with : C 0 = C [U1 /X 1 , . . . ,Uk /X k ] and D 0 = D[U1 /X 1 , . . . ,Uk /X k ]. By proposition 8.5, one deduces from Γ, x : C ` u : D that : Γ[U1 /X 1 , . . . ,Uk /X k ], x : C [U1 /X 1 , . . . ,Uk /X k ] ` u : D[U1 /X 1 , . . . ,Uk /X k ]. Since X 1 , . . . , X k do not occur in Γ, we finally obtain Γ, x : C 0 ` u : D 0 and A = C 0 → D 0 .
In case (iii), the rule applied at that step is rule 3 since the term t is (u)v. Hence Γ ` u : C → B and Γ ` v : C , so A is a Γinstance of B 0 . Q.E.D.
Theorem 8.7. If Γ ⊢ t : A and t ≻β t′, then Γ ⊢ t′ : A.
Recall that t ≻β t′ means that t′ is obtained from t by β-reduction.
It is sufficient to repeat the proof of proposition 4.3 (which is the corresponding statement for system D), using lemma 8.6(ii) instead of lemma 4.2(ii) and proposition 8.2 instead of proposition 4.1. Q.E.D.
Theorem 8.7 fails if one replaces the assumption t ≻β t′ with t ≃β t′. Take for instance t = λx x, t′ = λx(λy x)(x)x ; then ⊢ t : X → X, where X is a variable. Yet ⊢ t′ : X → X does not hold : indeed, by lemma 8.6, this would imply x : X ⊢ (λy x)(x)x : X, and therefore x : X ⊢ (x)x : A for some formula A, which is clearly impossible (again by lemma 8.6).
We shall denote by ⊥ the formula ∀X X ; thus we have Γ, x : ⊥ ⊢ x : A for every formula A (rules 1 and 4, page 146). We define the connective ¬ by taking ¬A ≡ A → ⊥ for every formula A.
Proposition 8.8. Every normal term t is typable in system F, in the context x₁ : ⊥, . . . , xₖ : ⊥, where x₁, . . . , xₖ are the free variables of t.
Proof by induction on the length of t. Let Γ be the context x₁ : ⊥, . . . , xₖ : ⊥, where x₁, . . . , xₖ are the free variables of t.
If t = λx u, then, by induction hypothesis, we have Γ, x : ⊥ ⊢ u : A ; thus Γ ⊢ λx u : ⊥ → A.
If t does not start with λ, then t = (x₁)t₁ . . . tₙ. By induction hypothesis, Γ ⊢ tᵢ : Aᵢ. On the other hand, Γ ⊢ x₁ : ⊥, so Γ ⊢ x₁ : A₁, . . . , Aₙ → X (rule 4). Therefore, Γ ⊢ t : X.
Q.E.D.
Nevertheless, there are strongly normalizable closed terms which are not typable in system F (see [Gia88]).
3. The strong normalization theorem

In this section, we will prove the following theorem of J.-Y. Girard [Gir71] :
Theorem 8.9. Every term which is typable in system F is strongly normalizable.
We shall follow the proof of the corresponding theorem for system D (theorem 3.20). As there, N denotes the set of strongly normalizable terms and N₀ the set of terms of the form (x)t₁ . . . tₙ, where x is a variable and t₁, . . . , tₙ ∈ N. A subset 𝒳 of Λ is N-saturated if and only if : (λx u)t t₁ . . . tₙ ∈ 𝒳 whenever t ∈ N and (u[t/x])t₁ . . . tₙ ∈ 𝒳. We proved in chapter 3 (page 53) that (N₀, N) is an adapted pair, that is :
i) N is N-saturated ;
ii) N₀ ⊂ N ; N₀ ⊂ (N → N₀) ; (N₀ → N) ⊂ N.
An N-interpretation I is a mapping X ↦ |X|I of the set of type variables into the set of N-saturated subsets of N which contain N₀. Let I be an N-interpretation, X a type variable, and 𝒳 an N-saturated subset of Λ such that N₀ ⊂ 𝒳 ⊂ N. We define an N-interpretation J = I[X ← 𝒳] by taking |Y|J = |Y|I for every variable Y ≠ X and |X|J = 𝒳.
For every type A, the value |A|I of A in an N-interpretation I is a set of terms defined as follows, by induction on A :
• if A is a type variable, then |A|I is given with I ;
• |A → B|I = (|A|I → |B|I), in other words : for every term t, t ∈ |A → B|I if and only if (t)u ∈ |B|I for every u ∈ |A|I ;
• |∀X A|I = ∩{|A|I[X←𝒳] ; 𝒳 is N-saturated, N₀ ⊂ 𝒳 ⊂ N}, in other words : for every term t, t ∈ |∀X A|I if and only if t ∈ |A|I[X←𝒳] for every N-saturated subset 𝒳 of Λ such that N₀ ⊂ 𝒳 ⊂ N.
Clearly, the value |A|I of a type A in an N-interpretation I depends only on the values in I of the free variables of A. In particular, if A is a closed type, then |A|I is independent of the interpretation I.
Lemma 8.10. For every type A and every N-interpretation I, the value |A|I is an N-saturated subset of N which contains N₀.
The proof is by induction on A :
If A is a type variable, this is obvious from the definition of N-interpretations.
If A = B → C, then, by induction hypothesis, N₀ ⊂ |B|I and |C|I ⊂ N. Therefore, |B → C|I = (|B|I → |C|I) ⊂ (N₀ → N). Now N₀ → N ⊂ N (definition of the adapted pairs) ; hence |B → C|I ⊂ N.
Also by induction hypothesis, we have N₀ ⊂ |C|_I and |B|_I ⊂ N. It follows that |B → C|_I = (|B|_I → |C|_I) ⊃ (N → N₀). Now (N → N₀) ⊃ N₀, and therefore |B → C|_I ⊃ N₀. On the other hand, |A|_I = (|B|_I → |C|_I) is N-saturated since |C|_I is (proposition 3.15).
If A = ∀X B, then |∀X B|_I ⊂ |B|_I ⊂ N (by induction hypothesis); now N₀ ⊂ |B|_J for any N-interpretation J (induction hypothesis), and therefore N₀ ⊂ |∀X B|_I. Finally, |∀X B|_I is N-saturated, as the intersection of a set of N-saturated subsets of Λ.
Q.E.D.
Lemma 8.11. Let A, U be two types, X a variable, I an N-interpretation and 𝒳 = |U|_I. Then |A[U/X]|_I = |A|_J, where J = I[X ← 𝒳].

Proof by induction on A. This is obvious whenever A is a type variable or A = B → C. Suppose A = ∀Y B (Y ≠ X, and Y does not occur in U). For each term t ∈ Λ, we have:
i) t ∈ |∀Y B[U/X]|_I if and only if t ∈ |B[U/X]|_{I[Y←𝒴]} for every N-saturated subset 𝒴 of Λ such that N₀ ⊂ 𝒴 ⊂ N;
ii) t ∈ |∀Y B|_J if and only if t ∈ |B|_{J[Y←𝒴]} for every N-saturated subset 𝒴 of Λ such that N₀ ⊂ 𝒴 ⊂ N.
Let I₀ = I[Y ← 𝒴] and J₀ = J[Y ← 𝒴]; then J₀ = I₀[X ← 𝒳] since Y ≠ X. On the other hand, 𝒳 = |U|_I = |U|_{I₀} since Y is not a free variable in U. Hence, by induction hypothesis, |B[U/X]|_{I₀} = |B|_{J₀}. Thus, it follows from (i) and (ii) that |∀Y B[U/X]|_I = |∀Y B|_J.
Q.E.D.
Lemma 8.12 (Adequacy lemma). Let I be an N-interpretation. If x₁ : A₁, …, x_k : A_k ⊢ u : A and t_i ∈ |A_i|_I (1 ≤ i ≤ k), then: u[t₁/x₁, …, t_k/x_k] ∈ |A|_I.

The proof is by induction on the number of rules used to obtain the given typing x₁ : A₁, …, x_k : A_k ⊢ u : A. Consider the last one.
If it is rule 1, 2 or 3, then the proof is the same as for the second adequacy lemma 3.16.
If it is rule 4, then A = B[U/X], and we have x₁ : A₁, …, x_k : A_k ⊢ u : ∀X B as a previous typing. By induction hypothesis, u[t₁/x₁, …, t_k/x_k] ∈ |∀X B|_I; thus u[t₁/x₁, …, t_k/x_k] ∈ |B|_J, where J = I[X ← 𝒳], for every N-saturated subset 𝒳 of Λ such that N₀ ⊂ 𝒳 ⊂ N. By taking 𝒳 = |U|_I, we obtain |B|_J = |B[U/X]|_I, in view of lemma 8.11. Therefore u[t₁/x₁, …, t_k/x_k] ∈ |B[U/X]|_I.
If it is rule 5, then A = ∀X B, and we have a previous typing x₁ : A₁, …, x_k : A_k ⊢ u : B; moreover, X does not occur free in A₁, …, A_k. Let 𝒳 be an N-saturated subset of Λ such that N₀ ⊂ 𝒳 ⊂ N, and let J = I[X ← 𝒳]. Then |A_i|_I = |A_i|_J, since X does not occur free in A_i. Hence t_i ∈ |A_i|_J. By induction hypothesis, we have u[t₁/x₁, …, t_k/x_k] ∈ |B|_J and therefore: u[t₁/x₁, …, t_k/x_k] ∈ |∀X B|_I.
Q.E.D.
Now the proof of the strong normalization theorem easily follows:
Suppose x₁ : A₁, …, x_k : A_k ⊢ t : A and consider the N-interpretation I defined by taking X_I = N for every variable X. By lemma 8.10, we have N₀ ⊂ |A_i|_I, so x_i ∈ |A_i|_I. Thus, by the adequacy lemma 8.12, t[x₁/x₁, …, x_k/x_k] = t ∈ |A|_I. Now |A|_I ⊂ N (by lemma 8.10), and therefore t ∈ N.
Q.E.D.
4. Data types in system F

Recall some definitions from chapter 3: A subset 𝒳 of Λ is saturated if and only if (λx u)t t₁…tₙ ∈ 𝒳 whenever (u[t/x])t₁…tₙ ∈ 𝒳.
An interpretation I is a mapping X ↦ X_I of the set of type variables into the set of saturated subsets of Λ. Let I be an interpretation, X a type variable, and 𝒳 a saturated subset of Λ. We define an interpretation J = I[X ← 𝒳] by taking Y_J = Y_I for every variable Y ≠ X and X_J = 𝒳.
For every type A, the value |A|_I of A in an interpretation I is a set of terms defined as follows, by induction on A:
• if A is a type variable, then |A|_I is given with I;
• |A → B|_I = (|A|_I → |B|_I), in other words: for every term t, t ∈ |A → B|_I if and only if tu ∈ |B|_I for every u ∈ |A|_I;
• |∀X A|_I = ⋂{|A|_{I[X←𝒳]} ; 𝒳 is any saturated subset of Λ}, in other words: for every term t, t ∈ |∀X A|_I if and only if t ∈ |A|_{I[X←𝒳]} for every saturated subset 𝒳 of Λ.

Lemma 8.13 (Adequacy lemma). Let I be an interpretation; if x₁ : A₁, …, x_k : A_k ⊢ u : A and t_i ∈ |A_i|_I (1 ≤ i ≤ k), then: u[t₁/x₁, …, t_k/x_k] ∈ |A|_I.

Same proof as above.
Q.E.D.
The value of a closed type A (that is, a type with no free variables) is the same in all interpretations; it will be denoted by |A|.
A closed type A will be called a data type if:
i) |A| ≠ ∅;
ii) every term t ∈ |A| is β-equivalent to a closed term.
Condition (ii) can also be stated this way:
ii′) every term t ∈ |A| can be transformed into a closed term by β-reduction.
Indeed, if (ii) holds, then t ≃β u for some closed term u; by the Church-Rosser theorem, t and u reduce to the same term v by β-reduction. Now β-reduction applied to a closed term produces only closed terms. Thus v is closed.

Proposition 8.14. The types:
Id = ∀X(X → X) (identity type);
Bool = ∀X{X, X → X} (Booleans type);
Int = ∀X{(X → X) → (X → X)} (integers type)
are data types. More precisely:
t ∈ |Id| ⇔ t ≃β λx x;
t ∈ |Bool| ⇔ t ≃β λxλy x or t ≃β λxλy y;
t ∈ |Int| ⇔ t ≃β λf λx(f)ⁿx for some integer n, or t ≃β λf f.

Note that, in view of the adequacy lemma 8.13, we have the following consequences:
If ⊢ t : Id, then t ≃β λx x.
If ⊢ t : Bool, then t ≃β λxλy x or t ≃β λxλy y.
If ⊢ t : Int, then t ≃β λf λx(f)ⁿx for some integer n, or t ≃β λf f.

Proof of the proposition: we first show the implications ⇒.
1. Identity type: Let t ∈ |Id| and x be a variable of the λ-calculus which does not occur in t; we define an interpretation I by taking X_I = {τ ∈ Λ; τ ≃β x} for every type variable X. Since t ∈ |Id|, we have t ∈ |X → X|. Now x ∈ |X|, so (t)x ∈ |X|, and therefore (t)x ≃β x. Thus t is normalizable (t ≃βη λx x). Let t₀ be its normal form; then t₀ = λx₁ … λx_m (y)t₁…tₙ.
If m = 0, then (t₀)x ≃β (y′)u₁…uₙx, where y′ is a variable. This term cannot be equal to x, so we have a contradiction.
If m ≥ 1, then we have t₀ = λx u. So (t₀)x ≃β u; therefore u ≃β x, and t₀ ≃β λx x. Since t₀ is normal, t₀ = λx x.
2. Booleans type: Let t ∈ |Bool| and x, y be variables of the λ-calculus which do not occur in t; we define an interpretation I by taking X_I = {τ ∈ Λ; τ ≃β x or τ ≃β y}. Since t ∈ |Bool|, we have t ∈ |X, X → X|. Now x, y ∈ |X|, so (t)xy ∈ |X|, that is, for instance, (t)xy ≃β x. Thus t ≃βη λxλy x, and t is normalizable. Let t₀ be its normal form; then t₀ = λx₁ … λx_m (z)t₁…tₙ.
If m = 0 or 1, then (t₀)xy ≃β (z′)u₁…uₙxy or (z′)u₁…uₙy, where z′ is a variable.
None of these terms can be equal to x, so we have a contradiction.
If m ≥ 2, then we have t₀ = λxλy u; thus (t₀)xy ≃β u. Therefore u ≃β x and t₀ ≃β λxλy x. Since t₀ is normal, t₀ = λxλy x.
3. Integers type: Let t ∈ |Int| and f, x be variables of the λ-calculus which do not occur in t; we define an interpretation I by taking X_I = {τ ∈ Λ; τ ≃β (f)ᵏx for some k ≥ 0} for every type variable X. Thus x ∈ |X| and f ∈ |X → X|. Since t ∈ |Int|, we have t ∈ |(X → X), X → X|. Thus (t)f x ∈ |X|, and hence (t)f x ≃β (f)ᵏx. It follows that t ≃βη λf λx(f)ᵏx, so t is normalizable. Let t₀ be its normal form; then t₀ = λx₁ … λx_m (y)t₁…tₙ.
If m = 0, then (t₀)f x ≃β (y′)u₁…uₙf x, where y′ is a variable. This term cannot be equal to (f)ᵏx, so we have a contradiction.
If m = 1, then we have t₀ = λf (y)t₁…tₙ. So (t₀)f x ≃β (y)t₁…tₙx. Since this term needs to be equal to (f)ᵏx, we necessarily have y = f and n = 0; thus t₀ = λf f.
If m ≥ 2, then we have t₀ = λf λx u; so (t₀)f x ≃β u. Therefore u ≃β (f)ᵏx and t₀ ≃β λf λx(f)ᵏx. Since t₀ is normal, we conclude that t₀ = λf λx(f)ᵏx.
Now we come to the implications ⇐. We shall treat for instance the case of the type Int. Suppose t ≃β λf f or t ≃β λf λx(f)ᵏx for some k ≥ 0. In system DΩ, we have ⊢_DΩ λf f : (X → X) → (X → X) and ⊢_DΩ λf λx(f)ᵏx : (X → X) → (X → X). Thus, by theorem 4.7, we have ⊢_DΩ t : (X → X) → (X → X). In view of the adequacy lemma for system DΩ (lemma 3.5), we have t ∈ |(X → X) → (X → X)|_I for every interpretation I. Hence t ∈ |∀X{(X → X) → (X → X)}| = |Int|.
Q.E.D.
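The closed normal forms listed in proposition 8.14 are the usual Church encodings, and they can be exercised directly as functions. The following sketch (an illustration added here, not part of the text; all helper names are chosen for the example) renders them as Python lambdas, reading the application (t)u as ordinary function application:

```python
# Normal inhabitants of Id = ∀X(X → X), Bool = ∀X{X, X → X}
# and Int = ∀X{(X → X) → (X → X)}, written as Python functions.
identity = lambda x: x           # λx x : Id
true = lambda x: lambda y: x     # λxλy x : Bool
false = lambda x: lambda y: y    # λxλy y : Bool

def church(n):
    """The Church numeral λf λx (f)^n x : Int."""
    def numeral(f):
        def body(x):
            for _ in range(n):
                x = f(x)
            return x
        return body
    return numeral

def to_int(nu):
    """Decode a Church numeral by applying it to the successor and 0."""
    return nu(lambda k: k + 1)(0)
```

For instance, to_int(church(3)) returns 3, and true(a)(b) selects a, matching the β-equivalences listed in the proposition.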
We can similarly define the type ∀X{(X → X), (X → X), X → X} of binary lists (finite sequences of 0's and 1's), the type ∀X{(X, X → X), X → X} of binary trees, etc. All of them are data types. In the next section, we give a syntactic condition which is sufficient in order that a formula be a data type (corollary 8.19).
The type Int → Int (of the functions from the integers to the integers) is not a data type. Indeed, let ξ = λn(n)I 0y, where y is a variable, I = λx x and 0 = λf λx x is the Church numeral zero. Then ξ is a non-closed normal term, so it is not β-equivalent to any closed term. Now ξ ∈ |Int → Int|: suppose ν ∈ |Int|; then ν is β-equivalent to a Church numeral, and therefore ξν ≃β λx x ∈ |Int|.
Indeed, even the type Id → Id is not a data type: apply the same method to ξ₀ = λf (f)0y.
The next proposition shows that it is possible to obtain new data types from given ones:

Proposition 8.15. Let A, B be two data types. Then the types:
A ∧ B = ∀X{(A, B → X) → X} (product of A and B);
A ∨ B = ∀X{(A → X), (B → X) → X} (disjoint sum of A and B);
L[A] = ∀X{(A, X → X), X → X} (type of the lists of objects of type A)
are data types. More precisely:
If t ∈ |A ∧ B|, then t ≃β λf (f)ab, where a ∈ |A| and b ∈ |B|.
If t ∈ |A ∨ B|, then either t ≃β λf λg (f)a for some a ∈ |A|, or t ≃β λf λg (g)b for some b ∈ |B|.
If t ∈ |L[A]|, then either t ≃β λf λx(f a₁)(f a₂)…(f aₙ)x, where n ≥ 0 and a_i ∈ |A| for 1 ≤ i ≤ n, or t ≃β λf (f)a for some a ∈ |A|.

Remark. The term λf λx(f a₁)(f a₂)…(f aₙ)x represents the n-tuple (a₁, …, aₙ) in the λ-calculus; if n = 0, this term is λf λx x, which represents the empty sequence; if n = 1, the one-element sequence (a) is represented either by λf λx(f a)x or by λf (f)a, which are η-equivalent.
Product type: Let t ∈ |A ∧ B| and f be a variable with no free occurrence in t. Define an interpretation I by: X_I = {τ ∈ Λ; τ ≃β (f)ab for some a ∈ |A| and b ∈ |B|}. Then f ∈ |A, B → X|_I; since t ∈ |(A, B → X) → X|_I, we see that (t)f ∈ X_I. Thus there exist a ∈ |A|, b ∈ |B| such that (t)f ≃β (f)ab. It follows that t is solvable; let t₀ be a head normal form of t.
If t₀ starts with λ, say t₀ = λf u, then (t)f ≃β (t₀)f ≃β u, and therefore u ≃β (f)ab. Hence t ≃β t₀ ≃β λf (f)ab, which is β-equivalent to a closed term since so are a and b, by hypothesis.
Otherwise, t₀ = (x)t₁…tₙ; thus (t₀)f ≃β (x)t₁…tₙf ≃β (t)f ≃β (f)ab. Now (x)t₁…tₙf ≃β (f)ab, so we have n = 1 and b ≃β f. But this is impossible since b is β-equivalent to a closed term.
Disjoint sum type: Let t ∈ |A ∨ B| and f, g be two distinct variables which do not occur free in t. Define an interpretation I by: X_I = {τ ∈ Λ; τ ≃β (f)a for some a ∈ |A|, or τ ≃β (g)b for some b ∈ |B|}; then f ∈ |A → X|_I and g ∈ |B → X|_I. Since t ∈ |(A → X), (B → X) → X|_I, we can see that (t)f g ∈ X_I. So we have, for instance, (t)f g ≃β (f)a for some a ∈ |A|. It follows that t is solvable; let t₀ be a head normal form of t.
If t₀ starts with at least two occurrences of λ, say t₀ = λf λg u, then we have (t)f g ≃β (t₀)f g ≃β u, and therefore u ≃β (f)a. Thus t ≃β t₀ ≃β λf λg (f)a, which is β-equivalent to a closed term since so is a, by hypothesis.
If t₀ starts with only one occurrence of λ, then t₀ = λf (x)t₁…tₙ (x need not be distinct from f); thus (t₀)f g ≃β (x)u₁…uₙg ≃β (t)f g ≃β (f)a.
Now (x)u₁…uₙg ≃β (f)a, so we have n = 0 and a ≃β g. But this is impossible since a is β-equivalent to a closed term.
If t₀ does not start with λ, then t₀ = (x)t₁…tₙ; so we have: (t₀)f g ≃β (x)t₁…tₙf g ≃β (t)f g ≃β (f)a. It follows that (x)t₁…tₙf g ≃β (f)a, but this is impossible: the head variable has at least two arguments in the first term, but only one in the second.
List type: Let t ∈ |L[A]| and f, x be two variables which do not occur free in t. Define an interpretation I by: X_I = {τ ∈ Λ; τ ≃β (f a₁)(f a₂)…(f aₙ)x, with n ≥ 0 and a_i ∈ |A|}. Then f ∈ |A, X → X|_I and x ∈ X_I; since t ∈ |(A, X → X), X → X|_I, we get (t)f x ∈ X_I. So we have (t)f x ≃β (f a₁)(f a₂)…(f aₙ)x. It follows that t is solvable; let t₀ be a head normal form of t.
If t₀ starts with at least two occurrences of λ, say t₀ = λf λx u, then we have (t)f x ≃β (t₀)f x ≃β u, and therefore u ≃β (f a₁)(f a₂)…(f aₙ)x. Thus t ≃β t₀ ≃β λf λx(f a₁)(f a₂)…(f aₙ)x, which is β-equivalent to a closed term since so are the a_i's, by hypothesis.
If t₀ starts with only one occurrence of λ, then t₀ = λf (y)t₁…tₙ (y may be equal to f); thus: (t₀)f x ≃β (y)u₁…uₙx ≃β (t)f x ≃β (f a₁)(f a₂)…(f aₙ)x. So we have (y)u₁…uₙx ≃β (f a₁)(f a₂)…(f aₙ)x, and therefore y = f, n = 1 and u₁ ≃β a₁ (in both terms, the head variable is the same and its arguments are β-equivalent). It follows that t ≃β t₀ ≃β λf (f)a₁.
If t₀ does not start with λ, then t₀ = (y)t₁…tₙ, so we have: (t₀)f x ≃β (y)t₁…tₙf x ≃β (t)f x ≃β (f a₁)(f a₂)…(f aₙ)x. Therefore: (y)t₁…tₙf x ≃β (f a₁)(f a₂)…(f aₙ)x; as before, it follows that n = 0, y = f, and aₙ = f; but this is impossible since, by hypothesis, aₙ is β-equivalent to a closed term.
Q.E.D.
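The normal forms described in proposition 8.15 can be exercised in the same way. A minimal Python sketch (an added illustration; the helper names are chosen here, not taken from the text):

```python
# Product A ∧ B: t = λf (f)ab; a projection applies t to a selector.
pair = lambda a: lambda b: (lambda f: f(a)(b))
fst = lambda t: t(lambda a: lambda b: a)
snd = lambda t: t(lambda a: lambda b: b)

# Disjoint sum A ∨ B: λf λg (f)a  or  λf λg (g)b.
inl = lambda a: lambda f: lambda g: f(a)
inr = lambda b: lambda f: lambda g: g(b)

# List type L[A]: λf λx (f a1)((f a2)(...((f an)x)...)).
def church_list(elems):
    def t(f):
        def body(x):
            acc = x
            for a in reversed(elems):  # fold from the right, as in the term
                acc = f(a)(acc)
            return acc
        return body
    return t

def to_python_list(t):
    """Decode by applying the encoded list to 'cons' and the empty list."""
    return t(lambda a: lambda rest: [a] + rest)([])
```

For example, to_python_list(church_list([1, 2, 3])) gives back [1, 2, 3], and fst(pair(1)(2)) gives 1.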
Proposition 8.15 gives some particular cases of a general construction on data types, which will be developed in the next section (theorem 8.28). Let us, for the moment, consider one more instance.

Proposition 8.16. For every data type A, the type BT[A] = ∀X{(A, X, X → X), X → X} is also a data type, called the type of binary trees indexed by objects of type A.

Let 𝒜 = {t ∈ Λ; there exists a ∈ |A| such that t ≃β a}. Thus 𝒜 ≠ ∅ and every element of 𝒜 is β-equivalent to a closed term. We choose two distinct variables f, x, and we define E_fx as the least subset of Λ with the following properties:
(⋆) x ∈ E_fx; if a ∈ 𝒜 and t, u ∈ E_fx, then (f a)tu ∈ E_fx.

In other words, E_fx is the intersection of all subsets of Λ which have these properties. It follows that:
If τ ∈ E_fx, then τ is β-equivalent to a term whose only free variables are f, x; if τ ≠ x, then f, x are free in τ; either τ = x, or τ = (f a)tu with a ∈ 𝒜 and t, u ∈ E_fx; if τ β-reduces to τ′, then τ′ ∈ E_fx.
Indeed, the set of λ-terms which have these properties has the properties (⋆).
Proposition 8.17 below shows, in particular, that every term in |BT[A]| is β-equivalent to a closed term. This proves proposition 8.16.
Q.E.D.
Proposition 8.17. If t ∈ |BT[A]| and f, x are not free in t, then there is a τ ∈ E_fx such that t ≃β λf λx τ.

Remark. The terms of the form λf λx τ, with τ ∈ E_fx, are exactly the λ-terms which represent binary trees indexed by elements of 𝒜.
We define an interpretation I by setting, for every type variable X:
X_I = {ξ ∈ Λ; there exists τ ∈ E_fx such that ξ ≃β τ}.
Then, by definition of E_fx, we have: x ∈ X_I and f ∈ |A, X, X → X|_I. Since t ∈ |(A, X, X → X), X → X|_I, we get (t)f x ∈ X_I. In other words: (t)f x ≃β τ for some τ ∈ E_fx.
Since every element of E_fx is a head normal form, it follows that t is solvable; thus t ≃β t₀, where t₀ is a head normal form of t.
If t₀ starts with at least two occurrences of λ, say t₀ = λf λx u, then we have (t)f x ≃β (t₀)f x ≃β u ≃β τ ∈ E_fx. Therefore, t ≃β t₀ ≃β λf λx τ.
If t₀ starts with only one occurrence of λ, then t₀ = λf (y)t₁…tₙ for some variable y; thus (t)f x ≃β (t₀)f x ≃β (y)t₁…tₙx ≃β τ ∈ E_fx. Since τ ≃β (y)t₁…tₙx, we cannot have τ = x. Therefore, τ = (f a)uv with a ∈ 𝒜 and u, v ∈ E_fx. Now, we have (y)t₁…tₙx ≃β (f)auv and therefore y = f, n = 2, t₁ ≃β a, t₂ ≃β u and v = x. Thus, t ≃β t₀ ≃β λf (f)au with u ∈ E_fx. But x is free in u ∈ E_fx, and therefore is also free in t, which is a contradiction.
If t₀ does not start with λ, then t₀ = (y)t₁…tₙ, so we have: (t)f x ≃β (t₀)f x ≃β (y)t₁…tₙf x ≃β τ ∈ E_fx. Thus τ ≠ x, so that τ = (f a)uv with a ∈ 𝒜 and u, v ∈ E_fx. Therefore y = f and it follows that f is free in t₀; thus, f is also free in t (because t ≃β t₀), which is a contradiction.
Q.E.D.
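The elements λf λx τ with τ ∈ E_fx behave as fold functions over binary trees: f is applied at each node (to the label and the two folded subtrees) and x replaces each leaf. A Python sketch of this encoding (an added illustration; the names are chosen here):

```python
# Trees of type BT[A]: λf λx τ, with τ = x (a leaf) or τ = ((f a) t) u (a node).
leaf = lambda f: lambda x: x

def node(a, left, right):
    """Build λf λx ((f a) τ_left) τ_right from two encoded subtrees."""
    return lambda f: lambda x: f(a)(left(f)(x))(right(f)(x))

def decode(t):
    """Read the tree back as nested tuples: leaf -> None, node -> (a, l, r)."""
    return t(lambda a: lambda l: lambda r: (a, l, r))(None)
```

For instance, decode(node(1, leaf, node(2, leaf, leaf))) returns (1, None, (2, None, None)).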
5. Positive second order quantifiers

We define formulas with positive (resp. negative) second order quantifiers, also called ∀⁺ formulas (resp. ∀⁻ formulas), by the following rules:
Every type variable is a ∀⁺ and ∀⁻ formula.
If A is a ∀⁺ formula, then ∀X A is also a ∀⁺ formula.
If A is ∀⁻ (resp. ∀⁺) and B is ∀⁺ (resp. ∀⁻), then A → B is ∀⁺ (resp. ∀⁻).

Remark. Every quantifier free formula is ∀⁺ and ∀⁻. There is no closed ∀⁻ formula.
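The three rules above are directly executable as a pair of mutually recursive predicates. A small Python sketch (an added illustration; the tuple representation of types is chosen here):

```python
# Types: ("var", X) | ("arrow", A, B) | ("forall", X, A).
def is_pos(a):
    """a is a ∀+ formula."""
    if a[0] == "var":
        return True
    if a[0] == "arrow":
        return is_neg(a[1]) and is_pos(a[2])
    if a[0] == "forall":
        return is_pos(a[2])
    raise ValueError(a)

def is_neg(a):
    """a is a ∀− formula; note there is no rule producing ∀X A as ∀−."""
    if a[0] == "var":
        return True
    if a[0] == "arrow":
        return is_pos(a[1]) and is_neg(a[2])
    return False
```

With X = ("var", "X"), the type Int = ∀X((X → X) → (X → X)) is recognized as ∀⁺, while (∀X X) → Y is not, since ∀X X is not ∀⁻.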
We shall now prove the following:

Theorem 8.18. If A is a closed ∀⁺ formula and t ∈ |A|, then t is β-equivalent to a normal closed λ-term.

Corollary 8.19. Every closed ∀⁺ formula which is provable in system F is a data type.

Let A be such a formula. By theorem 8.18, every term in |A| is β-equivalent to a closed term; so we only need to prove that |A| ≠ ∅. But, since A is provable in system F, there is a λ-term t such that ⊢ t : A. By the adequacy lemma 8.13, we deduce that t ∈ |A|.
Q.E.D.
In order to prove theorem 8.18, we need to generalize the notion of "value of a formula", defined page 153.
A truth value set is, by definition, a non-empty set 𝒱 of saturated subsets of Λ which is closed under → and arbitrary intersection. In other words:
• 𝒱 ≠ ∅; 𝒳 ∈ 𝒱 ⇒ 𝒳 is a saturated subset of Λ;
• the intersection of any non-empty subset of 𝒱 is in 𝒱;
• 𝒳, 𝒴 ∈ 𝒱 ⇒ (𝒳 → 𝒴) ∈ 𝒱.
For example, the set 𝒱₀ of all saturated subsets of Λ is a truth value set; other trivial examples are the two-element set {∅, Λ} and the singleton {Λ}.
A 𝒱-interpretation I is, by definition, a mapping X ↦ X_I of the set of type variables into 𝒱. Let I be a 𝒱-interpretation, X a type variable and 𝒳 ∈ 𝒱. We define a 𝒱-interpretation J = I[X ← 𝒳] by taking Y_J = Y_I for every type variable Y ≠ X, and X_J = 𝒳.
For every type A, the value |A|^𝒱_I of A in a 𝒱-interpretation I is an element of 𝒱 defined as follows, by induction on A:
• if A is a type variable, then |A|^𝒱_I is given with I;
• |A → B|^𝒱_I = (|A|^𝒱_I → |B|^𝒱_I), in other words: for every term t, t ∈ |A → B|^𝒱_I if and only if tu ∈ |B|^𝒱_I for every u ∈ |A|^𝒱_I;
• |∀X A|^𝒱_I = ⋂{|A|^𝒱_{I[X←𝒳]} ; 𝒳 ∈ 𝒱}, in other words: for every term t, t ∈ |∀X A|^𝒱_I if and only if t ∈ |A|^𝒱_{I[X←𝒳]} for every 𝒳 ∈ 𝒱.
Remarks. i) The value |A|_I of a formula, defined page 153, is the particular case where the truth value set is the set 𝒱₀ of all saturated subsets of Λ.
ii) The value |A|^𝒱_I does not really depend on the interpretation I, but only on the restriction of I to the set of free variables of A. In particular, if A is a closed formula, this value does not depend on I at all and will be denoted by |A|^𝒱.
Lemma 8.20. Let 𝒱 ⊂ 𝒲 be two truth value sets and I a 𝒱-interpretation. If A is a ∀⁺ (resp. a ∀⁻) formula, then |A|^𝒲_I ⊂ |A|^𝒱_I (resp. |A|^𝒱_I ⊂ |A|^𝒲_I).

Proof by induction on the length of the formula A. The result is trivial if A is a variable, because we have |A|^𝒱_I = |A|^𝒲_I.
If A ≡ B → C and A is ∀⁺, then B is ∀⁻ and C is ∀⁺. By induction hypothesis, we get |B|^𝒱_I ⊂ |B|^𝒲_I and |C|^𝒲_I ⊂ |C|^𝒱_I. It follows that |B → C|^𝒲_I ⊂ |B → C|^𝒱_I, which is the result.
If A ≡ B → C and A is ∀⁻, the proof is the same.
If A ≡ ∀X B and B is ∀⁺, then |A|^𝒱_I = ⋂{|B|^𝒱_{I[X←𝒳]} ; 𝒳 ∈ 𝒱} and |A|^𝒲_I = ⋂{|B|^𝒲_{I[X←𝒳]} ; 𝒳 ∈ 𝒲}. By induction hypothesis, we have |B|^𝒲_{I[X←𝒳]} ⊂ |B|^𝒱_{I[X←𝒳]}; now, since 𝒱 ⊂ 𝒲, it follows that |A|^𝒲_I ⊂ |A|^𝒱_I.
Q.E.D.
Corollary 8.21. If A is a closed ∀⁺ formula, then |A| ⊂ |A|^𝒱 for every truth value set 𝒱.

Immediate from lemma 8.20, since |A| = |A|^𝒱₀ and 𝒱 ⊂ 𝒱₀ for every truth value set 𝒱.
Q.E.D.
Consider now the pair (N₀, N) of subsets of Λ defined page 47: N is the set of all terms which are normalizable by leftmost β-reduction; N₀ = {(x)t₁…tₙ; n ∈ ℕ, t₁, …, tₙ ∈ N}. We put 𝒱 = {𝒳 ⊂ Λ; 𝒳 is saturated, N₀ ⊂ 𝒳 ⊂ N}.

Lemma 8.22. 𝒱 is a truth value set.

𝒱 is obviously closed under arbitrary (non-void) intersection. Now, if 𝒳, 𝒴 ∈ 𝒱, we have N₀ ⊂ 𝒳, 𝒴 ⊂ N and therefore: (N → N₀) ⊂ (𝒳 → 𝒴) ⊂ (N₀ → N). But we have proved, page 47, that (N₀, N) is an adapted pair, and therefore that N₀ ⊂ (N → N₀) and (N₀ → N) ⊂ N. It follows that N₀ ⊂ (𝒳 → 𝒴) ⊂ N.
Q.E.D.
We now choose a fixed λ-variable x; let Λx ⊂ Λ be the set of λ-terms whose only free variable is x (every closed term is in Λx). We put:
N^x = {t ∈ Λ; (∃u ∈ Λx) t reduces to u by leftmost β-reduction};
N₀^x = {(x)t₁…tₙ; n ∈ ℕ, t₁, …, tₙ ∈ N^x}.

Lemma 8.23. i) N₀^x ⊂ N^x; ii) N₀^x ⊂ (N^x → N₀^x); iii) (N₀^x → N^x) ⊂ N^x.

Remark. This lemma means that the pair (N₀^x, N^x) is an adapted pair, as defined page 46.
i) and ii) follow immediately from the definitions of N^x and N₀^x.
iii) Let t ∈ (N₀^x → N^x); since x ∈ N₀^x, we have tx ∈ N^x, so that tx reduces to some u ∈ Λx by leftmost reduction. If this reduction takes place in t, then u = vx and t reduces to v ∈ Λx by leftmost reduction. Otherwise, t reduces to λy t′ and t′[x/y] reduces to u by leftmost reduction. Thus, there exists a λ-term u′, whose only free variables are x, y, such that t′ reduces to u′ by leftmost reduction. Therefore, by leftmost reduction, t reduces to λy t′, then to λy u′, and x is the only free variable of λy u′.
Q.E.D.
Now, we define 𝒱x = {𝒳; 𝒳 is a saturated subset of Λ, N₀^x ⊂ 𝒳 ⊂ N^x}.

Lemma 8.24. 𝒱x is a truth value set.

We have only to check that (𝒳 → 𝒴) ∈ 𝒱x if 𝒳, 𝒴 ∈ 𝒱x. By definition of 𝒱x, we have N₀^x ⊂ 𝒳, 𝒴 ⊂ N^x and therefore: (N^x → N₀^x) ⊂ (𝒳 → 𝒴) ⊂ (N₀^x → N^x). Using lemma 8.23, we get N₀^x ⊂ (𝒳 → 𝒴) ⊂ N^x.
Q.E.D.
We can now prove theorem 8.18. Let A be a closed ∀⁺ formula and t ∈ |A|.
By corollary 8.21 and lemma 8.22, we have |A| ⊂ |A|^𝒱 ⊂ N. It follows that t ∈ N, which means that t is normalizable.
Now, choose a λ-variable x which is not free in t. By corollary 8.21 and lemma 8.24, we get |A| ⊂ |A|^𝒱x ⊂ N^x. It follows that t ∈ N^x, which means that t reduces, by leftmost reduction, to a term with the only free variable x. Since x is not free in t, this reduction gives a closed term.
Q.E.D.
The next theorem gives another interesting truth value set.
Theorem 8.25. Let C = {t ∈ Λ; t β-reduces to some closed term t₀}. Then {C} is a (one-element) truth value set.

Remark. By the Church-Rosser theorem 1.24, C is also the set of λ-terms which are β-equivalent to closed terms.
Lemma 8.26. Let ω = (λz zz)λz zz and t ∈ Λ. A step of β-reduction in t[ω/x] gives a term t′[ω/x], where t′ = t or t′ is obtained by a step of β-reduction in t.

Proof by induction on the length of t. The result is immediate if t is a variable or if t = λx u. If t = uv, then a redex in t[ω/x] = u[ω/x]v[ω/x] is either a redex in u[ω/x], or a redex in v[ω/x], or t[ω/x] itself. In the first two cases, we simply apply the induction hypothesis. In the last case, u[ω/x] begins with a λ and, therefore, u = λy u′ and t = (λy u′)v. The redex we consider is (λy u′[ω/x])v[ω/x] and its reduction gives u′[ω/x][v[ω/x]/y] = t′[ω/x] with t′ = u′[v/y].
Q.E.D.
Lemma 8.27. Let t ∈ Λ; if there is a closed term u such that t[ω/x] β-reduces to u, then there is a term u′, with no free variable other than x, such that t β-reduces to u′.

Proof by induction on the length of the given β-reduction from t[ω/x] to u. If this length is 0, then t[ω/x] is closed and t has no free variable other than x. Otherwise, by lemma 8.26, after one step of β-reduction, we get t′[ω/x], where t β-reduces to t′. By the induction hypothesis, t′ β-reduces to some u′ whose only free variable is x and, therefore, t β-reduces to u′.
Q.E.D.
We can now prove theorem 8.25. It is clear that C is a saturated set; thus, we only have to show C = (C → C), and in fact only (C → C) ⊂ C, because the reverse inclusion is trivial.
Let t ∈ (C → C); since ω is closed, we have ω ∈ C, so tω ∈ C and, therefore, tω β-reduces to some closed term u. If this β-reduction takes place entirely in t, we have t β-reducing to t′ with t′ω = u; thus t′ is closed and t ∈ C. Otherwise, t β-reduces to λx t′ and t′[ω/x] β-reduces to u. By lemma 8.27, t′ β-reduces to a term u′ whose only free variable is x and, therefore, t β-reduces to λx u′. Since λx u′ is closed, we get t ∈ C.
Q.E.D.
This gives another proof of the second part of theorem 8.18: if A is a closed ∀⁺ formula, then by corollary 8.21 with 𝒱 = {C}, we obtain |A| ⊂ |A|^𝒱 = C. This shows that every term in |A| is β-equivalent to a closed term.
Consider a formula F and a type variable X; for each free occurrence of X in F, we define its sign (positive or negative), inductively on the length of F:
• if F ≡ X, the occurrence of X is positive;
• if F ≡ (G → H), the positive (resp. negative) free occurrences of X in F are the positive (resp. negative) free occurrences of X in H and the negative (resp. positive) free occurrences of X in G;
• if F ≡ ∀Y G, with Y ≠ X, the positive (resp. negative) free occurrences of X in F are the positive (resp. negative) free occurrences of X in G.

Theorem 8.28. Suppose that ∀X₁ … ∀X_k F is a closed ∀⁺ formula which is provable in system F, and that every free occurrence of X₁, …, X_k in F is positive. If A₁, …, A_k are data types, then F[A₁/X₁, …, A_k/X_k] is a data type.

Remark. In fact, we may suppose only that |A₁|, …, |A_k| ⊂ C; the hypothesis |A_i| ≠ ∅ is useless.
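The inductive definition of signs can be turned into a short recursive function. A Python sketch (an added illustration; the tuple representation is chosen here) that collects the signs of the free occurrences of a variable:

```python
# Types: ("var", X) | ("arrow", G, H) | ("forall", Y, G).
def signs(f, x, sign=+1):
    """Signs (+1 positive, -1 negative) of the free occurrences of x in f."""
    if f[0] == "var":
        return [sign] if f[1] == x else []
    if f[0] == "arrow":
        # occurrences in G flip their sign, occurrences in H keep it
        return signs(f[1], x, -sign) + signs(f[2], x, sign)
    if f[0] == "forall":
        if f[1] == x:          # x is bound below: no free occurrences
            return []
        return signs(f[2], x, sign)
    raise ValueError(f)

def only_positive(f, x):
    """True when every free occurrence of x in f is positive (theorem 8.28)."""
    return all(s == +1 for s in signs(f, x))
```

In (X → Y) → X, both free occurrences of X are positive (the left one sits in a doubly negative position), while in X → Y the occurrence of X is negative.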
Lemma 8.29. Let X₁, …, X_k be distinct type variables, and I, J be two 𝒱-interpretations such that (X_i)_I ⊃ (X_i)_J for 1 ≤ i ≤ k and X_I = X_J for every type variable X ≠ X₁, …, X_k. If X₁, …, X_k have only positive (resp. negative) free occurrences in a formula F, then |F|^𝒱_I ⊃ |F|^𝒱_J (resp. |F|^𝒱_I ⊂ |F|^𝒱_J).

Easy proof, by induction on the length of F.
Q.E.D.
Proof of theorem 8.28. By hypothesis, we have ⊢_F t : ∀X₁ … ∀X_k F for some t ∈ Λ. By the adequacy lemma 8.13, we deduce that t ∈ |∀X₁ … ∀X_k F| and, therefore, t ∈ |F[A₁/X₁, …, A_k/X_k]|. This shows that |F[A₁/X₁, …, A_k/X_k]| ≠ ∅.
In lemma 8.20, we take 𝒱 = {C} and 𝒲 = 𝒱₀ (the set of all saturated subsets of Λ); I is the single 𝒱-interpretation, which is defined by X_I = C for every type variable X. We apply this lemma to the ∀⁺ formula F and we obtain: |F|_I = |F|^𝒲_I ⊂ |F|^𝒱_I = C.
We define an interpretation J as follows: (X_i)_J = |A_i| for 1 ≤ i ≤ k and X_J = C for any type variable X ≠ X₁, …, X_k. Now, one hypothesis of the theorem is that |A₁|, …, |A_k| ⊂ C. Moreover, the variables X₁, …, X_k have only positive occurrences in the formula F. Therefore, the hypotheses of lemma 8.29 are fulfilled (the truth value set being 𝒲 = 𝒱₀) and it follows that |F|_J ⊂ |F|_I; thus |F|_J ⊂ C. Now, |F|_J is the same as |F[A₁/X₁, …, A_k/X_k]|, and therefore we obtain the desired result: |F[A₁/X₁, …, A_k/X_k]| ⊂ C.
Q.E.D.
References for chapter 8 [Boh85], [For83], [Gia88], [Gir71], [Gir72], [Gir86]. (The references are in the bibliography at the end of the book).
Chapter 9
Second order functional arithmetic

1. Second order predicate calculus

In this chapter, we will deal with the classical second order predicate calculus, with a syntax using the following symbols:
the logical symbols → and ∀ (and no other ones);
individual variables: x, y, … (also called first order variables);
n-ary relation variables (n = 0, 1, …): X, Y, … (also called second order variables);
n-ary function symbols (n = 0, 1, …) (on individuals);
n-ary relation symbols (n = 0, 1, …) (on individuals).
Each relation variable, and each function or relation symbol, has a fixed arity n ≥ 0. Function symbols of arity 0 are called constant symbols. Relation variables of arity 0 are also called propositional variables. It is assumed that there are infinitely many individual variables and, for each n ≥ 0, infinitely many n-ary relation variables.
The function and relation symbols determine what we call a language; the other symbols are common to all languages. Let L be a language. The (individual) terms of L are built up in the usual way, that is, by the following rules:
each individual variable, and each constant symbol, is a term;
whenever f is an n-ary function symbol and t₁, …, tₙ are terms, f(t₁, …, tₙ) is a term.
The atomic formulas are the expressions of the form A(t₁, …, t_k), where A is a k-ary relation variable or symbol and t₁, …, t_k are terms.
The formulas are the expressions obtained by the following rules:
every atomic formula is a formula;
whenever F, G are formulas, (F → G) is a formula;
whenever F is a formula, x is an individual variable and X is a relation variable, ∀x F and ∀X F are formulas.

Definitions and notations

A closed term of L is a term which contains no variable. A closed formula is a formula in which no variable occurs free. The closure of a formula F is the formula obtained by universal quantification of all the free variables of F. A universal formula consists of a (finite) sequence of universal quantifiers followed by a quantifier free formula.
The formula F₁ → (F₂ → (… → (Fₙ → G) …)) will also be denoted by F₁, F₂, …, Fₙ → G.
Let X be a 0-ary relation variable, ξ any individual or relation variable which is ≠ X, and F, G arbitrary formulas in which X does not occur free.
The formula ∀X X is denoted by ⊥ (read "false").
The formula F → ⊥ is denoted by ¬F (read "not F").
The formula ∀X[(F → X), (G → X) → X] is denoted by F ∨ G (read "F or G").
The formula ∀X[(F, G → X) → X] is denoted by F ∧ G (read "F and G").
The formula (F → G) ∧ (G → F) is denoted by F ↔ G (read "F is equivalent to G").
The formula ∀X[∀ξ(F → X) → X] is denoted by ∃ξ F (read "there exists a ξ such that F").
α-equivalent formulas and substitution

Let F be a formula, ξ a variable, and η the same sort of symbol as ξ (if ξ is an individual variable, then so is η; if ξ is an n-ary relation variable, then η is an n-ary relation variable or symbol); we define the formula F⟨η/ξ⟩ by replacing in F all free occurrences of ξ by η.
We now define, by induction on F, the α-equivalence of two formulas F, G, denoted by F ≡ G:
• if F is an atomic formula, then F ≡ G if and only if F = G;
• if F = A → B, then F ≡ G if and only if G = A′ → B′, where A ≡ A′ and B ≡ B′;
• if F = ∀ξ A, ξ being an individual or relation variable, then F ≡ G if and only if G = ∀η B, where η is the same sort of variable as ξ, and A⟨ζ/ξ⟩ ≡ B⟨ζ/η⟩ for all variables ζ of the same sort as ξ but a finite number.
From now on, we shall identify α-equivalent formulas.
If V is a finite set of variables (of any kind) and A is a formula, then there exists a formula A′ ≡ A such that no variable of V is bound in A′. A′ has the same length as A (the only difference between A and A′ is the name of the bound variables).
Let A be a formula, x₁, …, x_k individual variables, and t₁, …, t_k terms. The formula A[t₁/x₁, …, t_k/x_k] is defined by choosing a representative of A such that none of its bound variables occur in the t_i's, and then by replacing in it each free occurrence of x_i by t_i (1 ≤ i ≤ k).
Consider two formulas A and F, an n-ary relation variable X, and n individual variables x₁, …, xₙ. We define the substitution of F for X(x₁, …, xₙ) in A: this produces a formula, denoted by A[F/X x₁ … xₙ]; the definition is by induction on A and requires a representative of A such that its bound variables do not occur in F:
• if A is an atomic formula of the form X(t₁, …, tₙ), then A[F/X x₁ … xₙ] is the formula F[t₁/x₁, …, tₙ/xₙ];
• if A is atomic and does not start with X, then A[F/X x₁ … xₙ] = A;
• if A = B → C, then A[F/X x₁ … xₙ] = B[F/X x₁ … xₙ] → C[F/X x₁ … xₙ];
• if A = ∀ξ B, where ξ is an individual variable, or a relation variable different from X, then A[F/X x₁ … xₙ] = ∀ξ B[F/X x₁ … xₙ];
• if A = ∀X B, then A[F/X x₁ … xₙ] = A.
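The two substitutions just defined can be prototyped directly, under the text's standing assumption that a representative of A has been chosen whose bound variables occur neither in F nor in the t_i. A Python sketch (an added illustration; the tuple representation and all names are chosen here):

```python
# Formulas: ("atom", R, [t1, ...]) | ("arrow", A, B) | ("forall", kind, v, A),
# where kind is "ind" (individual) or "rel" (relation); terms are strings.
def subst_terms(f, sigma):
    """First order substitution f[t1/x1, ...], with sigma = {x1: t1, ...}."""
    if f[0] == "atom":
        return ("atom", f[1], [sigma.get(t, t) for t in f[2]])
    if f[0] == "arrow":
        return ("arrow", subst_terms(f[1], sigma), subst_terms(f[2], sigma))
    kind, v, body = f[1], f[2], f[3]
    # an individual quantifier shadows its variable
    inner = sigma if kind == "rel" else {x: t for x, t in sigma.items() if x != v}
    return ("forall", kind, v, subst_terms(body, inner))

def subst_rel(a, f, x_rel, xs):
    """Second order substitution a[f / X x1 ... xn]."""
    if a[0] == "atom":
        if a[1] == x_rel:                        # atom X(t1, ..., tn)
            return subst_terms(f, dict(zip(xs, a[2])))
        return a
    if a[0] == "arrow":
        return ("arrow", subst_rel(a[1], f, x_rel, xs),
                subst_rel(a[2], f, x_rel, xs))
    if a[1] == "rel" and a[2] == x_rel:          # ∀X B: X is bound, stop
        return a
    return ("forall", a[1], a[2], subst_rel(a[3], f, x_rel, xs))
```

For example, substituting F = R(x, c) for X(x) in ∀y X(y) yields ∀y R(y, c), following the first clause of the definition.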
Models

Let us briefly recall some classical definitions of model theory. A second order model for the language L is a structure M consisting of:
• a domain |M| (the set of individuals, assumed non-empty);
• for each integer n ≥ 0, a subset M_n of P(|M|^n), which is the range for the values of the n-ary relation variables. If n = 0, we assume that M_0 = P(|M|^0) = {0, 1};
• an interpretation, in M, of the function and relation symbols of the language L: namely, a mapping which associates with each n-ary function symbol f of L an n-ary function f^M : |M|^n → |M|, and with each n-ary relation symbol S of L an n-ary relation on |M|, that is a subset
S^M ⊆ |M|^n. In particular, it associates with each constant symbol c an element c^M ∈ |M|.
We will say that an n-ary relation R on |M| (in other words, a subset of |M|^n) is part of the model M whenever R ∈ M_n. The elements of M_1 are called the classes of M. The model M is called a full model if, for each n ≥ 0, M_n = P(|M|^n) (that is to say: if every relation on |M| is part of the model M).
Let L_M denote the language obtained by adding to L every element of |M| as a constant symbol and, for each n ≥ 0, every element of P(|M|^n) as an n-ary relation symbol (of course, we suppose that no symbol of L is an element of |M| or of P(|M|^n)). The terms and formulas of L_M are respectively called terms and formulas of L with parameters in M. There is an obvious way of extending the model M to a model for the language L_M: the new symbols of L_M are their own interpretation.
With each closed term t of L with parameters in M, we associate its value t^M ∈ |M|, which is defined by induction on t:
if t is a constant symbol of L_M, then t^M is already defined;
if t = f(t1, ..., tn), then t^M = f^M(t1^M, ..., tn^M).
Let F be a closed formula of L with parameters in M. We define, by induction on F, the expression "M satisfies F", which is denoted by M ⊨ F:
if F is an atomic formula, say R(t1, ..., tn), where R is an n-ary relation symbol of L_M and t1, ..., tn are closed terms of L_M, then M ⊨ F if and only if (t1^M, ..., tn^M) ∈ R^M;
if F = G → H, then M ⊨ F if and only if M ⊨ G ⇒ M ⊨ H;
if F = ∀x G, x being the only free variable in G, then M ⊨ F if and only if M ⊨ G[a/x] for every a ∈ |M|;
if F = ∀X G, where the n-ary relation variable X is the only free variable in G, then M ⊨ F if and only if M ⊨ G⟨R/X⟩ for every R ∈ M_n.
Let A be a system of axioms of the language L (that is to say, a set of closed formulas, also called a theory). By a model of A, we mean a model which satisfies all the formulas of A. A closed formula F is said to be a consequence of A (which is denoted by A ⊢ F) if every model of A satisfies F. A closed formula F is said to be valid (we write ⊢ F) if it is a consequence of ∅, in other words if it is satisfied in every model.
Clearly, for every 0-ary relation variable X, no model satisfies the formula ∀X X. This is a justification for the definition of ⊥.
Proposition 9.1. Let A, F be two formulas with parameters in M, such that the only free variable in A is an n-ary relation variable X, and all the free variables
in F are among the individual variables x1, ..., xn. Let:
Φ = {(a1, ..., an) ∈ |M|^n ; M ⊨ F[a1/x1, ..., an/xn]}
(which is an n-ary relation on |M|). Then M ⊨ A[F/X x1 ... xn] ⇔ M ⊨ A⟨Φ/X⟩.
The proof is by induction on A. If A is atomic and starts with X, then A = X t1 ... tn, so:
M ⊨ A[F/X x1 ... xn] ⇔ M ⊨ F[t1^M/x1, ..., tn^M/xn]
⇔ M ⊨ Φ(t1^M, ..., tn^M) ⇔ M ⊨ A⟨Φ/X⟩.
If A = ∀x B, where x is an individual variable, then:
M ⊨ ∀x B[F/X x1 ... xn] ⇔ (∀a ∈ |M|) M ⊨ B[F/X x1 ... xn][a/x]
⇔ (∀a ∈ |M|) M ⊨ B[a/x][F/X x1 ... xn]
⇔ (∀a ∈ |M|) M ⊨ B[a/x]⟨Φ/X⟩ (by induction hypothesis)
⇔ (∀a ∈ |M|) M ⊨ B⟨Φ/X⟩[a/x] ⇔ M ⊨ ∀x B⟨Φ/X⟩.
Same proof when A = ∀Y B, for some relation variable Y ≠ X. The other cases of the inductive proof are trivial.
Q.E.D.
The comprehension axiom

This is an axiom scheme, denoted by CA; it consists of the closure of all formulas of the following form:
(CA) ∀X A → A[F/X x1 ... xn]
where A and F are arbitrary formulas, X is an n-ary relation variable (n ≥ 0), and x1, ..., xn are n individual variables.
Proposition 9.2. Every full model satisfies the comprehension axiom.
Let M be a full model, X an n-ary relation variable, x1, ..., xn individual variables, A a formula with parameters in M in which X is the only free variable, and F a formula with parameters in M in which all the free variables are among x1, ..., xn. Suppose M ⊨ ∀X A, and let:
Φ = {(a1, ..., an) ∈ |M|^n ; M ⊨ F[a1/x1, ..., an/xn]}.
We have Φ ∈ P(|M|^n) and M is full: thus Φ ∈ M_n. Since M ⊨ ∀X A, we have M ⊨ A⟨Φ/X⟩; therefore, by proposition 9.1, M ⊨ A[F/X x1 ... xn].
Q.E.D.
Given a language L, the second order predicate calculus on L is the theory consisting of all the axioms of the comprehension scheme. Thus a model of the second order predicate calculus on the language L is a second order model M for L such that M ⊨ CA.
Proposition 9.3. The comprehension axiom is equivalent to the following axiom scheme:
(CA′) ∃Y ∀x1 ... ∀xn [Y(x1, ..., xn) ↔ F]
where Y is an n-ary relation variable (n ≥ 0) and F an arbitrary formula. (In fact, as above, we consider the closure of the formulas of CA′.)
Clearly, we have ⊢ ∀X A, where A is the formula:
∃Y ∀x1 ... ∀xn [Y(x1, ..., xn) ↔ X(x1, ..., xn)].
Therefore CA ⊢ A[F/X x1 ... xn], that is to say CA ⊢ CA′.
Conversely, consider any model M of CA′. Suppose that M ⊨ ∀X A, where X is an n-ary relation variable and A a formula with parameters in M whose only free variable is X. Let F be a formula with parameters in M with free variables among x1, ..., xn. Let:
Φ = {(a1, ..., an) ∈ |M|^n ; M ⊨ F[a1/x1, ..., an/xn]}.
We have M ⊨ ∃Y ∀x1 ... ∀xn [Y(x1, ..., xn) ↔ F] by hypothesis. Hence M ⊨ ∀x1 ... ∀xn [Ψ(x1, ..., xn) ↔ F] for some Ψ ∈ M_n. Therefore:
M ⊨ ∀x1 ... ∀xn [Ψ(x1, ..., xn) ↔ Φ(x1, ..., xn)].
It follows that Φ = Ψ, so Φ ∈ M_n. Since M ⊨ ∀X A, we have M ⊨ A⟨Φ/X⟩; thus, by proposition 9.1, M ⊨ A[F/X x1 ... xn].
Q.E.D.
Equational formulas

We consider a second order language L. The formula ∀X[X(x) → X(y)] will be denoted by x = y. Obviously, we have ⊢ x = x and ⊢ x = y, y = z → x = z. Moreover, CA ⊢ x = y → y = x (apply the comprehension scheme so as to replace X(z) in X(x) → X(y) by the formula z = x; this yields (x = x) → (y = x)). We also have, clearly, for every formula F(x): CA, x = y ⊢ F(x) → F(y).
It follows that, in every model M of the second order predicate calculus, the formula x = y defines an equivalence relation which is compatible with the whole structure of the model. By taking the quotient, we thus obtain a model M′ in which the interpretation of the formula x = y is the identity relation. Such a model will be called an identity model. Now it is clear that the models M and M′ satisfy exactly the same formulas of L. This allows us, when we deal with models of CA, to consider only identity models; from now on, that is what we will do.
By an equation (or an equational formula), we mean the closure of any formula of the form t = u (where t, u are terms). A set of equations will also be called a system of equational axioms. A particular case of an equation t = u is, by definition, a formula of one of the two forms:
t[v1/x1, ..., vk/xk] = u[v1/x1, ..., vk/xk] or
u[v1/x1, ..., vk/xk] = t[v1/x1, ..., vk/xk], where v1, ..., vk are terms.
Proposition 9.4. Let E be a system of equational axioms in some language L, and u, v two terms of L. Then CA + E ⊢ u = v if and only if the expression ⊢_E u = v can be obtained by means of the following rules:
i) if u = v is a particular case of an axiom of E, then ⊢_E u = v;
ii) for all terms u, v, w of L, we have ⊢_E u = u; if ⊢_E u = v and ⊢_E v = w, then ⊢_E u = w;
iii) if f is an n-ary function symbol of L, and if ⊢_E u1 = v1, ..., ⊢_E un = vn, then ⊢_E f(u1, ..., un) = f(v1, ..., vn).
Clearly, if one obtains ⊢_E u = v by these rules, then every model of CA + E satisfies u = v. In order to prove the converse, we first show that ⊢_E u = v ⇒ ⊢_E v = u, by induction on the length of the derivation of ⊢_E u = v by rules (i), (ii), (iii). Consider the last rule used. If it is rule (i), then the result is clear (if u = v is a particular case of an axiom of E, then so is v = u). If it is rule (ii), then ⊢_E u = w and ⊢_E w = v have already been deduced; thus, by induction hypothesis, ⊢_E w = u and ⊢_E v = w; therefore ⊢_E v = u. The proof is similar in the case of rule (iii).
Thus the relation ⊢_E u = v defined by these rules is an equivalence relation on the set T of individual terms of L: indeed, it is reflexive and transitive by rule (ii), and it has just been proved to be symmetric. By rule (iii), it is compatible with the natural interpretation of the function symbols of L in T. It follows that the quotient of T by this equivalence relation is a (first order) model M for the language L. By rule (i), this model satisfies E. By taking the full model over M, we obtain a model of CA + E. Now let u, v be two terms of L such that CA + E ⊢ u = v; clearly, the model just considered satisfies u = v, which means that ⊢_E u = v.
Q.E.D.
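For a finite system E of *ground* (variable-free) equations, rules (i)-(iii) amount to what is nowadays called congruence closure, which can be computed with union-find over the relevant subterms. The sketch below (an illustration under that restriction, not the book's general setting, where axioms have particular cases) represents a term as a nested tuple `("f", t1, ..., tn)`; reflexivity, symmetry, and transitivity come for free from the union-find structure, and rule (iii) is applied to a fixpoint.

```python
def subterms(t, acc):
    acc.add(t)
    for s in t[1:]:
        subterms(s, acc)

def congruence_closure(E, goal_terms):
    """Return a `find` function identifying terms equal under rules (i)-(iii)."""
    terms = set()
    for (u, v) in E:
        subterms(u, terms); subterms(v, terms)
    for t in goal_terms:
        subterms(t, terms)
    parent = {t: t for t in terms}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]    # path compression
            t = parent[t]
        return t
    def union(a, b):
        parent[find(a)] = find(b)
    for (u, v) in E:                         # rule (i): the axioms themselves
        union(u, v)
    changed = True
    while changed:                           # rule (iii): congruence, to fixpoint
        changed = False
        ts = list(terms)
        for a in ts:
            for b in ts:
                if (a[0] == b[0] and len(a) == len(b) and find(a) != find(b)
                        and all(find(x) == find(y) for x, y in zip(a[1:], b[1:]))):
                    union(a, b)
                    changed = True
    return find

def derivable(E, u, v):
    """Decide |-_E u = v for ground E, u, v."""
    find = congruence_closure(E, [u, v])
    return find(u) == find(v)
```

For instance, from the single axiom a = b one derives f(a) = f(b) by rule (iii), but not f(a) = g(a).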
Notice that the system of axioms CA + E can never be contradictory: indeed, the full model over a one-element set (with the unique possible interpretation of the function symbols) clearly satisfies CA + E.
Deduction rules for the second order predicate calculus

Consider a second order language L, and a system E of equational axioms of L. Let A be a formula, and 𝒜 = {A1, ..., Ak} a finite set of formulas of L. By the completeness theorem of predicate calculus (applied to the system of axioms
CA + E), A is a consequence of CA + E + 𝒜 if and only if the expression 𝒜 ⊢_E A can be obtained by means of the following "deduction rules":
D0. For every formula A and every finite set of formulas 𝒜: 𝒜, ¬¬A ⊢_E A.
D1. For every formula A and every finite set of formulas 𝒜: 𝒜, A ⊢_E A.
D2. If 𝒜, A ⊢_E B, then 𝒜 ⊢_E A → B.
D3. If 𝒜 ⊢_E A and 𝒜 ⊢_E A → B, then 𝒜 ⊢_E B.
D4. If 𝒜 ⊢_E ∀x A, then 𝒜 ⊢_E A[u/x] for every term u of L.
D5. If 𝒜 ⊢_E A and if the individual variable x does not occur free in 𝒜, then 𝒜 ⊢_E ∀x A.
D6. If 𝒜 ⊢_E ∀X A, where X is an n-ary relation variable, and if F is any formula of L, then 𝒜 ⊢_E A[F/X x1 ... xn].
D7. If 𝒜 ⊢_E A and if the n-ary relation variable X does not occur free in 𝒜, then 𝒜 ⊢_E ∀X A.
D8. Let A be a formula, x an individual variable, and u, v two terms of L such that u = v is a particular case of an axiom of E. If 𝒜 ⊢_E A[u/x], then 𝒜 ⊢_E A[v/x].
So the meaning of the expression 𝒜 ⊢_E A is: "A is a consequence of 𝒜 with the system of equational axioms E, in the classical second order predicate calculus". Similarly, we define the expression: "A is a consequence of 𝒜 with the system of equational axioms E, in the intuitionistic second order predicate calculus"; this will be denoted by 𝒜 ⊢ⁱ_E A. The definition uses rules D1 through D8 above, but not D0.
2. System FA2

We consider a second order language L, and a system E of equational axioms of L. We are going to describe a system of typed λ-calculus, called second order functional arithmetic (FA2), in which the types are the formulas of L (modulo α-equivalence). When writing the typed terms of this system, we will use the same symbols to denote the variables of the λ-calculus and the individual variables of the language L.
A context Γ is a set of the form x1 : A1, x2 : A2, ..., xk : Ak, where x1, x2, ..., xk are distinct variables of the λ-calculus and A1, A2, ..., Ak are formulas of L. We will say that an individual variable x (or a relation variable X) of L is not free in Γ if it does not occur free in A1, A2, ..., Ak.
The typing rules are the following (t stands for a term of the λ-calculus):
T1. Γ, x : A ⊢_E x : A whenever x is a variable of the λ-calculus which is not declared in Γ.
T2. If Γ, x : A ⊢_E t : B, then Γ ⊢_E λx t : A → B.
T3. If Γ ⊢_E t : A and Γ ⊢_E u : A → B, then Γ ⊢_E (u)t : B.
T4. If Γ ⊢_E t : ∀x A, and if u is a term of L, then Γ ⊢_E t : A[u/x].
T5. If Γ ⊢_E t : A, and if the individual variable x is not free in Γ, then Γ ⊢_E t : ∀x A.
T6. If Γ ⊢_E t : ∀X A, where X is an n-ary relation variable, then Γ ⊢_E t : A[F/X x1 ... xn] for every formula F of L.
T7. If Γ ⊢_E t : A, and if the relation variable X is not free in Γ, then Γ ⊢_E t : ∀X A.
T8. Let u, v be two terms of L such that u = v is a particular case of an axiom of E, and A a formula of L. If Γ ⊢_E t : A[u/x], then Γ ⊢_E t : A[v/x].
Whenever we obtain the typing Γ ⊢_E t : A by means of these rules, we will say that "the λ-term t is of type A (or may be given type A) with the axioms of E, in the context Γ". Clearly, if Γ ⊢_E t : A, then all the free variables of t are declared in Γ. Thus all terms which are typable in the empty context are closed.
The following statement, which is a form of the so-called Curry-Howard correspondence, is an immediate consequence of the above definitions:
There exists a term which may be given type A with the equational system E in the context x1 : A1, x2 : A2, ..., xk : Ak if and only if A1, A2, ..., Ak ⊢ⁱ_E A.
Indeed, the constructions of typed terms by means of rules T1 through T8 correspond, in an obvious and canonical way, to the intuitionistic proofs with rules D1 through D8.
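As a small worked example (consistent with rules T1, T2, T7, though not taken from the book), here is a typing derivation, in the empty context, of the term λx λy x; it mirrors the intuitionistic proof of A → (B → A):

```latex
\begin{align*}
&x : X,\; y : Y \vdash_{\mathcal{E}} x : X
    && \text{(T1)}\\
&x : X \vdash_{\mathcal{E}} \lambda y\, x : Y \to X
    && \text{(T2)}\\
&\vdash_{\mathcal{E}} \lambda x \lambda y\, x : X \to (Y \to X)
    && \text{(T2)}\\
&\vdash_{\mathcal{E}} \lambda x \lambda y\, x : \forall Y\, (X \to (Y \to X))
    && \text{(T7, $Y$ not free in the empty context)}\\
&\vdash_{\mathcal{E}} \lambda x \lambda y\, x : \forall X \forall Y\, (X \to (Y \to X))
    && \text{(T7)}
\end{align*}
```

Note that the derivation uses no rule specific to the first order part (T4, T5, T8): this term is already typable in system F, in accordance with proposition 9.5 below.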
System F and the normalization theorem

The types of system F are, by definition (see chapter 8), the formulas built up with the logical symbols ∀, →, and the 0-ary relation variables X, Y, ... (propositional variables). These formulas therefore appear in every second order language. The typing rules of system F form a subsystem of the above rules: they are rules T1, T2, T3, and T6, T7 restricted to the case n = 0.
Proposition 9.5. Given a language L and a system E of equations of L, a λ-term t is typable with E if and only if it is typable in system F.
The condition is obviously sufficient, since the typing rules of system F form a subsystem of rules T1, ..., T8. To prove the converse, we associate with each formula A of L a formula A⁻ of system F, obtained by "forgetting in A the first order part". The definition of A⁻ is by induction on A:
if A is atomic, say A = X(t1, ..., tn) (X being an n-ary relation variable or symbol), then A⁻ = X (which is, here, a propositional variable);
if A = B → C, then A⁻ = B⁻ → C⁻;
if A = ∀x B (x being an individual variable), then A⁻ = B⁻;
if A = ∀X B (X being an n-ary relation variable), then A⁻ = ∀X B⁻ (X being, here, a propositional variable).
Now consider a derivation of a typing x1 : A1, ..., xk : Ak ⊢_E t : A, with the system of equations E. In this derivation, replace each formula F of L by F⁻. We thus obtain a derivation, in system F, of the typing:
x1 : A1⁻, x2 : A2⁻, ..., xk : Ak⁻ ⊢ t : A⁻.
Note that rules T4, T5 and T8 disappear after this transformation, since we have (∀x A)⁻ = A⁻ and A[u/x]⁻ = A[v/x]⁻.
Q.E.D.
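The forgetting map A ↦ A⁻ is a simple structural recursion and can be sketched as follows (a toy illustration, not the book's code; the AST distinguishes the two sorts of quantifier as `"forall_ind"` and `"forall_rel"`, and erased atoms become `("pvar", X)`):

```python
def erase(A):
    """The map A -> A^- : drop first-order quantifiers and term arguments,
    keep -> and quantification over relation variables, which become
    propositional variables of system F."""
    kind = A[0]
    if kind == "atom":
        _, name, _args = A
        return ("pvar", name)                 # X(t1,...,tn) |-> X
    if kind == "arrow":                       # (B -> C)^- = B^- -> C^-
        return ("arrow", erase(A[1]), erase(A[2]))
    if kind == "forall_ind":                  # (forall x B)^- = B^-
        return erase(A[2])
    if kind == "forall_rel":                  # (forall X B)^- = forall X B^-
        return ("forall", A[1], erase(A[2]))
```

For example, (∀x (X(x) → X(x)))⁻ is the system F type X → X, independently of the term arguments, which is exactly why rules T4, T5, T8 leave no trace.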
Theorem 9.6 (Normalization theorem). Let L be a second order language and E a system of equations of L. Then every λ-term which is typable with E is strongly normalizable.
By proposition 9.5, a λ-term which is typable with E is also typable in system F, so the result follows from the normalization theorem for that system (theorem 8.9).
Q.E.D.
Derived rules for constructing typed terms

Let L be a second order language, and E a system of equations of L.
Proposition 9.7. If Γ ⊢_E t : A and Γ ⊆ Γ′, then Γ′ ⊢_E t : A.
Immediate proof, by induction on the length of the derivation of Γ ⊢_E t : A.
Q.E.D.
Proposition 9.8. Let Γ be a context, and x1, ..., xk variables which are not declared in Γ. If Γ ⊢_E ti : Ai (1 ≤ i ≤ k) and Γ, x1 : A1, ..., xk : Ak ⊢_E u : B, then Γ ⊢_E u[t1/x1, ..., tk/xk] : B.
In particular, if x1, ..., xk do not occur free in u, and Γ, x1 : A1, ..., xk : Ak ⊢_E u : B, then Γ ⊢_E u : B.
The proof is the same as that of proposition 8.2.
Q.E.D.
Our purpose now is to prove:
Theorem 9.9. Let t, t′ be two λ-terms such that t β t′; if Γ ⊢_E t : A, then Γ ⊢_E t′ : A.
Recall that t β t′ means that t′ is obtained from t by β-reduction.
Lemma 9.10. Let u be a term and x an individual variable of L. If Γ ⊢_E τ : A, then Γ[u/x] ⊢_E τ : A[u/x].
The proof is by induction on the length l of the derivation of Γ ⊢_E τ : A; in fact, we show that Γ[u/x] ⊢_E τ : A[u/x] also has a derivation of length l. Consider the last rule used. The result is immediate if it is T1, T2 or T3.
If it is T4, then we have Γ ⊢_E τ : ∀y A′ (as a previous typing), and A = A′[v/y]. By induction hypothesis, Γ[u/x] ⊢_E τ : ∀y A′[u/x] and therefore, by applying T4, Γ[u/x] ⊢_E τ : A′[u/x][v′/y]. Take v′ = v[u/x]; then A′[u/x][v′/y] = A′[v/y][u/x], since y does not occur in u. Hence Γ[u/x] ⊢_E τ : A[u/x].
If it is T5, then we have Γ ⊢_E τ : A′ (previous typing) and A = ∀y A′, where y is an individual variable which is not free in Γ. If we take a variable z with no occurrence in Γ, A′, u, then, by induction hypothesis, Γ[z/y] ⊢_E τ : A′[z/y], and the length of this derivation is l. Now Γ[z/y] is identical to Γ. Let A″ = A′[z/y]; then Γ ⊢_E τ : A″, and therefore Γ[u/x] ⊢_E τ : A″[u/x]. Since z does not occur in Γ[u/x], we may apply T5, and we obtain Γ[u/x] ⊢_E τ : ∀z A″[u/x]. Now ∀z A″ ≡ ∀y A′ ≡ A; thus Γ[u/x] ⊢_E τ : A[u/x].
If it is T6, then we have Γ ⊢_E τ : ∀X A′ (previous typing), X being an n-ary relation variable, and A = A′[F/X x1 ... xn]. By induction hypothesis, Γ[u/x] ⊢_E τ : ∀X A′[u/x]; therefore, by applying T6, we obtain Γ[u/x] ⊢_E τ : A′[u/x][F′/X x1 ... xn]. Take F′ = F[u/x]; then:
A′[u/x][F′/X x1 ... xn] = A′[F/X x1 ... xn][u/x] (since we may assume that x1, ..., xn do not occur in u) = A[u/x].
If it is T7, the proof is the same as for T5.
If it is T8, we have Γ ⊢_E τ : A′[v/y] (previous typing) and A = A′[w/y], v = w being a particular case of E.
By induction hypothesis, Γ[u/x] ⊢_E τ : A′[v/y][u/x]; now, since we may assume that y does not occur in u, we also have A′[v/y][u/x] = A′[u/x][v′/y], where v′ = v[u/x]. Thus Γ[u/x] ⊢_E τ : A′[u/x][v′/y]. Let w′ = w[u/x]: we see that v′ = w′ is a particular case of E. By rule T8, we obtain Γ[u/x] ⊢_E τ : A′[u/x][w′/y]. Now we have A′[u/x][w′/y] = A′[w/y][u/x] = A[u/x]. This yields the expected conclusion.
Q.E.D.
Lemma 9.11. Let X be an n-ary relation variable of the language L. If Γ ⊢_E τ : A, then Γ[F/X x1 ... xn] ⊢_E τ : A[F/X x1 ... xn].
The proof of the previous lemma applies in cases 1, 2, 3, 4, 5, 7 and 8. Suppose that the last rule applied is T6; then we have Γ ⊢_E τ : ∀Y A′ (as a previous typing) and A = A′[G/Y y1 ... yp]. By induction hypothesis, Γ[F/X x1 ... xn] ⊢_E τ : ∀Y A′[F/X x1 ... xn]; by applying T6, we obtain:
Γ[F/X x1 ... xn] ⊢_E τ : A′[F/X x1 ... xn][G′/Y y1 ... yp];
if we take G′ = G[F/X x1 ... xn], we see that:
A′[F/X x1 ... xn][G′/Y y1 ... yp] = A′[G/Y y1 ... yp][F/X x1 ... xn] (since Y does not occur in F) = A[F/X x1 ... xn];
this ends the proof.
Q.E.D.
Lemma 9.12. If u = v is a particular case of E and Γ[u/x] ⊢_E τ : A[u/x], then Γ[v/x] ⊢_E τ : A[v/x].
Let Γ = x1 : A1, ..., xk : Ak. By hypothesis, we have Γ[u/x] ⊢_E τ : A[u/x]; therefore, by rule T8, x1 : A1[u/x], ..., xk : Ak[u/x] ⊢_E τ : A[v/x]. Now Γ[v/x] ⊢_E xi : Ai[v/x] (rule T1); thus, by rule T8, Γ[v/x] ⊢_E xi : Ai[u/x]. It then follows from proposition 9.8 that Γ[v/x] ⊢_E τ : A[v/x].
Q.E.D.
Let Γ be a context and A a formula. We define the class C_{Γ,A} of Γ-instances of A: it is the least class C of formulas of L which contains A and is such that:
if B ∈ C, then B[t/x] ∈ C whenever x is an individual variable not free in Γ and t is a term;
if B ∈ C, then B[F/X x1 ... xn] ∈ C whenever X is an n-ary relation variable not free in Γ and F is a formula;
if B[t/x] ∈ C, then B[u/x] ∈ C whenever t = u is a particular case of E.
A formula is said to be open if it does not start with ∀ (so it is either atomic or of the form B → C). Every formula F can be written ∀ξ1 ... ∀ξk F⁰, where F⁰ is an open formula called the interior of F (ξ1, ..., ξk are individual or relation variables).
Lemma 9.13. If A′ is an open formula, and if Γ ⊢_E t : A′ can be deduced from Γ ⊢_E t : A using only rules T4 through T8, then A′ is a Γ-instance of A⁰.
The proof is by induction on the number of steps in the deduction by means of rules T4 through T8. Consider the first rule used.
If it is T5 or T7, then the first step is to pass from Γ ⊢_E t : A to Γ ⊢_E t : ∀ξ A; the result follows immediately, since A and ∀ξ A have the same interior.
If it is T4, then A can be written ∀x ∀ξ1 ... ∀ξk A⁰, and the first step of the derivation gives Γ ⊢_E t : ∀ξ1 ... ∀ξk A⁰[u/x]. By induction hypothesis, A′ is a Γ-instance of A⁰[u/x], and thus also of A⁰.
If it is T6, then A can be written ∀X ∀ξ1 ... ∀ξk A⁰, and the first step of the derivation gives Γ ⊢_E t : ∀ξ1 ... ∀ξk A⁰[F/X x1 ... xn]. Now A⁰ is an open formula:
If A⁰ is either an atomic formula not beginning with X, or a formula of the form B → C, then A⁰[F/X x1 ... xn] is of the same form, so it is open. By induction hypothesis, A′ is a Γ-instance of A⁰[F/X x1 ... xn], and thus also of A⁰.
Otherwise, A⁰ is of the form X t1 ... tn; then:
A⁰[F/X x1 ... xn] ≡ F[t1/x1, ..., tn/xn],
and it follows from the induction hypothesis that A′ is a Γ-instance of F⁰[t1/x1, ..., tn/xn], in other words a Γ-instance of A⁰[F⁰/X x1 ... xn], and thus also of A⁰.
If it is T8, then A can be written B[u/x], and the first step of the derivation gives Γ ⊢_E t : B[v/x], u = v being a particular case of E. We have A⁰ = B⁰[u/x], and the interior of B[v/x] is B⁰[v/x]. By induction hypothesis, A′ is a Γ-instance of B⁰[v/x], and thus also of A⁰.
Q.E.D.
Lemma 9.14. Suppose that Γ ⊢_E t : A, where A is an open formula.
i) If t is a variable x, then Γ contains a declaration x : B, and A is a Γ-instance of B⁰.
ii) If t = λx u, then A = B → C and Γ, x : B ⊢_E u : C.
iii) If t = (v)u, then Γ ⊢_E v : C → B and Γ ⊢_E u : C for some formulas B, C, and A is a Γ-instance of B⁰.
Consider, in the derivation of Γ ⊢_E t : A, the last step where rule T1, T2 or T3 occurs. Suppose that the typing obtained at this step is Γ ⊢_E t : B; we can then go on to Γ ⊢_E t : A using only rules T4, ..., T8. Therefore, by lemma 9.13, A is a Γ-instance of B⁰.
If t is a variable x, the rule applied to obtain Γ ⊢_E t : B (which must be T1, T2 or T3) can only be T1. This proves case (i) of the lemma.
If t = (v)u, the rule applied to obtain Γ ⊢_E t : B can only be T3. This proves case (iii).
If t = λx u, the rule applied to obtain Γ ⊢_E t : B can only be T2. Therefore B ≡ C → D and Γ, x : C ⊢_E u : D. Since B is open, A is a Γ-instance of B. Let C be the class of formulas P → Q such that Γ, x : P ⊢_E u : Q; clearly, this class contains B. We now prove that it contains the class C_{Γ,B} of Γ-instances of B (yielding case (ii) of the lemma, since A ∈ C_{Γ,B}); so let R ∈ C, R ≡ P → Q.
If y is an individual variable not occurring in Γ, and v is a term, then Γ, x : P ⊢_E u : Q, and therefore Γ, x : P[v/y] ⊢_E u : Q[v/y], by lemma 9.10. Thus R[v/y] ∈ C.
Similarly, we see, using lemma 9.11, that R[F/X x1 ... xn] ∈ C whenever X is a relation variable not occurring in Γ.
Now suppose that R ≡ R′[v/y] ≡ P′[v/y] → Q′[v/y], and v = w is a particular case of E. By hypothesis, we have Γ, x : P′[v/y] ⊢_E u : Q′[v/y]; therefore,
by lemma 9.12, we also have Γ, x : P′[w/y] ⊢_E u : Q′[w/y], which proves that R′[w/y] ∈ C.
Q.E.D.
Now we are able to prove theorem 9.9: we simply repeat the proof of proposition 4.3 (which is the same statement for system D), using proposition 9.8 instead of proposition 4.1, and lemma 9.14(ii) instead of lemma 4.2(ii).
Note the following derived rules:
Proposition 9.15. If Γ ⊢_E t : A and A′ is a Γ-instance of A, then Γ ⊢_E t : A′.
Let C be the class of all formulas B such that Γ ⊢_E t : B. We prove that C contains C_{Γ,A} (the class of Γ-instances of A). Clearly, A ∈ C. Let B ∈ C. If x is an individual variable not occurring in Γ, then Γ ⊢_E t : ∀x B (rule T5); thus Γ ⊢_E t : B[u/x] for every term u (rule T4); therefore B[u/x] ∈ C. Similarly, it can be seen that B[F/X x1 ... xn] ∈ C whenever X is a relation variable with no occurrence in Γ (apply rule T7, then rule T6). Finally, if B = C[u/x] and u = v is a particular case of E, then, by applying rule T8 to Γ ⊢_E t : C[u/x], we obtain Γ ⊢_E t : C[v/x], and therefore C[v/x] ∈ C.
Q.E.D.
Proposition 9.16. Let u, v be two terms such that CA + E ⊢ u = v. If Γ ⊢_E t : A[u/x], then Γ ⊢_E t : A[v/x].
The expression ⊢_E u = v can be obtained by applying rules (i), (ii), (iii) of proposition 9.4. We reason by induction on the number of steps in this derivation. Consider the last rule used:
if it is rule (i), then u = v is a particular case of E, and, by rule T8, we obtain immediately Γ ⊢_E t : A[v/x];
if it is rule (ii), then either u = v (in that case the result is trivial), or expressions of the form ⊢_E u = w and ⊢_E w = v were obtained at the previous step; therefore, by induction hypothesis, we have, successively, Γ ⊢_E t : A[w/x] and Γ ⊢_E t : A[v/x];
if it is rule (iii), then ⊢_E ui = vi (1 ≤ i ≤ n) were obtained at the previous step, and we have u = f(u1, ..., un) and v = f(v1, ..., vn). By assumption, Γ ⊢_E t : A[f(u1, ..., un)/x]. Now we may apply the induction hypothesis repeatedly (n times); thus we have, successively:
Γ ⊢_E t : A[f(v1, u2, ..., un)/x], Γ ⊢_E t : A[f(v1, v2, u3, ..., un)/x], ..., and finally Γ ⊢_E t : A[f(v1, ..., vn)/x].
Q.E.D.
3. Realizability

Let L be a second order language. With each n-ary relation variable X, we associate an (n+1)-ary relation variable X⁺ (the mapping being one-one); with each n-ary relation symbol R, we associate a new (n+1)-ary relation symbol R⁺ (not in L). Let L⁺ be the language obtained by adding to L these new relation symbols, as well as the constant symbols K, S and the binary function symbol Ap (in case they are not already in L).
With each formula A of L, we associate a formula A⁺ of L⁺, also denoted by x ⊩ A, where x is an individual variable not occurring in A. The expression x ⊩ A should be read: "x realizes A". It is defined, by induction on A, by the following conditions:
if A is atomic, say A ≡ X(t1, ..., tn), where the ti's are terms and X is an n-ary relation variable or symbol, then x ⊩ A is X⁺(t1, ..., tn, x);
if A ≡ B → C, then x ⊩ A is ∀y[y ⊩ B → (x)y ⊩ C] (it is assumed that the individual variable y is distinct from x and does not occur free in A);
if A ≡ ∀y B, then x ⊩ A is ∀y(x ⊩ B) (the individual variable y is assumed to be distinct from x);
if A ≡ ∀X B, then x ⊩ A is ∀X⁺(x ⊩ B) (X being an n-ary relation variable).
Lemma 9.17. Let A be a formula, x, x1, ..., xk distinct individual variables, t1, ..., tk terms, and A⁺ = x ⊩ A. Then x ⊩ A[t1/x1, ..., tk/xk] is the formula A⁺[t1/x1, ..., tk/xk].
This is immediate, by induction on the length of A.
Q.E.D.
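The four clauses defining x ⊩ A are again a structural recursion and can be sketched as follows (a toy illustration, not the book's code; the AST distinguishes `"forall_ind"` and `"forall_rel"`, an application term (x)y is written `("ap", x, y)`, and the fresh variables y required by the arrow clause are drawn from a counter and assumed not to clash with the user's variables):

```python
def realizes(x, A, fresh_ids=None):
    """Compute the formula x ||- A by induction on A."""
    if fresh_ids is None:
        fresh_ids = iter(range(10**6))
    kind = A[0]
    if kind == "atom":                 # X(t1,...,tn)  ->  X+(t1,...,tn,x)
        _, X, args = A
        return ("atom", X + "+", args + (x,))
    if kind == "arrow":                # B -> C  ->  forall y [y||-B -> (x)y||-C]
        _, B, C = A
        y = f"_y{next(fresh_ids)}"     # fresh individual variable (assumption)
        return ("forall_ind", y,
                ("arrow", realizes(y, B, fresh_ids),
                          realizes(("ap", x, y), C, fresh_ids)))
    if kind == "forall_ind":           # forall y B  ->  forall y (x ||- B)
        _, y, B = A
        return ("forall_ind", y, realizes(x, B, fresh_ids))
    if kind == "forall_rel":           # forall X B  ->  forall X+ (x ||- B)
        _, X, B = A
        return ("forall_rel", X + "+", realizes(x, B, fresh_ids))
```

For instance, with a 0-ary relation variable X, the formula x ⊩ ∀X(X → X) computed by this sketch is ∀X⁺ ∀y [X⁺(y) → X⁺((x)y)].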
Lemma 9.18. Let A, F be two formulas, x, x1, ..., xk distinct individual variables, X a k-ary relation variable, and F⁺ = x ⊩ F. Then x ⊩ A[F/X x1 ... xk] is the formula {x ⊩ A}[F⁺/X⁺ x1 ... xk x].
The proof is by induction on the length of A:
If A is atomic, then the result follows immediately from the previous lemma.
If A ≡ B → C, then x ⊩ A is ∀y{y ⊩ B → (x)y ⊩ C}, thus {x ⊩ A}[F⁺/X⁺ x1 ... xk x] is:
∀y({y ⊩ B}[F⁺/X⁺ x1 ... xk x] → {(x)y ⊩ C}[F⁺/X⁺ x1 ... xk x]).
By induction hypothesis, this is:
∀y{y ⊩ B[F/X x1 ... xk] → (x)y ⊩ C[F/X x1 ... xk]},
that is to say x ⊩ A[F/X x1 ... xk].
The other cases of the induction are obvious.
Q.E.D.
Notation. We shall use the correspondence between λ-terms and terms of combinatory logic, as it was settled in chapter 6. Therefore, we use notations
from that chapter: with each λ-term t, we associate a term of L, denoted by t_L. We shall also consider the system of equational axioms C0 defined in chapter 6:
(C0) (K)xy = x ; (S)xyz = ((x)z)(y)z.
Theorem 9.19. Let E be a system of equational axioms of L, and t a λ-term such that x1 : A1, ..., xk : Ak ⊢_E t : A. Then we have:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : (t_L ⊩ A),
where E′ is the equational system E + C0 and t_L is the term of L associated with t. In particular:
CA + C0 + E ⊢ ∀x1 ... ∀xk{x1 ⊩ A1, ..., xk ⊩ Ak → t_L ⊩ A}.
In view of the Curry-Howard correspondence, the second part of the theorem follows easily from the first one. Indeed, if there exists a typing of the form x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : (t_L ⊩ A), then t_L ⊩ A is an intuitionistic consequence of CA, E′, x1 ⊩ A1, ..., xk ⊩ Ak; this yields the expected result.
The proof of the first part is by induction on the length of the derivation of the typing x1 : A1, ..., xk : Ak ⊢_E t : A. Consider the last rule used:
If it is T1, then the given typing can be written x1 : A1, ..., xk : Ak ⊢_E xi : Ai; it is then clear that x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} xi : (xi ⊩ Ai).
If it is T2, then we have t = λy u, A ≡ B → C, and x1 : A1, ..., xk : Ak, y : B ⊢_E u : C was obtained as a previous typing. We may suppose that y does not occur in A, A1, ..., Ak and that y ≠ x1, ..., xk. By induction hypothesis:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak), y : (y ⊩ B) ⊢_{E′} u : (u_L ⊩ C).
By rule T2, we obtain:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} λy u : (y ⊩ B) → (u_L ⊩ C).
Since y does not occur free in the formulas x1 ⊩ A1, ..., xk ⊩ Ak, we have, by rule T5:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : ∀y{y ⊩ B → u_L ⊩ C}.
Now the equation u_L = (t_L)y is a consequence of C0, since t = λy u. Thus, by rule T8, we obtain:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : ∀y{y ⊩ B → (t_L)y ⊩ C},
that is to say x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : t_L ⊩ B → C.
If it is T3, then we have t = uv and two previous typings:
x1 : A1, ..., xk : Ak ⊢_E u : B → A and x1 : A1, ..., xk : Ak ⊢_E v : B.
Therefore, by induction hypothesis:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} u : (u_L ⊩ B → A) and
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} v : (v_L ⊩ B).
Now the formula u_L ⊩ B → A is ∀y[y ⊩ B → (u_L)y ⊩ A]. By applying rule T4, we obtain:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} u : v_L ⊩ B → (u_L)v_L ⊩ A.
Finally, by rule T3, we deduce:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} uv : (u_L)v_L ⊩ A.
If it is T4, then A ≡ B[u/x], where u is some term of L, and we have the previous typing x1 : A1, ..., xk : Ak ⊢_E t : ∀x B. The induction hypothesis implies that x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : (t_L ⊩ ∀x B), that is to say:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : ∀x(t_L ⊩ B).
By applying rule T4, we obtain x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : {t_L ⊩ B}[u/x]. Now, by lemma 9.17, the formula {t_L ⊩ B}[u/x] is precisely t_L ⊩ B[u/x].
If it is T5, then A ≡ ∀x B, and we have the previous typing x1 : A1, ..., xk : Ak ⊢_E t : B, where x does not occur free in A1, ..., Ak. According to lemma 9.10, it can be assumed that x ≠ x1, ..., xk (otherwise, change the variable x: this does not modify A1, ..., Ak); thus x does not occur free in t. By induction hypothesis, we have:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : (t_L ⊩ B).
Since x has no occurrence in xi ⊩ Ai, by applying rule T5, we obtain:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : ∀x(t_L ⊩ B).
Now x does not occur in t_L; therefore, the formula ∀x(t_L ⊩ B) is identical to t_L ⊩ ∀x B; this yields the result.
If it is T6, then A ≡ B[F/X x1 ... xn], and we have the previous typing x1 : A1, ..., xk : Ak ⊢_E t : ∀X B (X being an n-ary relation variable). By induction hypothesis:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : ∀X⁺(t_L ⊩ B);
therefore, by applying rule T6, we obtain:
x1 : (x1 ⊩ A1), ..., xk : (xk ⊩ Ak) ⊢_{E′} t : {t_L ⊩ B}[F⁺/X⁺ x1 ... xn x],
F⁺ being the formula x ⊩ F. Now, by lemma 9.18, the formula {t_L ⊩ B}[F⁺/X⁺ x1 ... xn x] is precisely t_L ⊩ B[F/X x1 ... xn].
If it is T7, then A ≡ ∀X B , and we have the previous typing : x 1 : A 1 , . . . , x k : A k `E t : B , (X having no free occurrence in A 1 , . . . , A k ). By induction hypothesis, we have : x 1 : (x 1 ∥− A 1 ), . . . , x k : (x k ∥− A k ) `E 0 t : (t L ∥− B ). Since X + does not occur in x i ∥− A i , by applying rule T7, we obtain : x 1 : (x 1 ∥− A 1 ), . . ., x k : (x k ∥− A k ) `E 0 t : ∀X + (t L ∥− B ). Now the formula ∀X + (t L ∥− B ) is identical to t L ∥− ∀X B ; this yields the result. If it is T8, then A ≡ B [v/x], and we have x 1 : A 1 , . . . , x k : A k `E t : B [u/x] as a previous typing, the equation u = v being a particular case of E . By induction hypothesis, we have : x 1 : (x 1 ∥− A 1 ), . . . , x k : (x k ∥− A k ) `E 0 t : (t L ∥− B [u/x]) ;
Lambdacalculus, types and models
now, by lemma 9.17, the formula t L ∥− B [u/x] is {t L ∥− B }[u/x]. Thus, by applying rule T8, we obtain : x 1 : (x 1 ∥− A 1 ), . . . , x k : (x k ∥− A k ) `E 0 t : {t L ∥− B }[v/x], which is precisely the expected result, since {t L ∥− B }[v/x] is identical to t L ∥− B [v/x]. Q.E.D.
4. Data types Let L be a second order language, and L + the extended language defined in the beginning of the previous section, page 179 (so L + contains the constant symbols K , S and the binary function symbol Ap). We define a standard model of L + as a full model such that its domain is Λ/'βη (the set of λterms modulo βηequivalence) and the interpretations of the symbols K , S and Ap are the standard ones. In other words, we will say that a full model of L + is standard if its restriction to the language of combinatory logic is the standard model of the extensional combinatory logic. Let M be a standard model of L + , and D[x] a formula of L , where the individual variable x is the only free variable. We will say that D[x] defines a data type in the model M if and only if the following conditions hold : i) each a ∈ M  = Λ/'βη , such that M = D[a], is a closed λterm ; ii) M = ∀x∀y{y ∥− D[x] ↔ x = y ∧ D[x]}. We now give some basic examples of data types.
Booleans. Consider two closed terms of L , which we will denote by 0,1 (they may be constant symbols, terms of combinatory logic . . . ). Then : Proposition 9.20. The formula Bool[x] ≡ ∀X [X 1, X 0 → X x] defines a data type in a standard model M if and only if the interpretation of 1 (resp. 0) in M is the Boolean 1 (resp. 0) of the λcalculus. Indeed y ∥− Bool[x] is the formula ∀X ∀u∀v[X (1, u), X (0, v) → X (x, (y)uv)]. It is equivalent to ∀u∀v[(x = 1 ∧ (y)uv = u) ∨ (x = 0 ∧ (y)uv = v)]. Now, let M be a standard model, and y any element of M  = Λ/'βη . We can take u, v as two distinct variables of the λcalculus, not occurring in y. Then (y)uv = u (resp. v) if and only if y = 1 (resp. 0) (Booleans of the λcalculus). Therefore : M = (y ∥− Bool[x]) ↔ (x = 0 ∧ y = 0) ∨ (x = 1 ∧ y = 1). Thus we see that Bool[x] defines a data type if and only if M satisfies the formula :
(x = 0 ∧ y = 0) ∨ (x = 1 ∧ y = 1) → x = y. This completes the proof of our statement. Q.E.D.
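The Booleans of the λ-calculus invoked in this proof are the closed terms 1 = λxλy x and 0 = λxλy y. As a minimal sketch (an illustration in Python, not part of the book's formal development), they can be encoded as nested functions:

```python
# Church Booleans: 1 = λxλy.x and 0 = λxλy.y (illustrative encoding)
TRUE = lambda u: lambda v: u   # the Boolean 1 of the λ-calculus
FALSE = lambda u: lambda v: v  # the Boolean 0

# (y)uv = u exactly when y is 1, and (y)uv = v exactly when y is 0,
# which is the equivalence used in the proof of Proposition 9.20
assert TRUE("u")("v") == "u"
assert FALSE("u")("v") == "v"
```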
Integers. Here we consider a closed term 0 and a term s(x) of L having no variables but x. The integers type is then defined by the formula : Int[x] ≡ ∀X [∀y(X y → X s(y)), X 0 → X x]. If M is a standard model and a ∈ M , then M = Int[a] if and only if M = a = s^n (0) for some n ∈ N. Proposition 9.21. The formula Int[x] defines a data type in a standard model M if and only if, for every integer n, the interpretation of the term s^n (0) in M is the Church numeral λ f λx( f )^n x. Indeed, y ∥− Int[x] is the formula : ∀X ∀ f ∀a{∀z∀u[X (z, u) → X (s(z), ( f )u)], X (0, a) → X (x, (y) f a)}. Let x 0 , y 0 ∈ M  = Λ/'βη ; take f , a as two variables of the λcalculus, not occurring in the terms x 0 , y 0 , and X as the following binary relation on M : {(s^n (0), ( f )^n a) ; n ∈ N}. With these interpretations of f , a, X , we clearly have : M = ∀z∀u[X (z, u) → X (s(z), ( f )u)], X (0, a). Therefore, if M satisfies y 0 ∥− Int[x 0 ], then : M = X (x 0 , (y 0 ) f a), that is x 0 = s^n (0) and (y 0 ) f a = ( f )^n a for some n ∈ N. Now f , a are variables which do not occur in y 0 . Hence y 0 = λ f λa( f )^n a. It follows that M = y 0 ∥− Int[x 0 ] if and only if x 0 = s^n (0) and y 0 = λ f λa( f )^n a for some n ∈ N. Hence, if Int[x] is a data type, then M = (y 0 ∥− Int[x 0 ]) → x 0 = y 0 , and therefore s^n (0) = λ f λa( f )^n a. Conversely, if s^n (0) = λ f λa( f )^n a for all n ∈ N, we have, clearly, M = Int[x 0 ] ∧ x 0 = y 0 ⇔ x 0 = y 0 = s^n (0) for some n, thus x 0 = s^n (0) and y 0 = λ f λa( f )^n a ; therefore, M = y 0 ∥− Int[x 0 ]. Q.E.D.
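The Church numerals λfλa(f)^n a can likewise be sketched in Python (church and to_int are hypothetical helper names used only for this illustration):

```python
# Church numeral n = λfλa.(f)^n a, built by iterating f n times
def church(n):
    return lambda f: lambda a: a if n == 0 else f(church(n - 1)(f)(a))

def to_int(c):
    # read a numeral back by taking f as +1 and a as 0
    return c(lambda k: k + 1)(0)

assert to_int(church(0)) == 0
assert to_int(church(5)) == 5
```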
Product of data types. Let cpl(x, y) be a term of L , with no variables but x, y, and A[x], B [y] two formulas which define data types in a standard model M . We define the product type (A × B )[x] as the formula ∀X {∀y∀z(A[y], B [z] → X cpl(y, z)) → X x}. If c ∈ M , then M = (A × B )[c] if and only if M = c = cpl(a, b), where a, b ∈ M  and M = A[a], B [b].
Proposition 9.22. (A × B )[x] defines a data type in a standard model M if and only if, for every a, b ∈ M  such that M = A[a], B [b], the interpretation of cpl(a, b) in M is the ordered pair λ f ( f )ab. u ∥− (A × B )[x] is the following formula : ∀X ∀ f {∀y∀z∀v∀w[v ∥− A[y], w ∥− B [z] → X (cpl(y, z), ( f )v w)] → X (x, u f )}. Now the model M satisfies the formulas : (v ∥− A[y]) ↔ A[y] ∧ (v = y) and (w ∥− B [z]) ↔ B [z] ∧ (w = z). Thus, in M , u ∥− (A × B )[x] is equivalent to : ∀X ∀ f {∀y∀z(A[y], B [z] → X (cpl(y, z), ( f )y z)) → X (x, u f )}, and therefore to : (i) ∀ f ∃y∃z{A[y] ∧ B [z] ∧ x = cpl(y, z) ∧ u f = ( f )y z}. Suppose that : M = A[a], B [b] → cpl(a, b) = λ f ( f )ab. Let u 0 , x 0 ∈ M  be such that M = (u 0 ∥− (A × B )[x 0 ]). Take any variable not occurring in u 0 as the interpretation of f . Then, by (i), there exist a, b ∈ M  such that : M = A[a], B [b], x 0 = cpl(a, b) and (u 0 ) f = ( f )ab. Now a, b are closed terms, thus u 0 = λ f ( f )ab. Hence u 0 = x 0 = cpl(a, b), and therefore M = (A × B )[x 0 ] ; it follows that (A × B )[x] defines a data type in M . Conversely, suppose that (A × B )[x] defines a data type in M and let a, b ∈ M  be such that M = A[a], B [b] ; take x 0 = cpl(a, b) and u 0 = λ f ( f )ab. Then, by (i), M satisfies u 0 ∥− (A × B )[x 0 ] ; therefore, u 0 = x 0 , that is cpl(a, b) = λ f ( f )ab. Q.E.D.
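The ordered pair λf(f)ab of Proposition 9.22 can be sketched as follows (pair, fst, snd are hypothetical names chosen for this illustration only):

```python
# Ordered pair λf.(f)ab: a pair is a function awaiting a selector f
def pair(a, b):
    return lambda f: f(a)(b)

fst = lambda p: p(lambda a: lambda b: a)  # select with λaλb.a
snd = lambda p: p(lambda a: lambda b: b)  # select with λaλb.b

p = pair(1, 2)
assert fst(p) == 1 and snd(p) == 2
```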
Direct sum of data types. Let i (x) and j (x) be two terms of L , where x is the only variable, and A[x] and B [y] two formulas which define data types in a standard model M . We define the direct sum type : (A + B )[x] ≡ ∀X {∀y(A[y] → X i (y)), ∀z(B [z] → X j (z)) → X x}. If c ∈ M , then M = (A + B )[c] if and only if : either M = c = i (a) for some a ∈ M  such that M = A[a] or M = c = j (b) for some b ∈ M  such that M = B [b]. We have the same proposition as in the previous case (with a similar proof) : Proposition 9.23. (A + B )[x] defines a data type in a standard model M if and only if, for each a (resp. b) ∈ M  such that M = A[a] (resp. B [b]), the interpretation of i (a) (resp. j (b)) in M is the term λ f λg ( f )a (resp. λ f λg (g )b).
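The injections λfλg(f)a and λfλg(g)b of Proposition 9.23 perform case analysis when the sum value is applied to two handlers; a Python sketch (hypothetical names, illustration only):

```python
# Injections into the direct sum: inl a = λfλg.(f)a, inr b = λfλg.(g)b
inl = lambda a: lambda f: lambda g: f(a)
inr = lambda b: lambda f: lambda g: g(b)

# applying a sum value to two handlers selects the matching branch
describe = lambda s: s(lambda a: "left:" + a)(lambda b: "right:" + b)
assert describe(inl("x")) == "left:x"
assert describe(inr("y")) == "right:y"
```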
Lists of elements of a data type. Let $ be a closed term of L (for the empty list), and cons(x, y) a term of L where x, y are the only variables. Let A[x] be a data type in a standard model M . We
define the type L A[x] (the type of lists of objects of type A) as the following formula : L A[x] ≡ ∀X {∀y∀z(A[y], X z → X cons(y, z)), X $ → X x}. If c ∈ M , then M = L A[c] if and only if M = c = cons(a 1 , cons(a 2 , . . . , cons(a n , $) . . .)) where M = A[a i ] (1 ≤ i ≤ n). Proposition 9.24. L A[x] defines a data type in a standard model M if and only if, for all a 1 , . . . , a n ∈ M  such that M = A[a i ] (1 ≤ i ≤ n), the interpretation of cons(a 1 , cons(a 2 , . . . , cons(a n , $) . . .)) (term of L + ) in M is the λterm : λ f λx(( f )a 1 )(( f )a 2 . . . (( f )a n )x. Indeed, t ∥− L A[x] is the formula : ∀X ∀ f ∀a{∀y∀z∀u∀v[u ∥− A[y], X (z, v) → X (cons(y, z), ( f )uv)], X ($, a) → X (x, (t ) f a)}. Now M satisfies u ∥− A[y] ↔ A[y] ∧ u = y ; thus, in M , t ∥− A[x] is equivalent to : ∀X ∀ f ∀a{∀y∀z∀v[A[y], X (z, v) → X (cons(y, z), ( f )y v], X ($, a) → X (x, (t ) f a)}. Now this formula holds in the standard model M if and only if : (ii) for all f , a ∈ M , there exist a 1 , . . . , a n ∈ M  such that M satisfies A[a i ], x = cons(a 1 , . . . , cons(a n , $) . . .), and (t ) f a = (( f )a 1 ) . . . (( f )a n )a. Suppose that M = cons(a 1 , . . . , cons(a n , $) . . .) = λ f λa(( f )a 1 ) . . . (( f )a n )a whenever M = A[a i ]. Let t 0 , x 0 ∈ M  be such that M = (t 0 ∥− L A[x 0 ]). Take two variables not occurring in t 0 as the interpretations of f and a. Then, by (ii), there exist a 1 , . . . , a n ∈ M  such that M = A[a i ], x 0 = cons(a 1 , . . . , cons(a n , $) . . .) and (t 0 ) f a = (( f )a 1 ) . . . (( f )a n )a. Now, since A is a data type, the a i ’s are closed terms ; thus t 0 = λ f λa(( f )a 1 ) . . . (( f )a n )a. Therefore, t 0 = x 0 and L A[x] defines a data type in M . Conversely, suppose that L A[x] defines a data type in M . Let a 1 , . . . , a n ∈ M  be such that M = A[a i ] ; take x 0 = cons(a 1 , . . . , cons(a n , $) . . .), and t 0 = λ f λa(( f )a 1 ) . . . (( f )a n )a. 
Then, by (ii), M satisfies t 0 ∥− L A[x 0 ], and hence t 0 = x 0 , that is : cons(a 1 , . . . , cons(a n , $) . . .) = λ f λa(( f )a 1 ) . . . (( f )a n )a.
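The list encoding of Proposition 9.24 makes a list its own fold-right; a Python sketch (nil, cons, to_pylist are hypothetical names for this illustration):

```python
# Church list: [a1, ..., an] = λfλx. f a1 (f a2 (... (f an x)))
nil = lambda f: lambda x: x
def cons(a, l):
    return lambda f: lambda x: f(a)(l(f)(x))

l = cons(1, cons(2, cons(3, nil)))
# folding with "prepend to a Python list" recovers the ordinary list
assert l(lambda a: lambda acc: [a] + acc)([]) == [1, 2, 3]
```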
5. Programming in F A 2
We consider a standard model M of a second order language L , and a system E of equations of L which is satisfied in M . Let f be an n-ary function symbol of L , and D 1 [x 1 ], . . . , D n [x n ], E [y] formulas which define data types in M . Let D 1 , . . . , D n , E ⊂ M  be the sets of λterms defined in M by these formulas.
Then, for every λterm t such that :
`E t : ∀x 1 . . . ∀x n {D 1 [x 1 ], . . . , D n [x n ] → E [ f (x 1 , . . . , x n )]} we have M = (t )u 1 . . . u n = f (u 1 , . . . , u n ) for all u 1 ∈ D 1 , . . . , u n ∈ D n . In other words, the term t is a program for the function f on the domain D 1 × . . . × D n . Indeed, it then follows from theorem 9.19 that : C A +C 0 + E ` t L ∥− ∀x 1 . . . ∀x n {D 1 [x 1 ], . . . , D n [x n ] → E [ f (x 1 , . . . , x n )]} that is to say : C A +C 0 + E ` ∀x 1 . . . ∀x n ∀y 1 . . . ∀y n {y 1 ∥− D 1 [x 1 ], . . . , y n ∥− D n [x n ] → (t L )y 1 . . . y n ∥− E [ f (x 1 , . . . , x n )]}. Therefore, this formula holds in M . According to the definition of data types, we have M = y i ∥− D i [x i ] ↔ y i = x i ∧ D i [x i ]. Hence : M = ∀x 1 . . . ∀x n {D 1 [x 1 ], . . . , D n [x n ] → (E [ f (x 1 , . . . , x n )] ∧ (t L )x 1 . . . x n = f (x 1 , . . . , x n ))}. Now the interpretation of the term t L in M is the λterm t (lemma 6.22). Thus we obtain a program for f , by proving : D 1 [x 1 ], . . . , D n [x n ] `E E [ f (x 1 , . . . , x n )] in second order intuitionistic logic, by means of rules D1 through D8.
Examples with integers
Let ϕ1 , . . . , ϕn be functions such that ϕi : N^ki → N ; we wish to program ϕ1 , that is to say to obtain a λterm t such that (t )p 1 . . . p k1 'βη ϕ1 (p 1 , . . . , p k1 ) for all Church numerals p 1 , . . . , p k1 .
We consider a language L consisting only of function symbols f 1 , . . . , f n (the arity of f i being k i ), including 0 and s, which will be interpreted in N as the integer 0 and the successor function. Let E be the set of those equational formulas of L which are satisfied in the following model N : the domain is N, and each symbol f i is interpreted by the function ϕi . We define a standard model M of E , in which the interpretation of each symbol f i is a function ψi which extends ϕi (thus ψi is a mapping of M ^ki into M , where M  = Λ/'βη ). For that purpose, we consider the language L 0 obtained by adding to L an infinite sequence c 0 , . . . , c n , . . . of constant symbols. Let T (resp. T 0 ) be the set of closed terms of L (resp. L 0 ). We define an equivalence relation on T 0 by : t ∼ u ⇔ E ` t = u. Let M 0 be the model of L 0 such that its domain is M 0  = T 0 /∼ and the function symbols are given their canonical interpretation. Then the restriction of M 0 to the subset T /∼ is a submodel N 0 which is obviously isomorphic to N .
Moreover, c n ∈ M 0  \ N 0  : otherwise, we would have E ` c n = τ, for some closed term τ of L , and therefore E ` ∀x(x = τ), since c n occurs neither in E nor in τ. Then N would contain only one element, but this is false (actually, N  is an infinite countable set). Also, M 0 = c m ≠ c n whenever m ≠ n : otherwise, we would have : E ` c m = c n , thus E ` ∀x∀y(x = y), which would lead us to the same contradiction. It follows that M 0  \ N 0  is an infinite countable set. Finally, M 0 satisfies E : indeed, let t = u be an equation of E , where t and u are terms of L , with variables x 1 , . . . , x n , and let τ1 , . . . , τn ∈ T 0 . We need to prove that M 0 = t [τ1 /x 1 , . . . , τn /x n ] = u[τ1 /x 1 , . . . , τn /x n ], that is to say : E ` t [τ1 /x 1 , . . . , τn /x n ] = u[τ1 /x 1 , . . . , τn /x n ], which is clear. Then the isomorphism from N 0 onto N can be extended to a one-to-one function from M 0  onto Λ/'βη : indeed, since N  is the set of Church numerals, its complement in Λ/'βη is countable. This allows us to transfer onto Λ/'βη the structure of M 0 , defining therefore over Λ/'βη a model M of E which is an extension of N ; this is what was expected.
Remark. The above method will be systematically used in the further examples of “programming” with various data types. It consists in extending, to the whole set Λ/'βη , functions which are defined only on data types, and preserving the equations which they satisfy. The above proof still applies, provided that the data types under consideration do not consist of one single element. Thus we will take, as equational system E , the set of all equational formulas satisfied by the functions to be programmed, on their domains, and we will be allowed to assume that E is satisfied on the whole standard model M .
The formula Int[x] ≡ ∀X {∀y(X y → X s y), X 0 → X x} is written in the language L , using the functions symbols 0 and s. We proved above that this formula defines a data type. In order to program the function ϕ1 , it is thus sufficient to obtain an intuitionistic proof of : ∀x 1 . . . ∀x k1 {Int[x 1 ], . . . , Int[x k1 ] → Int[ f 1 (x 1 , . . . , x k1 )]} by means of rules D1 through D8. In rule D8, we can use any equation satisfied in N by ϕ1 , . . . , ϕn . Consider, for instance, the language L , consisting of the symbols 0, s, +, × and p (for the predecessor function). In order to program the successor function, we look for an intuitionistic proof of ∀x{Int[x] → Int[s(x)]}, thus for a term of this type. Now we have : ν : Int[x], f : ∀y(X y → X s y), a : X 0 ` (ν) f a : X x (by rules T1, T6, T4). Hence :
ν : Int[x], f : ∀y(X y → X s y), a : X 0 ` ( f )(ν) f a : X sx ; therefore, by rule T2 : ν : Int[x] ` λ f λa( f )(ν) f a : ∀y(X y → X s y), X 0 → X sx and finally : ` suc : Int[x] → Int[sx], where suc is defined as λνλ f λa( f )(ν) f a. We shall need below the derived rules stated in the next two propositions : Proposition 9.25. x : A, y : B ` λ f ( f )x y : A ∧ B ; x : A ∧ B ` (x)1 : A ; x : A ∧ B ` (x)0 : B ; x : A ` λ f λg ( f )x : A ∨ B ; y : B ` λ f λg (g )y : A ∨ B ; a : A[t /x] ` λ f ( f )a : ∃x A ; a : A[t /x] → B ` λz(a)z : ∀x A → B . Notice that, using proposition 9.8, we obtain the following consequences : if Γ ` t : A and Γ ` u : B , then Γ ` λ f ( f )t u : A ∧ B ; if Γ ` t : A ∧ B , then Γ ` (t )1 : A and Γ ` (t )0 : B ; if Γ ` t : A, then Γ ` λ f λg ( f )t : A ∨ B ; if Γ ` u : B , then Γ ` λ f λg (g )u : A ∨ B ; etc. Recall that A ∧ B , A ∨ B , ∃x A are, respectively, the following formulas : ∀X {(A, B → X ) → X }, ∀X {(A → X ), (B → X ) → X }, ∀X {∀x(A → X ) → X }. Proof of the proposition : x : A, y : B , f : A, B → X ` ( f )x y : X by rules T1 and T3 ; therefore, x : A, y : B ` λ f ( f )x y : (A, B → X ) → X ; then, by T7, we obtain the first property. x : A ∧ B ` x : (A, B → A) → A by T1 and T6 ; now ` λxλy x : A, B → A ; thus x : A ∧ B ` (x)1 : A. x : A, f : A → X , g : B → X ` ( f )x : X ; therefore x : A ` λ f λg ( f )x : (A → X ), (B → X ) → X ; hence x : A ` λ f λg ( f )x : A ∨ B . a : A[t /x], f : ∀x(A → X ) ` f : A[t /x] → X by T1 and T6 ; thus a : A[t /x], f : ∀x(A → X ) ` ( f )a : X ; then a : A[t /x] ` λ f ( f )a : ∀x(A → X ) → X ; finally a : A[t /x] ` λ f ( f )a : ∃x A. a : A[t /x] → B , z : ∀x A ` z : A[t /x] ; thus a : A[t /x] → B , z : ∀x A ` (a)z : B ; finally a : A[t /x] → B ` λz(a)z : ∀x A[x] → B . Q.E.D.
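The successor term suc = λνλfλa(f)(ν)fa extracted above can be checked on Church numerals; a Python sketch (church and to_int are hypothetical helpers, not the book's notation):

```python
def church(n):  # Church numeral λfλa.(f)^n a
    return lambda f: lambda a: a if n == 0 else f(church(n - 1)(f)(a))

to_int = lambda c: c(lambda k: k + 1)(0)

# suc = λνλfλa.(f)((ν)f a): one more application of f
suc = lambda nu: lambda f: lambda a: f(nu(f)(a))

assert to_int(suc(church(4))) == 5
```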
Proposition 9.26 (Proofs by induction on N). i) ν : Int[x], ϕ : ∀y(A[y] → A[s y]), α : A[0] ` (ν)ϕα : A[x] ; ii) ν : Int[x], ϕ : ∀y(A[y] → A[s y]), α : A[0], ψ : ∀z(A[z], B [z] → B [sz]), β : B [0] ` t : B [x], where t can be taken either as : ((νλcλ f (( f )(ϕ)(c)1)((ψ)(c)1)(c)0)λg (g )αβ)0 or as : (νλ f λaλb(( f )(ϕ)a)(ψ)ab)0αβ. (i) is immediate since, by rules T1 and T6, we have : ν : Int[x] ` ν : ∀y(A[y] → A[s y]), A[0] → A[x]. (ii) First proof : we prove A[x] ∧ B [x] by induction (we mean : using (i)). By proposition 9.25, we have : ` λg (g )αβ : A[0] ∧ B [0] ; on the other hand : c : A[y] ∧ B [y] ` (c)1 : A[y], (c)0 : B [y] ; thus c : A[y] ∧ B [y] ` (ϕ)(c)1 : A[s y], ((ψ)(c)1)(c)0 : B [s y] ; therefore : c : A[y] ∧ B [y] ` λ f (( f )(ϕ)(c)1)((ψ)(c)1)(c)0 : A[s y] ∧ B [s y] ; hence : ` τ0 : ∀y(A[y] ∧ B [y] → A[s y] ∧ B [s y]), where τ0 = λcλ f (( f )(ϕ)(c)1)((ψ)(c)1)(c)0. It follows that : ν : Int[x] ` (ντ0 )λg (g )αβ : A[x] ∧ B [x], and, finally : ν : Int[x] ` ((ντ0 )λg (g )αβ)0 : B [x]. Second proof : we prove F [x] ≡ ∀y(A[y], B [y] → B [x + y]) by induction on x, using the following equations : x + 0 = x ; 0 + y = y ; x + s y = sx + y. These equations are obviously satisfied in N, so they also hold in the standard model, according to our remark page 187. Clearly, ` 0 : F [0] (use rule T8 and the equation 0 + y = y). On the other hand, we have : f : F [z], a : A[y], b : B [y] ` (ϕ)a : A[s y], (ψ)ab : B [s y], and therefore : f : F [z], a : A[y], b : B [y] ` (( f )(ϕ)a)(ψ)ab : B [z + s y]. Then, using the equation z + s y = sz + y, we obtain : f : F [z] ` λaλb(( f )(ϕ)a)(ψ)ab : A[y], B [y] → B [sz + y]. Hence, ` τ1 : F [z] → F [sz], where τ1 = λ f λaλb(( f )(ϕ)a)(ψ)ab. According to (i) it follows that ν : Int[x] ` (ν)τ1 0 : F [x]. Now, by rule T4, we obtain ν : Int[x] ` (ν)τ1 0 : A[0], B [0] → B [x + 0]. Finally, using the equation x + 0 = x, we have : ν : Int[x] ` (ν)τ1 0αβ : B [x]. Q.E.D.
We obtain an alternative form of the inductive reasoning : Corollary 9.27. We have ν : Int[x], ψ : ∀y(Int[y], B [y] → B [s y]), β : B [0] ` u : B [x], where u is the term t [suc/ϕ, 0/α], and t is defined as in proposition 9.26.
This is obvious from proposition 9.26, since ` suc : ∀x(Int[x] → Int[sx]) and ` 0 : Int[0]. Q.E.D.
To program the predecessor function on N, we use the equations : p0 = 0 ; psx = x (and, if needed, the previous equations involving +). By rules T1 and T8, we have : ν : Int[x], f : ∀y(X y → X s y), a : X 0 ` a : X p0, 1 : ∀y(X y, X p y → X ps y). Then we apply proposition 9.26(ii), taking A[x] ≡ X x, B [x] ≡ X px, ϕ = f , ψ = 1, α = β = a. Thus we obtain a term u such that : ν : Int[x], f : ∀y(X y → X s y), a : X 0 ` u : X px ; therefore : ν : Int[x] ` λ f λa u : Int[px]. This provides the following term for the predecessor function : λνλ f λa(νλg λbλc((g )( f )b)b)0aa. The next proposition expresses the principle : every integer is either the successor of an integer or 0. Proposition 9.28. ν : Int[x] ` t : ∀X {∀y(Int[y] → X s y), X 0 → X x}, where t = (νλhλ f λa( f )((h)suc)0)0. Let H [x] be the formula ∀X {∀y(Int[y] → X s y), X 0 → X x}. It is proved by induction on x. Clearly, ` 0 : H [0]. Moreover : h : H [z] ` h : ∀y{Int[y] → Int[s y]}, Int[0] → Int[z] (replace X y with Int[y] in H [z]). Since ` suc : ∀y{Int[y] → Int[s y]} and ` 0 : Int[0], we may deduce that : h : H [z] ` ((h)suc)0 : Int[z]. Thus h : H [z], f : ∀y{Int[y] → X s y}, a : X 0 ` ( f )((h)suc)0 : X sz. Hence, ` λhλ f λa( f )((h)suc)0 : ∀z(H [z] → H [sz]). Finally, we get ν : Int[x] ` t : H [x]. Q.E.D.
We therefore obtain another λterm for the predecessor function on N, using the same equations as above. With this aim, we replace X x by Int[px] in proposition 9.28, which gives : ν : Int[x] ` t : ∀y(Int[y] → Int[ps y]), Int[p0] → Int[px]. Now we have ps y = y and p0 = 0, thus ` I : ∀y(Int[y] → Int[ps y]) and ` 0 : Int[p0]. It follows that we may take λν(νλhλ f λa( f )((h)suc)0)0I 0 (where I = λx x) as a term for the predecessor function.
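The first predecessor term obtained above, λνλfλa(νλgλbλc((g)(f)b)b)0aa, can be transcribed directly; a Python sketch with hypothetical helpers, where the 0 in the term is the Church numeral zero:

```python
def church(n):  # Church numeral λfλa.(f)^n a
    return lambda f: lambda a: a if n == 0 else f(church(n - 1)(f)(a))

to_int = lambda c: c(lambda k: k + 1)(0)
zero = church(0)

# pred = λνλfλa.(ν λgλbλc.((g)((f)b))b) 0 a a : each step shifts the
# accumulated value by one application of f, discarding the last step
pred = lambda nu: lambda f: lambda a: nu(
    lambda g: lambda b: lambda c: g(f(b))(b)
)(zero)(a)(a)

assert to_int(pred(church(0))) == 0
assert to_int(pred(church(7))) == 6
```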
Examples with lists We add to L the constant symbol $ and the binary function symbol cons. Let A[x] be a data type ; then the type of the lists of objects of A is written : L A[x] ≡ ∀X {∀y∀z(A[y], X z → X cons(y, z)), X $ → X x}.
Thus, $ represents the empty list and cons(y, z) represents the list obtained by putting the data y in front of the list z. For every formula F , we obviously have the following typing (inductive reasoning on lists) : σ : L A[x], ϕ : ∀y∀z(A[y], F [z] → F [cons(y, z)]), α : F [$] ` (σ)ϕα : F [x]. Length of a list. We use the equations : l ($) = 0; l (cons(y, z)) = s(l (z)). In the context σ : L A[x], f : ∀y(X y → X s y), a : X 0, we prove X l (x) by induction on x. By the previous equations, we have : σ: L A[x], f : ∀y(X y → X s y), a: X 0 ` a: X l ($), f : X l (z) → X l (cons(y, z)). Hence : σ: L A[x], f : ∀y(X y → X s y), a: X 0 ` λx f : A[y], X l (z) → X l (cons(y, z)). It follows that σ: L A[x], f : ∀y(X y → X s y), a: X 0 ` ((σ)λx f )a: X l (x) and therefore : ` λσλ f λa((σ)λx f )a : ∀x(L A[x] → Int[l (x)]), which provides a λterm for the length of lists. Reversal (or mirror) of a list. We add to L function symbols mir (unary) and c (binary) ; mir(x) represents the reversal of the list x and c(y, z) the list obtained by putting the data z at the end of the list y. We will use the equations : c($, a) = cons(a, $) ; c(cons(b, x), a) = cons(b, c(x, a)) ; mir($) = $ ; mir(cons(a, x)) = c(mir(x), a). In the context σ : L A[x], we prove L A[mir(x)] by induction on x. First, we have ` 0 : L A[mir($)]. Now we need a term of type ∀y∀z(A[y], L A[mir(z)] → L A[mir(cons(y, z))]), that is to say ∀y∀z(A[y], L A[mir(z)] → L A[c(mir(z), y)]). It suffices to obtain a term of type : ∀y∀z(A[y], L A[z] → L A[c(z, y)]). Now we have : α : A[y 0 ], τ : L A[z 0 ], f : ∀y∀z(A[y], X z → X cons(y, z)), a : X $ ` ( f )αa : X cons(y 0 , $) and therefore ` ( f )αa : X c($, y 0 ). On the other hand, the type ∀y∀z(A[y], X c(z, y 0 ) → X c(cons(y, z), y 0 )) can also be written: ∀y∀z(A[y], X c(z, y 0 ) → X cons(y, c(z, y 0 ))). 
To obtain a term of this type, it suffices to obtain one of type : ∀y∀z(A[y], X z → X cons(y, z)) ; therefore, we have : α : A[y 0 ], τ : L A[z 0 ], f : ∀y∀z(A[y], X z → X cons(y, z)), a : X $ ` f : ∀y∀z(A[y], X c(z, y 0 ) → X c(cons(y, z), y 0 )). Finally : α : A[y 0 ], τ : L A[z 0 ], f : ∀y∀z(A[y], X z → X cons(y, z)), a : X $ ` (τ f )( f )αa : X c(z 0 , y 0 ), and therefore :
α : A[y 0 ], τ : L A[z 0 ] ` λ f λa(τ f )( f )αa : L A[c(z 0 , y 0 )], that is : ` λαλτλ f λa(τ f )( f )αa : ∀y∀z(A[y], L A[z] → L A[c(z, y)]). So we now have σ : L A[x] ` ((σ)λαλτλ f λa(τ f )( f )αa)0 : L A[mir(x)], which provides the term λσ((σ)λαλτλ f λa(τ f )( f )αa)0 as a reversal operator for lists.
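The two programs extracted in this subsection, λσλfλa((σ)λx f)a for the length and λσ((σ)λαλτλfλa(τf)(f)αa)0 for the reversal, can be sketched on Church-encoded lists in Python (hypothetical helper names; note that the book's 0 and the empty list are the same term λfλa a):

```python
nil = lambda f: lambda x: x          # empty Church list (= Church 0)
def cons(a, l):                      # Church list constructor
    return lambda f: lambda x: f(a)(l(f)(x))

def to_pylist(l):
    return l(lambda a: lambda acc: [a] + acc)([])

l = cons(1, cons(2, cons(3, nil)))

# length = λσλfλa.((σ)λx.f)a : fold, ignoring each element
length = lambda sigma: lambda f: lambda a: sigma(lambda x: f)(a)
assert length(l)(lambda k: k + 1)(0) == 3

# mirror = λσ.((σ)λαλτλfλa.(τf)((f)α a)) 0 : each step puts the
# element α at the end of the already-reversed tail τ
mirror = lambda sigma: sigma(
    lambda alpha: lambda tau: lambda f: lambda a: tau(f)(f(alpha)(a))
)(nil)
assert to_pylist(mirror(l)) == [3, 2, 1]
```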
References for chapter 9 [Kri87], [Kri90], [Lei83], [Par88]. (The references are in the bibliography at the end of the book).
Chapter 10
Representable functions in system F
We wish to give a characterization of the class of those recursive functions from N to N which are representable by a λterm of type Int → Int in system F (in other words, the class of functions which can be “programmed” in system F ). Our first remark is that this class does not contain all recursive functions ; this can be seen by the following simple diagonal argument : Let t 0 , t 1 , . . . , t n , . . . be a recursive enumeration of the λterms of type Int → Int in system F . We define a recursive function ϕ : N → N by taking, for every n ∈ N, ϕ(n) = 1 (resp. ϕ(n) = 0) if the normal form of (t n )n is 0 (resp. is ≠ 0). If the function ϕ were represented by t n for some integer n, then (t n )n would be βequivalent to the Church numeral ϕ(n). This is false and, therefore, the recursive function ϕ is not in the class under consideration. Consider the language L of combinatory logic, with the constant symbols K , S and the binary function symbol Ap. Recall that, with each λterm t , we can associate a term t L of L , such that the interpretation of t L in the standard model of L is t (lemma 6.22). The λterm λnλ f λx( f )(n) f x is denoted by suc ; by abuse of notation, the terms suc L and 0L (of L ) will still be denoted, respectively, by suc and 0. We define two formulas of L : Int ≡ ∀X {(X → X ) → (X → X )} (where X is a propositional variable), and Int[x] ≡ ∀X {∀y(X y → X (suc)y), X 0 → X x}. In chapter 9, we have seen that the formula Int[x] defines a data type in the standard model of L , and therefore also in every standard model of any language L 0 which extends L . Clearly, the interpretation of Int[x] in any standard model is the set of Church numerals.
Let T be a theory (a system of axioms) in a language L (T ) ⊃ L , and ϕ : N → N a recursive function ; ϕ is said to be provably total in the theory T if there exists a term t (x) of L (T ), of which x is the only variable, such that : • T ` ∀x{Int[x] → Int[t (x)]} (in classical second order logic) ;
• There exists a standard model M of T , in which the term t (x) represents the function ϕ (in other words, for every Church numeral n, the interpretation of t (n) in M is the Church numeral ϕ(n)). Proposition 10.1. We have the following typings : i) ν : (x ∥− Int) ` ν : Int[((x)suc)0] ; ii) ν : Int[x] `C 0 ((ν)suc)0 : (x ∥− Int). Recall that the system of axioms C 0 consists of both equations (K )x y = x and (S)x y z = ((x)z)(y)z. i) The formula x ∥− Int can be written ∀X ∀ f ∀a{∀y(X y → X ( f )y), X a → X (x) f a}. Therefore, by the typing rules T1 and T4 (replace f by suc and a by 0), we immediately obtain : ν : (x ∥− Int) ` ν : ∀X {∀y(X y → X (suc)y), X 0 → X ((x)suc)0}, that is : ν : (x ∥− Int) ` ν : Int[((x)suc)0]. ii) We prove x ∥− Int by induction on x ; 0 ∥− Int is the formula : ∀X ∀ f ∀a{∀y(X y → X ( f )y), X a → X (0) f a}. Now C 0 ` (0) f a = a, and we have, trivially : ` 0 : ∀X ∀ f ∀a{∀y(X y → X ( f )y), X a → X a}. Hence `C 0 0 : (0 ∥− Int) (rule T8). We now look for a term of type x ∥− Int → (suc)x ∥− Int. We have : ν : (x ∥− Int), ϕ : ∀y(X y → X ( f )y), α : X a ` (ν)ϕα : X (x) f a, therefore : ν : (x ∥− Int), ϕ : ∀y(X y → X ( f )y), α : X a ` (ϕ)(ν)ϕα : X ( f )(x) f a. Now : C 0 ` (suc)x f a = ( f )(x) f a. By rule T8, we obtain : ν : (x ∥− Int), ϕ : ∀y(X y → X ( f )y), α : X a `C 0 (ϕ)(ν)ϕα : X (suc)x f a and therefore, by T2 : ν : (x ∥− Int) `C 0 λϕλα(ϕ)(ν)ϕα : ((suc)x ∥− Int). Hence : `C 0 suc : ∀x{x ∥− Int → (suc)x ∥− Int}. We have proved 0 ∥− Int and ∀x{x ∥− Int → (suc)x ∥− Int} ; it follows that : ν : Int[x] `C 0 ((ν)suc)0 : (x ∥− Int). Q.E.D.
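Proposition 10.1(ii) rests on the computation behind ((ν)suc)0 : applying a Church numeral ν to suc and 0 rebuilds the same numeral. A Python sketch (church, to_int are hypothetical helpers, not the book's formal system):

```python
def church(n):  # Church numeral λfλa.(f)^n a
    return lambda f: lambda a: a if n == 0 else f(church(n - 1)(f)(a))

suc = lambda nu: lambda f: lambda a: f(nu(f)(a))  # successor term
zero = church(0)
to_int = lambda c: c(lambda k: k + 1)(0)

# ((ν)suc)0 iterates suc n times starting from 0, giving back ν
nu = church(6)
assert to_int(nu(suc)(zero)) == 6
```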
Proposition 10.2. Let t be a λterm such that ` t : Int → Int is a typing in system F . Then `C 0 λn(t )(n)suc 0 : ∀x{Int[x] → Int[(t L )x suc 0]} is a typing in system F A 2 , with the equational axioms C 0 . By theorem 9.19, we have `C 0 t : t L ∥− Int → Int, that is : (*) `C 0 t : ∀x{x ∥− Int → (t L )x ∥− Int}. By proposition 10.1(ii), n : Int[x] `C 0 (n)suc 0 : x ∥− Int, and therefore, by (*) and rule T3, we have n : Int[x] `C 0 (t )(n)suc 0 : (t L )x ∥− Int. Then it follows from proposition 10.1(i) that :
n : Int[x] `C 0 (t )(n)suc 0 : Int[(t L )x suc 0], hence : `C 0 λn(t )(n)suc 0 : Int[x] → Int[(t L )x suc 0]. Q.E.D.
Theorem 10.3. Let t be a λterm such that ` t : Int → Int is a typing in system F . Then t represents a function from N to N which is provably total in the theory C A +C 0 . Using proposition 10.2 and the CurryHoward correspondence (as stated in chapter 9, page 173), we get C A + C 0 ` ∀x{Int[x] → Int[(t L )x suc 0]}. Thus the term (t L )x suc 0 represents a function ψ : N → N, which is provably total in the theory C A +C 0 . The term t represents a function ϕ : N → N : indeed, if n is a Church numeral, then, in system F , we have ` n : Int, and therefore ` (t )n : Int. It follows (by the adequacy lemma 8.13 and proposition 8.14) that (t )n is βequivalent to a Church numeral. Then it is enough to prove that ϕ = ψ. The interpretation of t L in the standard model is t L Λ 'β t (lemma 6.22). Consequently, for every Church numeral n, the interpretation of (t L )n suc 0 in the standard model is (t )n suc 0. Now (t )n suc 0 'β (t )n, since (t )n is a Church numeral. Hence ψ(n) = ϕ(n). Q.E.D.
The next theorem is a strengthened converse of theorem 10.3. Theorem 10.4. Let E be a system of equations in a language L (E ) ⊃ L , and ϕ : N → N a function which is provably total in C A + E . Then there exists a λterm t , of type Int → Int in system F , which represents the function ϕ. By hypothesis, there exist a term u(x) of L (E ), the only variable of which is x, and a standard model M of E , such that : i) C A + E ` ∀x{Int[x] → Int[u(x)]} and ii) M = u(n) = ϕ(n) for every Church numeral n. According to (i), the expression `E Int[x] → Int[u(x)] can be obtained by means of the deduction rules D0 through D8 of chapter 9, page 172 (completeness theorem for the classical second order predicate calculus). In view of theorem 10.5 below, there also exists an intuitionistic proof for this expression, that is a proof only involving rules D1 through D8. Now, by the CurryHoward correspondence (chapter 9, page 173), such a proof provides a λterm t such that `E t : Int[x] → Int[u(x)] (a typed term in system F A 2 with the equational axioms E ). The term t represents the function ϕ ; indeed, by theorem 9.19, we have : C A + E +C 0 ` (t L ∥− Int[x] → Int[u(x)]), that is : C A + E +C 0 ` ∀x∀y{y ∥− Int[x] → (t L )y ∥− Int[u(x)]}.
Thus the standard model M satisfies the formula : ∀x∀y{y ∥− Int[x] → (t L )y ∥− Int[u(x)]}. Now the formula Int[x] defines a data type, in the standard model M . Hence : M = ∀x∀y{y ∥− Int[x] ↔ Int[x] ∧ x = y}, and therefore : M = ∀x{Int[x] → (t L )x = u(x)}. In other words, the term (t L )x represents the same function as u(x), that is ϕ. Since the interpretation of t L in the standard model M is t (lemma 6.22), we see that t represents ϕ. Finally, the term t is of type Int → Int in system F . Indeed, we have the typing `E t : Int[x] → Int[u(x)] in system F A 2 . Thus we also have : ` t : Int[x]− → Int[u(x)]− as a typing in system F (see the proof of the normalization theorem 9.6 for F A 2 ). Now this typing is simply ` t : Int → Int. Q.E.D.
Gödel’s ¬translation Theorem 10.5. Let E be a system of equations in a language L (E ) ⊃ L , and σ, τ two terms of L (E ). If the expression `E Int[σ] → Int[τ] can be proved in classical second order logic (that is with rules D0 through D8, page 172), then it can also be proved in intuitionistic second order logic (in other words, without using rule D0). We add to the language L (E ) a propositional constant O (that is a 0ary relation symbol); whenever A is a formula, we will denote the formula A → O by ¬0 A. For every formula A, we define a formula A ∗ , by induction, by the following conditions : if A is atomic, then A ∗ is ¬0 A; (A → B )∗ is A ∗ → B ∗ ; (∀ξ A)∗ is ∀ξ A ∗ whenever ξ is an individual variable or a relation variable. So the formula A ∗ is obtained by putting ¬0 before every atomic subformula of A. A ∗ will be called the Gödel translation of A. Remark. This is not exactly the classical definition of the Gödel translation of A, according to which one should put ¬0 ¬0 before every atomic subformula of A.
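As a worked example (not in the text), applying the translation to the formula Int ≡ ∀X{(X → X) → (X → X)} puts ¬₀ before each occurrence of the atomic subformula X :

```latex
% Gödel translation of Int (worked example, assuming the definition above)
\mathrm{Int}^{*} \;\equiv\;
  \forall X\,\bigl\{(\neg_{0}X \to \neg_{0}X) \to (\neg_{0}X \to \neg_{0}X)\bigr\}
```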
Lemma 10.6.
i) ¬0¬0¬0A ⊢i ¬0A;
ii) ¬0¬0(A → B) ⊢i ¬0¬0A → ¬0¬0B;
iii) ¬0¬0∀ξ A ⊢i ∀ξ¬0¬0A whenever ξ is a first or second order variable.
The notation A1, . . . , Ak ⊢i A means that A is an intuitionistic consequence of A1, . . . , Ak, that is to say that the expression A1, . . . , Ak ⊢ A can be obtained by means of rules D1 through D8 of chapter 9 (page 172).
Chapter 10. Representable functions in system F
i) Remark that, if X ⊢i Y, then ¬0Y ⊢i ¬0X; indeed, if Y is deduced from X, then O is deduced from X and Y → O. Now, clearly, A ⊢i ¬0¬0A. Therefore, by the previous remark, we have ¬0¬0¬0A ⊢i ¬0A.
ii) With the premises ((A → B) → O) → O, (A → O) → O, B → O, we have to deduce O. From B → O, we deduce (A → B) → (A → O); with (A → O) → O, we obtain (A → B) → O. From this and ((A → B) → O) → O, we deduce O.
iii) We wish to show ((∀ξ A) → O) → O ⊢i (A → O) → O; so with the premises ((∀ξ A) → O) → O and A → O, we have to deduce O. Now we know ∀ξ A ⊢i A; with A → O, we deduce (∀ξ A) → O; from this and ((∀ξ A) → O) → O, we obtain O.
Q.E.D.
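Remark. Via the Curry-Howard correspondence, the deductions in (i) and (ii) are ordinary λ-terms. The following sketch (in Python notation, not part of the original development) spells them out, reading ¬0A as a function from A to O; the closed terms `na` and `h` used as sanity checks are ours:

```python
# (i)  ¬0¬0¬0 A ⊢ ¬0 A : given h : ((A → O) → O) → O and a : A,
#      feed h the double-negation introduction λk (k)a.
tneg = lambda h: lambda a: h(lambda k: k(a))

# (ii) ¬0¬0(A → B) ⊢ ¬0¬0 A → ¬0¬0 B : with c : ((A → B) → O) → O,
#      d : (A → O) → O and nb : B → O, produce an element of O.
dneg_imp = lambda c: lambda d: lambda nb: c(lambda f: d(lambda a: nb(f(a))))

# Sanity check, with O mocked as a concrete Python value:
na = lambda a: ("O", a)        # a closed term of type A → O
h = lambda nn: nn(na)          # a term of type ¬0¬0¬0 A
```

Applying `tneg(h)` to an argument a returns exactly `na(a)`, as the proof of (i) predicts.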
Lemma 10.7. ¬0¬0A* ⊢i A* for every formula A.

The proof is by induction on the length of the formula A.
If A is atomic, what we have to prove is ¬0¬0¬0A ⊢i ¬0A: this is precisely lemma 10.6(i).
If A is B → C, then ¬0¬0A* is ¬0¬0(B* → C*); by lemma 10.6(ii), we have ¬0¬0A* ⊢i ¬0¬0B* → ¬0¬0C*. Now B* ⊢i ¬0¬0B* (obvious), and ¬0¬0C* ⊢i C* (induction hypothesis). Hence ¬0¬0A* ⊢i B* → C*, that is ¬0¬0A* ⊢i A*.
If A is ∀ξ B, where ξ is a first order or second order variable, then ¬0¬0A* is ¬0¬0∀ξ B*. By lemma 10.6(iii), we have ¬0¬0A* ⊢i ∀ξ¬0¬0B*, and therefore ¬0¬0A* ⊢i ¬0¬0B*. Now, by the induction hypothesis, ¬0¬0B* ⊢i B*. Thus ¬0¬0A* ⊢i B*, and since ξ does not occur free in ¬0¬0A*, we have ¬0¬0A* ⊢i ∀ξ B*, that is ¬0¬0A* ⊢i A*.
Q.E.D.
Lemma 10.8. (¬¬A)* ⊢i A* for every formula A.

Since ⊥ is the formula ∀X X, ⊥* is ∀X ¬0X, that is ∀X(X → O). Therefore O ⊢i ⊥* (obvious) and ⊥* ⊢i O (replace X by O → O in the previous formula). Thus ⊥* is equivalent to O in intuitionistic logic.
(¬¬A)* is the formula ((A → ⊥) → ⊥)*, that is (A* → ⊥*) → ⊥*. Thus (¬¬A)* ⊢i (A* → O) → O, or equivalently (¬¬A)* ⊢i ¬0¬0A*. Then the conclusion follows from lemma 10.7.
Q.E.D.
Lemma 10.9. Let A, B be two formulas, and X a k-ary relation variable. Then:
{A[B/Xx1 . . . xk]}* ⊢i A*[¬0B*/Xx1 . . . xk] and A*[¬0B*/Xx1 . . . xk] ⊢i {A[B/Xx1 . . . xk]}*.
The proof is by induction on the length of A. If A is atomic and its first symbol is X, say A ≡ Xt1 . . . tk, then:
A*[¬0B*/Xx1 . . . xk] ≡ ¬0¬0B*[t1/x1, . . . , tk/xk] and
{A[B/Xx1 . . . xk]}* ≡ B*[t1/x1, . . . , tk/xk].
Then the result follows from lemma 10.7. The other cases of the inductive proof are trivial.
Q.E.D.
Theorem 10.10. Let E be a system of equations in a language L(E) ⊃ L, let 𝒜 be a finite set of formulas of L(E), and 𝒜* = {F*; F ∈ 𝒜}. If one can obtain 𝒜 ⊢E A by rules D0 through D8, page 172, then one can obtain 𝒜* ⊢iE A* by rules D1 through D8 only.

The theorem means that if 𝒜 ⊢E A can be proved in classical second order logic, then its Gödel translation 𝒜* ⊢iE A* can be proved in intuitionistic second order logic. We shall prove it by induction on the length of the derivation of 𝒜 ⊢E A by rules D0, . . . , D8. Consider the last rule used.
If it is D0, then 𝒜 ⊢E A can be written ℬ, ¬¬A ⊢E A. It is enough to show that (¬¬A)* ⊢i A*: this was done in lemma 10.8.
If it is D1, D2, D3, D5 or D7, the result is obvious from the definition of 𝒜*.
If it is D4 or D8, we obtain the result by proving that {A[t/x]}* ≡ A*[t/x] for every term t and every formula A of L (this is immediate, by induction on A).
If it is D6, then A ≡ B[C/Xx1 . . . xk]; by the induction hypothesis, the expression 𝒜* ⊢i ∀X B* was previously deduced; so we also obtain 𝒜* ⊢i B*[¬0C*/Xx1 . . . xk]. By lemma 10.9, we finally deduce 𝒜* ⊢i {B[C/Xx1 . . . xk]}*.
Q.E.D.
Proposition 10.11. Let U, V be two formulas of L(E) such that U ⊢iE U* and V* ⊢iE ¬0¬0V. If one can obtain U ⊢E V by rules D0 through D8, then one can obtain U ⊢iE V by rules D1 through D8 only.

By theorem 10.10, U* ⊢iE V* can be obtained by rules D1 through D8. The hypotheses about the formulas U, V show that one can also deduce U ⊢iE ¬0¬0V by means of these rules, that is U ⊢iE (V → O) → O. Now O is a propositional constant which does not occur in U. Thus it suffices to replace O by V to obtain the desired result: U ⊢iE V.
Q.E.D.
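Remark. The final step of proposition 10.11 — replacing the propositional constant O by V — has a simple computational reading: a proof of (V → O) → O that is uniform in O can be instantiated at O := V and then applied to the identity function, yielding a proof of V. A sketch (in Python notation, not part of the original development; the witness `g` is ours):

```python
# extract : ((V → O) → O) → V, obtained by taking O := V
# and applying the hypothesis to the identity function on V.
extract = lambda g: g(lambda v: v)

# A generic inhabitant of (V → O) → O, built from the value 7 : V;
# it uses its argument k : V → O only by applying it.
g = lambda k: k(7)
```

Because `g` never inspects O itself, `extract(g)` recovers the underlying value of type V.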
Any type U[x] such that U[x] ⊢i U*[x] will be called an input type, while a type V[x] such that V*[x] ⊢i ¬0¬0V[x] will be called an output type.
Proposition 10.12. The type Int[x] is both an input and an output type, that is to say that we have Int[x] ⊢i Int*[x] and Int*[x] ⊢i ¬0¬0Int[x].

Int[x] is the formula ∀X{∀y(Xy → X(suc)y), X0 → Xx}. By replacing X with ¬0X, we immediately obtain Int*[x], which is:
∀X{∀y(¬0Xy → ¬0X(suc)y), ¬0X0 → ¬0Xx}.
Now, in the formula Int*[x], replace Xx with ¬0Int[x]; the result is:
∀y(¬0¬0Int[y] → ¬0¬0Int[(suc)y]), ¬0¬0Int[0] → ¬0¬0Int[x].
Now it can easily be seen that ⊢i Int[y] → Int[(suc)y], so that ⊢i ¬0¬0Int[y] → ¬0¬0Int[(suc)y]. We also have ⊢i Int[0], and therefore ⊢i ¬0¬0Int[0]. Finally, Int*[x] ⊢i ¬0¬0Int[x].
Q.E.D.
Now we are able to prove theorem 10.5: suppose that Int[σ] ⊢E Int[τ] has been obtained by means of rules D0, D1, . . . , D8. By proposition 10.12, we have Int[σ] ⊢i Int*[σ] and Int*[τ] ⊢i ¬0¬0Int[τ]. Therefore, by proposition 10.11, we can obtain Int[σ] ⊢iE Int[τ] by rules D1, . . . , D8 only.
Q.E.D.
Theorems 10.3 and 10.4 provide a characterization of the class of those recursive functions from N to N which are represented by a λ-term of type Int → Int in system F (and therefore also of the class of those recursive functions which are represented by a typed λ-term in FA2, of type Int[x] → Int[t(x)], with an arbitrary equational system E, in a language L(E) ⊃ L, t(x) being a term of L(E)). This is the class of functions which are provably total in the theory CA + C0; it is also the class of functions which are provably total in the theory CA + E, where E is any equational system containing C0.
Undecidability of strong normalization

As an application of the above results (namely theorems 8.9 and 10.4), we will now show:

Theorem 10.13. The set of strongly normalizable λ-terms is not recursive.

The argument is a modification of [Urz03]. We first prove:

Theorem 10.14. Let f : N² → {0, 1} be representable by a λ-term of type Int, Int → Bool in system F. Then there exists a λ-term Φ, with the only free variable x, such that, for all m ∈ N:
i) Φ[m̂/x] is solvable ⇒ (∃n ∈ N) f(m, n) = 1;
ii) (∃n ∈ N) f(m, n) = 1 ⇒ Φ[m̂/x] is strongly normalizable.
Remark. Recall that Int ≡ ∀X((X → X), X → X) and Bool ≡ ∀X(X, X → X); suc = λnλfλx(f)(n)fx is a λ-term for the successor; if m ∈ N, then m̂ = (suc)^m 0; 0 = λxλy y, 1 = λxλy x.
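Remark. These encodings run directly in any language with first-class functions. The following sketch (in Python, not part of the original development; the decoding function `to_int` is ours) implements the numerals m̂ and the booleans 0, 1:

```python
zero = lambda x: lambda y: y       # 0 = λxλy y  (also the boolean "false")
one  = lambda x: lambda y: x       # 1 = λxλy x  (the boolean "true")
suc  = lambda n: lambda f: lambda x: f(n(f)(x))   # suc = λnλfλx (f)(n)f x

def numeral(m):                    # m̂ = (suc)^m 0
    t = zero
    for _ in range(m):
        t = suc(t)
    return t

to_int = lambda n: n(lambda k: k + 1)(0)   # decode m̂ back to a Python int
```

Note that the boolean 1 = λxλy x selects its first argument and 0 = λxλy y its second, which is exactly how the term W below dispatches on the value of φ.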
Let φ be a λ-term which represents f, such that ⊢F φ : Int, Int → Bool.
Consider the following λ-term, with a free variable x:
W = λy(φxy0)λw(w)y⁺w, with y⁺ = (suc)y.
We define Φ = (W)0̂W, and we show that Φ has the desired property.
For each integer m, we put: Wm = W[m̂/x] = λy(φm̂y0)λw(w)y⁺w.
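Remark. The behaviour of W can be observed concretely. In the sketch below (Python, call-by-value rather than weak head reduction, so only an analogy; all names, and the replacement of the λ-term φ by a Python-level stand-in, are ours), the iteration Wm 0̂ Wm, Wm 1̂ Wm, . . . stops as soon as f(m, n) = 1, and would recurse forever (a RecursionError) for an f that is never 1:

```python
false = lambda x: lambda y: y      # 0 = λxλy y
true  = lambda x: lambda y: x      # 1 = λxλy x
suc   = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)

def numeral(m):                    # m̂ = (suc)^m 0
    t = false
    for _ in range(m):
        t = suc(t)
    return t

def make_W(f, m):
    # Wm = λy (φ m̂ y 0) λw (w)y⁺w, with φ replaced by a stand-in
    # that decodes its Church-numeral argument and consults f.
    def W(y):
        b = true if f(m, to_int(y)) == 1 else false   # b behaves like φ m̂ ŷ
        return b(false)(lambda w: w(suc(y))(w))       # (b 0) λw (w)y⁺w
    return W

# f(m, n) = 1 iff n = m, so the search started at 0̂ stops after m steps.
W3 = make_W(lambda m, n: 1 if n == m else 0, 3)
Phi3 = W3(numeral(0))(W3)          # Φ[m̂/x] = (W)0̂ W, here with m = 3
```

Here `Phi3` evaluates to the identity λy y (the result of 0 applied to Wm), mirroring the case analysis in the proofs of (i) and (ii) below.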
Proof of (i). Let m be a fixed integer such that f(m, n) = 0 for all n ∈ N. We have:
Wm n̂ Wm ≻w ((φm̂n̂0)λw(w)n̂⁺w)Wm.
Recall that ≻w denotes the weak head reduction (see page 30).
Since φ represents f, we have φm̂n̂ ≃β 0 for all n ∈ N. Therefore, by lemma 2.12, we have:
((φm̂n̂0)λw(w)n̂⁺w)Wm ≻w (λw(w)n̂⁺w)Wm ≻w Wm n̂⁺ Wm.
We have shown that Wm n̂ Wm ≻w Wm n̂⁺ Wm for all n. But n̂⁺ = (suc)n̂ = p̂ with p = n + 1. It follows that:
Φ[m̂/x] = Wm 0̂ Wm ≻w Wm 1̂ Wm ≻w · · · ≻w Wm n̂ Wm ≻w · · ·
This infinite weak head reduction shows that Φ[m̂/x] is not solvable (theorem 4.9).
Proof of (ii). Let A = Int → ∀X(X → Id), where Id = ∀X(X → X). We first show that ⊢F Wm : Int, A → Id for every m ∈ N. Indeed, we have:
y : Int ⊢F y⁺ : Int, because ⊢F suc : Int → Int;
y : Int, w : A ⊢F wy⁺ : ∀X(X → Id), and therefore y : Int, w : A ⊢F wy⁺w : Id.
It follows that y : Int ⊢F λw wy⁺w : A → Id. Now, since 0 = λxλy y, we have trivially y : Int ⊢F 0 : A → Id. But, by hypothesis, x : Int, y : Int ⊢F φxy : Bool, and therefore:
y : Int ⊢F (φm̂y0)λw wy⁺w : A → Id
(note that ⊢F m̂ : Int, because ⊢F 0 : Int and ⊢F suc : Int → Int).
Thus, we get ⊢F λy(φm̂y0)λw wy⁺w : Int, A → Id, which is the result.
If p ∈ N, then we have ⊢F p̂ : Int. It follows that ⊢F Wm p̂ : A → Id for every m, p ∈ N. In particular, Wm and Wm p̂ are strongly normalizable (theorem 8.9).
Lemma 10.15. Let t, t*, t1, . . . , tk ∈ Λ be such that t ≻w t* (t* is obtained from t by weak head reduction). If t and t*t1 . . . tk are strongly normalizable, then tt1 . . . tk is strongly normalizable.

Proof by induction on the length of the weak head reduction from t to t*. If this length is 0, the result is obvious, since t = t*. Otherwise, we have t = (λx u)vu1 . . . ul, and we put t′ = u[v/x]u1 . . . ul. By the induction hypothesis, we see that t′t1 . . . tk = u[v/x]u1 . . . ult1 . . . tk is strongly normalizable. But v is also strongly normalizable, since t is. Therefore, by lemma 4.27, (λx u)vu1 . . . ult1 . . . tk = tt1 . . . tk is strongly normalizable.
Q.E.D.
We now consider a fixed integer m such that f(m, p) = 1 for some p. Let n be the first such p. We have to show that Wm 0̂ Wm is strongly normalizable. In fact we show, by a backward recursion from n to 0, that Wm p̂ Wm is strongly normalizable, for 0 ≤ p ≤ n. With this aim in view, we apply lemma 10.15, with t = Wm p̂, k = 1, t1 = Wm. We have already proved that t and t1 are strongly normalizable. We have:
t = (λy(φm̂y0)λw(w)y⁺w)p̂ ≻w (φm̂p̂0)λw(w)q̂w, with q = p + 1, since (suc)p̂ = q̂.
Consider first the case p = n; by hypothesis, we have φm̂n̂ ≃β 1. Therefore, by lemma 2.12, we have (φm̂n̂0)λw(w)q̂w ≻w 0. It follows that t = Wm n̂ ≻w 0, and we can take t* = 0. We have to show that t*t1, i.e. 0Wm, is strongly normalizable, which is trivial, since Wm is.
Consider now the case p < n; by hypothesis, we have φm̂p̂ ≃β 0. Therefore, by lemma 2.12, we have (φm̂p̂0)λw(w)q̂w ≻w λw(w)q̂w. It follows that t = Wm p̂ ≻w λw(w)q̂w, and we can take t* = λw(w)q̂w. We have to show that t*t1, i.e. (λw(w)q̂w)Wm, is strongly normalizable. By lemma 4.27, it suffices to show that Wm and Wm q̂ Wm are strongly normalizable. This is already known for Wm; for Wm q̂ Wm, it follows from the induction hypothesis, since q = p + 1 (we are doing a backward induction).
Q.E.D.
We shall now assume the following results from recursion theory:
(1) For every recursively enumerable set E ⊂ N, there exists a primitive recursive function f : N² → {0, 1} such that E = {m ∈ N; (∃n ∈ N) f(m, n) = 1}. In other words, every recursively enumerable set of integers is the projection of a subset of N², the characteristic function of which is primitive recursive.
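Remark. Fact (1) can be illustrated on a toy example (ours, not from the book): take E to be the set of composite numbers, with the primitive recursive test f(m, n) = 1 iff 1 < n < m and n divides m; membership in E is then a search over the second coordinate. A sketch in Python:

```python
def f(m, n):
    # primitive recursive characteristic function of a subset of N^2;
    # f(m, n) = 1 iff n is a proper divisor of m with 1 < n < m
    return 1 if 1 < n < m and m % n == 0 else 0

def in_E(m, bound):
    # semi-decision of the projection E = {m : (∃n) f(m, n) = 1},
    # here with an explicit search bound for demonstration purposes
    return any(f(m, n) == 1 for n in range(bound))
```

For this particular E the search can be bounded, so E is recursive; for a general recursively enumerable E, only the unbounded search is available.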
(2) Every primitive recursive function is provably total in the theory CA + E for some set E of equations.
Remark. Given a primitive recursive function, the idea is simply to write down the equations defining it and to prove with them, in classical second order logic, that this function sends integers into integers. The details will be written in a future version of this book.
We can now prove theorem 10.13. More precisely, we show:

Theorem 10.16. The set of strongly normalizable terms and the set of unsolvable terms are recursively inseparable. In other words, a recursive set which contains every strongly normalizable term must contain an unsolvable term.

Let R be a recursive set which contains every strongly normalizable term and no unsolvable term. We choose a recursively enumerable set E ⊂ N which is not recursive. Let f be a primitive recursive function, obtained by (1). By means of (2) and theorem 10.4, we see that f is representable, in system F, by a λ-term of type Int, Int → Bool. By theorem 10.14, we get a λ-term Φ such that, for all m ∈ N:
Φ[m̂/x] is solvable ⇒ m ∈ E;
m ∈ E ⇒ Φ[m̂/x] is strongly normalizable.
By hypothesis on R, this gives Φ[m̂/x] ∈ R ⇔ m ∈ E. This is a contradiction, because R is recursive and E is not.
Q.E.D.
References for chapter 10: [Fri77], [Gir71], [Gir72], [Urz03]. (The references are in the bibliography at the end of the book.)
Bibliography

[Ama95] R. Amadio. A quick construction of a retraction of all retractions for stable bifinites. Information and Computation 116(2), 1995, p. 272-274.
[Bar83] H. Barendregt, M. Coppo, M. Dezani-Ciancaglini. A filter model and the completeness of type assignment. J. Symb. Logic 48, no. 4, 1983, p. 931-940.
[Bar84] H. Barendregt. The lambda-calculus. North Holland, 1984.
[Bera91] S. Berardi. Retractions on dI-domains as a model for Type:Type. Information and Computation 94, 1991, p. 377-398.
[Berl92] C. Berline. Rétractions et interprétation interne du polymorphisme : le problème de la rétraction universelle. Theor. Inf. and Appl. 26, no. 1, 1992, p. 59-91.
[Berr78] G. Berry. Séquentialité de l'évaluation formelle des λ-expressions. In: Proc. 3e Coll. Int. sur la programmation, Paris, 1978 (Dunod, éd.).
[Boh68] C. Böhm. Alcune proprietà delle forme βη-normali nel λK-calcolo. Pubblicazioni dell'Istituto per le applicazioni del calcolo 696, Rome, 1968.
[Boh85] C. Böhm, A. Berarducci. Automatic synthesis of typed λ-programs on term algebras. Th. Comp. Sc. 39, 1985, p. 135-154.
[Bru70] N. de Bruijn. The mathematical language AUTOMATH, its usage and some of its extensions. Symp. on automatic demonstration. Springer Lect. Notes in Math. 125, 1970, p. 29-61.
[Chu41] A. Church. The calculi of lambda-conversion. Princeton University Press, 1941.
[Con86] R. Constable & al. Implementing mathematics with the Nuprl proof development system. Prentice Hall, 1986.
[Cop78] M. Coppo, M. Dezani-Ciancaglini. A new type assignment for λ-terms. Archiv. Math. Logik 19, 1978, p. 139-156.
[Cop84] M. Coppo, M. Dezani-Ciancaglini, F. Honsell, G. Longo. Extended type structures and filter lambda models. In: Logic Colloquium 82, ed. G. Lolli & al., North Holland, 1984, p. 241-262.
[Coq88] T. Coquand, G. Huet. The calculus of constructions. Information and Computation 76, 1988, p. 95-120.
[Cur58] H. Curry, R. Feys. Combinatory logic. North Holland, 1958.
[Eng81] E. Engeler. Algebras and combinators. Algebra Universalis 13(3), 1981, p. 389-392.
[For83] S. Fortune, D. Leivant, M. O'Donnell. The expressiveness of simple and second order type structures. J. Ass. Comp. Mach. 30, 1983, p. 151-185.
[Fri77] H. Friedman. Classically and intuitionistically provably recursive functions. In: Higher set theory, ed. G. Müller & D. Scott, Springer Lect. Notes in Math. 669, 1977, p. 21-27.
[Gia88] P. Giannini, S. Ronchi della Rocca. Characterization of typing in polymorphic type discipline. In: Proc. of Logic in Comp. Sc. 88, 1988.
[Gir71] J.-Y. Girard. Une extension de l'interprétation de Gödel à l'analyse. In: Proc. 2nd Scand. Logic Symp., ed. J. Fenstad, North Holland, 1971, p. 63-92.
[Gir72] J.-Y. Girard. Interprétation fonctionnelle et élimination des coupures de l'arithmétique d'ordre supérieur. Thèse, Université Paris VII, 1972.
[Gir86] J.-Y. Girard. The system F of variable types, fifteen years later. Th. Comp. Sc. 45, 1986, p. 159-192.
[Gir89] J.-Y. Girard, Y. Lafont, P. Taylor. Proofs and types. Cambridge University Press, 1989.
[Hin78] J. Hindley. Reductions of residuals are finite. Trans. Amer. Math. Soc. 240, 1978, p. 345-361.
[Hin86] J. Hindley, J. Seldin. Introduction to combinators and λ-calculus. Cambridge University Press, 1986.
[How80] W. Howard. The formulae-as-types notion of construction. In: To H.B. Curry: Essays on combinatory logic, λ-calculus and formalism, ed. J. Hindley & J. Seldin, Academic Press, 1980, p. 479-490.
[Kri87] J.-L. Krivine, M. Parigot. Programming with proofs. J. Inf. Process. Cybern. EIK 26, 1990, 3, p. 149-167.
[Kri90] J.-L. Krivine. Opérateurs de mise en mémoire et traduction de Gödel. Arch. Math. Logic 30, 1990, p. 241-267.
[Lei83] D. Leivant. Reasoning about functional programs and complexity classes associated with type disciplines. 24th Annual Symp. on Found. of Comp. Sc., 1983, p. 460-469.
[Lév80] J.-J. Lévy. Optimal reductions in the lambda-calculus. In: To H.B. Curry: Essays on combinatory logic, λ-calculus and formalism, ed. J. Hindley & J. Seldin, Academic Press, 1980, p. 159-192.
[Lon83] G. Longo. Set theoretical models of lambda-calculus: theories, expansions, isomorphisms. Annals of Pure and Applied Logic 24, 1983, p. 153-188.
[Mar79] P. Martin-Löf. Constructive mathematics and computer programming. In: Logic, methodology and philosophy of science VI, North Holland, 1979.
[Mey82] A. Meyer. What is a model of the lambda-calculus? Information and Control 52, 1982, p. 87-122.
[Mit79] G. Mitschke. The standardization theorem for the λ-calculus. Z. Math. Logik Grundlag. Math. 25, 1979, p. 29-31.
[Par88] M. Parigot. Programming with proofs: a second order type theory. Proc. ESOP '88, Lect. Notes in Comp. Sc. 300, 1988, p. 145-159.
[Plo74] G. Plotkin. The λ-calculus is ω-incomplete. J. Symb. Logic 39, 1974, p. 313-317.
[Plo78] G. Plotkin. Tω as a universal domain. J. Comput. System Sci. 17, 1978, p. 209-236.
[Pot80] G. Pottinger. A type assignment for the strongly normalizable λ-terms. In: To H.B. Curry: Essays on combinatory logic, λ-calculus and formalism, ed. J. Hindley & J. Seldin, Academic Press, 1980, p. 561-577.
[Rey74] J. Reynolds. Toward a theory of type structures. Colloque sur la programmation, Springer Lect. Notes in Comp. Sc. 19, 1974, p. 408-425.
[Ron84] S. Ronchi della Rocca, B. Venneri. Principal type schemes for an extended type theory. Th. Comp. Sc. 28, 1984, p. 151-171.
[Sco73] D. Scott. Models for various type free calculi. In: Logic, methodology and philosophy of science IV, eds. P. Suppes & al., North Holland, 1973, p. 157-187.
[Sco76] D. Scott. Data types as lattices. S.I.A.M. Journal on Computing 5, 1976, p. 522-587.
[Sco80] D. Scott. Lambda-calculus: some models, some philosophy. In: Kleene symposium, ed. J. Barwise, North Holland, 1980, p. 223-266.
[Sco82] D. Scott. Domains for denotational semantics. Springer Lect. Notes in Comp. Sc. 140, 1982, p. 577-613.
[Sto77] J. Stoy. Denotational semantics: the Scott-Strachey approach to programming languages. M.I.T. Press, 1977.
[Tar55] A. Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific J. Math. 5, 1955, p. 285-309.
[Urz03] P. Urzyczyn. A simple proof of the undecidability of strong normalization. Math. Struct. Comp. Sc. 3, 2003, p. 513.
EBook Information

Series: Ellis Horwood (1993) (mise à jour 3 novembre 2011)

Year: 2,014

Edition: version 29 Sep 2014

Pages: 206

Pages In File: 206

Language: English

Commentary: Downloaded from https://www.irif.univparisdiderot.fr/~krivine/articles/Lambda.pdf

Org File Size: 1,167,098

Extension: pdf