() { return ; }

var declarations, but without initialization:

    int x;            GOOD
    int x, y, z;      GOOD
    int * A;
    int ct, len;      GOOD
    int x = 5;        BAD

The comma operator: the expressions are evaluated left to right, and the value of the whole thing is the value of the last one (here, 0).

    int StupidThing (void)
    {
        int n;
        return (printf ("Enter a number: "),
                scanf ("%d", &n),
                printf ("You entered %d\n", n),
                0);
    }

Adjacent string literals are concatenated:

    "Hi" "There"  ==>  "HiThere"

    int x = 17 ;

    System.out.println (8+3 * 5);    // prints 23: * binds tighter than +

Tail Recursion: when the very last thing a method does is the recursive call.

    if (n==0) return 1;
    else return f(n-1);             <== tail recursion

factorial:

    if (n==0) return 1;
    else return n*fact(n-1);        <== NOT tail recursion

Now, most compilers worth their salt will translate tail recursion to a branch statement. When such a method runs, the recursion will not screw with the stack: it simply jumps back to the beginning of the method with new parameters, and doesn't create a new instance of the method on the stack.

Nested conditional expressions:

    (test) ? ((inner1) ? stuff : falsestuff)
           : ((inner2) ? stuff : falsestuff)

    select_first  = Lx.Ly.x
    select_second = Lx.Ly.y

In the world of lambda calculus, everything is a function. Every function accepts an argument and returns a result. These arguments and results are themselves functions, because everything is a function.

Do functions take two arguments? No, functions take one argument.
What if I want a function that takes two arguments? Write a function that takes one argument and have it return a function that takes the second argument and then returns the answer you want.
Is that hard? Not as much as you might think.

    true  = select_first
    false = select_second
    not   = Lx.((x false) true)
    and   = Lx.Ly.((x y) false)
    or    = Lx.Ly.((x true) y)
    cond  = Le1.Le2.Lc.((c e1) e2)

If c is true, cond evaluates to e1. If c is false, cond evaluates to e2.

    zero = identity = Lx.x
    succ = Ln.Ls.((s false) n)
    one  = (succ zero)
    two  = (succ one)

and so forth.

    one = (succ zero)
    (Ln.Ls.((s Lx.Ly.y) n) Lx.x)
    Ls.((s Lx.Ly.y) Lx.x)
    Ls.((s false) zero)

    two = (succ one)
    (Ln.Ls.((s false) n) one)
    Ls.((s false) one)

Now why are we defining our succ in this way? So that we can have an easy pred. succ is like adding one to an integer; pred is like subtracting one from an integer.

    iszero = Ln.(n select_first)

    (iszero zero)
    (Ln.(n select_first) zero)
    (zero select_first)
    (identity select_first)
    select_first
    true

    (iszero one)
    (Ln.(n select_first) Ls.((s false) zero))
    (Ls.((s false) zero) select_first)
    ((select_first false) zero)
    false

    (iszero two)
    ...
    ((select_first false) one)
    false

    pred = Ln.(((iszero n) zero) (n select_second))

    (pred (succ arg))
    (Ln.(((iszero n) zero) (n select_second)) (succ arg))
    (((iszero (succ arg)) zero) ((succ arg) select_second))
    ((succ arg) select_second)
    ((Ln.Ls.((s false) n) arg) select_second)
    (Ls.((s false) arg) select_second)
    ((select_second false) arg)
    arg

    (pred zero)
    (Ln.(((iszero n) zero) (n select_second)) zero)
    (((iszero zero) zero) (zero select_second))
    zero

So (pred zero) is zero, and (pred (succ arg)) is arg.

What does it mean to add?

    add(x,y):
        if (y==0) return x;
        return add (succ(x), pred(y));

    5+3 = 6+2 = 7+1 = 8+0 = 8

    add = Lx.Ly.((cond (iszero y) x) ((add (succ x)) (pred y)))

cond, iszero, add, succ, and pred here are actually expressions that we're writing in shorthand. We know what cond is, and can replace cond with the appropriate expression. Same with iszero, succ, and pred. The problem: if we replace add with the huge lambda-expression to which it is equivalent, that expression will also have an "add" in it. Then that will have to be replaced, again with something containing an add, and so forth, and so on. The reason this is difficult is that functions don't have names in lambda calculus, and so invoking a specific function by name simply can't be done.
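Before tackling that naming problem, here is a quick sanity check of the encodings so far, written as a rough Python sketch. Python is only a stand-in for the calculus here, and the helper names to_bool and to_int are conveniences invented for the illustration, not part of the notes:

    # booleans are selectors: true picks its first argument, false its second
    select_first  = lambda x: lambda y: x
    select_second = lambda x: lambda y: y
    true, false   = select_first, select_second

    not_ = lambda x: x(false)(true)            # not = Lx.((x false) true)
    and_ = lambda x: lambda y: x(y)(false)     # and = Lx.Ly.((x y) false)
    or_  = lambda x: lambda y: x(true)(y)      # or  = Lx.Ly.((x true) y)

    # numerals: zero is the identity, succ n pairs false with n
    identity = lambda x: x
    zero     = identity
    succ     = lambda n: lambda s: s(false)(n)
    iszero   = lambda n: n(select_first)
    pred     = lambda n: iszero(n)(zero)(n(select_second))

    one = succ(zero)
    two = succ(one)

    # conversion helpers, purely for printing results (not part of the calculus)
    to_bool = lambda b: b(True)(False)
    def to_int(n):
        i = 0
        while not to_bool(iszero(n)):
            n, i = pred(n), i + 1
        return i

    print(to_bool(iszero(zero)))    # True
    print(to_bool(iszero(two)))     # False
    print(to_int(pred(two)))        # 1, since (pred (succ arg)) is arg
    print(to_int(pred(zero)))       # 0, since (pred zero) is zero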
Back to the naming problem. Functions don't have names, but arguments to functions do. (In Lx.x, the argument does have a name: "x".) And you can pass a function in as an argument. So define add_helper to be

    add_helper = Lx.Ly.Lf.((cond (iszero y) x) (((f (succ x)) (pred y)) f))

and add is now

    add = Lx.Ly.(((add_helper x) y) add_helper)

Each time around, add_helper receives itself as the extra argument f, so it can make the "recursive" call without ever needing a name for itself.
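Continuing the Python sketch from above, here is one way the add_helper trick plays out. One caveat about the translation: Python evaluates every argument before a call happens, so handing both branches straight to cond would evaluate the recursive branch forever. The sketch below instead applies the boolean selector directly to two zero-argument thunks and forces only the chosen one; that is a workaround for Python's strictness, not part of the calculus. The needed definitions are repeated so the block runs on its own.

    # repeated from the earlier sketch so this block stands alone
    select_first  = lambda x: lambda y: x
    select_second = lambda x: lambda y: y
    true, false   = select_first, select_second
    identity = lambda x: x
    zero     = identity
    succ     = lambda n: lambda s: s(false)(n)
    iszero   = lambda n: n(select_first)
    pred     = lambda n: iszero(n)(zero)(n(select_second))
    to_bool  = lambda b: b(True)(False)
    def to_int(n):
        i = 0
        while not to_bool(iszero(n)):
            n, i = pred(n), i + 1
        return i

    # add_helper takes the function to "recurse" on as its extra argument f,
    # and add hands add_helper to itself, so no function needs its own name.
    add_helper = lambda x: lambda y: lambda f: iszero(y)(
        lambda: x                              # y is zero: the answer is x
    )(
        lambda: f(succ(x))(pred(y))(f)         # otherwise: add (succ x) (pred y)
    )()                                        # force whichever thunk was selected

    add = lambda x: lambda y: add_helper(x)(y)(add_helper)

    three = succ(succ(succ(zero)))
    two   = succ(succ(zero))
    print(to_int(add(three)(two)))             # 5, via 3+2 = 4+1 = 5+0 = 5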