LAMBDA CALCULUS

A way to define essentially all of computer programming from very basic building blocks. I call it the "quanta" of programming. From the lambda calculus, you can define booleans, integers, ifs, loops, arithmetic, and everything else.

The lambda calculus is not intrinsically recursive. You can emulate recursion with it (there's a sketch of how at the end of these notes), but functions can't literally call themselves. In fact, functions in the lambda calculus don't even have names, so a function has no way to refer to itself.

A set is recursively enumerable if there is an algorithm that always returns YES if an element is in the set and never returns YES if it is not; however, if the element is not in the set, the algorithm doesn't necessarily return anything at all. Computable, or recursive, means the algorithm always definitely returns YES or NO (this asymmetry is also sketched in code at the end of these notes). When Tyler was saying "recursive," I thought he was referring to recursion as a programming language uses it, in the sense of a method calling a method that is already on the stack.

Every program in the lambda calculus is technically a single (possibly quite lengthy) lambda expression. When human beings write lambda expressions, we often use MACROS: specific lambda expressions to which we just give names, as shorthand. But the MACROS aren't functions, they're just shorthand, so when the MACROS are expanded we still end up with one single huge lambda expression.

In the universe of lambda expressions, everything is a function of one variable. Everything. The variable that a function takes is itself a function taking one variable. The thing that the function returns is itself a function taking one variable.

The most basic lambda expression is this:

(Lx).x

This is the identity function. The x next to the lambda indicates the "parameter" of the function. The expression above takes its parameter and returns that parameter.

If I write two lambda expressions next to each other, the second expression is treated as the argument to the first:

(Lx).x (Lx).x  ==>  (Lx).x

(Lx).y (Lx).y  ==>  y

Here y is a "free variable": it's not connected to any sort of parameter. So the first lambda expression ignores its parameter and returns y.

(Lx).xx (Lx).xx  ==>  (Lx).xx (Lx).xx  ==>  (Lx).xx (Lx).xx  ==>  ...

This is the lambda calculus analog of an infinite loop.

Now, the book abuses the shorthand slightly, but not in any way that will interfere with programming.

In the lambda calculus, you can define certain expressions to mean TRUE and FALSE:

TRUE:  (Lx).(Ly).x
FALSE: (Lx).(Ly).y

TRUE expr1 expr2  ==>  expr1
FALSE expr1 expr2  ==>  expr2

t -> (t0, t1): this means "if t is true then t0 else t1"  <== BOOK SHORTHAND for ifs

(These boolean encodings are sketched in Python at the end of these notes.)

Here is a reduction, step by step:

(Lx).(Ly).(x-y) 5 3
(Ly).(5-y) 3
(5-3)
2

Continuing on with regular old imperative languages, we have operational vs. denotational semantics. The difference is very subtle and nuanced, so nuanced that I have a hard time arguing against the notion that it's a distinction without a difference.

Aexp -> (SIGMA -> Z)

This says that any arithmetic expression, for example x+y, is actually a function that maps a memory state to a specific integer. If sigma is a memory state in which x happens to be 2 and y happens to be 3, then the function x+y maps that memory state to the integer 5.

The expression x==17 gives us a function mapping memory states to boolean values. If x==17 is applied to a memory state in which x happens to be 71, then the function maps that memory state to FALSE.

A command corresponds to a partial function mapping states to states.
A "partial function" is one where inputs are not guaranteed to have an output in the function. In classical arithmetic, the function f(x) = 1/x can be construed as a partial function from R to R, meaning you don't always get an answer (there is no output for x = 0).

A[[n]] = (L sigma).n

So A[[5]] is a function that accepts a memory state and returns 5, ignoring the state entirely. (A Python sketch of this whole semantic reading follows below.)
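To make this concrete, here is a toy Python sketch of the denotational reading above. It is my own encoding, not the book's notation: I'm assuming a memory state sigma is just a dict from variable names to integers, and the names A_num, A_var, A_add, and B_eq are made up to stand in for the book's A[[...]] and B[[...]].

# The meaning of an arithmetic expression is a function from states to integers;
# the meaning of a boolean expression is a function from states to booleans.

def A_num(n):                   # A[[n]] = (L sigma).n -- a constant function
    return lambda sigma: n

def A_var(x):                   # A[[x]]: look x up in the memory state
    return lambda sigma: sigma[x]

def A_add(a1, a2):              # A[[a1 + a2]]: combine the two meanings pointwise
    return lambda sigma: a1(sigma) + a2(sigma)

def B_eq(a1, a2):               # B[[a1 == a2]]: map a state to True/False
    return lambda sigma: a1(sigma) == a2(sigma)

x_plus_y = A_add(A_var("x"), A_var("y"))
print(x_plus_y({"x": 2, "y": 3}))               # 5
print(B_eq(A_var("x"), A_num(17))({"x": 71}))   # False
print(A_num(5)({"x": 71, "y": 0}))              # 5, no matter what the state is

In the same spirit, the meaning of a command would be a Python function from a state dict to a state dict that might never return (say, a while-loop that doesn't terminate), which is exactly why it's only a partial function.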
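Going back to the TRUE/FALSE encodings and the book's if shorthand: here is a small Python sketch of my own, where every lambda takes exactly one argument, matching the one-variable-at-a-time style of the lambda calculus.

TRUE = lambda x: lambda y: x      # (Lx).(Ly).x -- keep the first argument
FALSE = lambda x: lambda y: y     # (Lx).(Ly).y -- keep the second argument

# The shorthand t -> (t0, t1) is just application of t to the two branches:
def if_then_else(t, t0, t1):
    return t(t0)(t1)

print(if_then_else(TRUE, "expr1", "expr2"))    # expr1
print(if_then_else(FALSE, "expr1", "expr2"))   # expr2

# The curried subtraction reduction, (Lx).(Ly).(x-y) 5 3:
sub = lambda x: lambda y: x - y
print(sub(5)(3))                               # 2

One caveat: Python evaluates both t0 and t1 before the boolean chooses between them, which is fine for plain values like these but differs from how the shorthand behaves when the branches are arbitrary expressions.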
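The earlier claim that recursion can be emulated even though no function can call itself is usually demonstrated with a fixed-point combinator. This isn't from the book; it's just a Python sketch of the strict (call-by-value) Z combinator, showing a "recursive" factorial built out of nothing but one-argument anonymous functions.

# Z combinator: a fixed-point combinator that works under Python's eager evaluation.
# Note that no lambda below refers to itself by name.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial without any literal self-reference:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))   # 120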
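Finally, the recursively-enumerable vs. recursive distinction is easier to see as code. This toy Python procedure is mine, not the book's: it answers YES whenever the answer really is YES, but when the answer would be NO it just keeps searching and never says anything.

def in_range_of(f, n):
    # Semi-decision procedure for "n is in the range of f", where f is assumed
    # to be a total computable function on the natural numbers.
    i = 0
    while True:
        if f(i) == n:
            return True       # definitely YES
        i += 1                # otherwise keep searching; this may never stop

# 36 is a perfect square, so this returns True:
print(in_range_of(lambda i: i * i, 36))
# in_range_of(lambda i: i * i, 37) would loop forever rather than answer NO.

This particular set happens to be decidable by a smarter algorithm, so the example only illustrates the shape of a semi-decision procedure: YES eventually gets confirmed, NO never gets announced.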