Programming with Math | The Lambda Calculus
By Eyesomorphic
Summary
Topics Covered
- Lambda Calculus Computes Everything
- Higher-Order Functions Unlock Power
- Currying Simulates Multiple Inputs
- Types Prevent Nonsense Computations
- Types Encode Logical Proofs
Full Transcript
Learning a new programming language is hard. There's so much syntax to learn, and even then you have to understand how the language computes to go from this syntax to a meaningful program.
But what if I told you that there is a language that can do absolutely anything any other programming language can and ever will do, and that it consists of only three pieces of syntax and a single rule of computation?
Introducing the lambda calculus.
The lambda calculus is at its core a theory of functions.
By functions, I don't mean complicated blocks of code, but rather a mapping of inputs to outputs.
For example, we could have a function that maps a number to its square. So,
the function maps the input two to the output four and so on.
Writing the mapping from every possible input to its output is tedious at best.
So let's abstract away the specific inputs and just say that for some arbitrary input x, we map it to x^2.
We call x a variable.
We could have used any symbol in place of x. Mapping y to y^2, or star to star^2, doesn't change which inputs are mapped to which outputs.
As some fancy terminology, we say that these functions are alpha equivalent.
And the process of renaming the input variable and all instances of that variable in the function to a different variable is called alpha conversion.
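In Python terms, a small sketch of alpha equivalence (the function names below are my own):

```python
# Two alpha-equivalent functions: only the name of the bound variable differs,
# so they map every input to the same output.
def square_x(x): return x ** 2
def square_y(y): return y ** 2

assert square_x(3) == square_y(3) == 9
```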
To avoid having to use arrows to denote functions, let's introduce some new notation.
For a function that maps an arbitrary input x to the output m, we'll write it as follows.
We'll first write a Greek letter lambda to indicate that we're writing a function. There's really nothing special about this letter; it's simply a visual marker that we're defining a function.
Then we'll write the input variable, in this case x, next to the lambda.
We'll then write a dot and finally m, the output of our function. The dot is simply there to separate the input from the output.
We often call functions written like this lambda abstractions.
Okay. So now we have a concise notation to write functions. But on their own they don't exactly do very much.
That's where function application comes in.
Let's suppose I have the function that maps the input x to the output x + 1.
If Alice wanted to find what a specific number, say three, is mapped to, she can substitute in three for the input variable. So, we would replace any x's we see in the output with a three, giving us 3 + 1, or 4. We call this process of substituting a term for the input variable of a function beta reduction, and we write it as an arrow with a little beta next to it to show that a reduction has taken place.
In general, if we have a lambda abstraction lambda x dot m applied to an input n, denoted by writing the input on the right of our function, we can perform a beta reduction by going through m and substituting our input n for the variable x.
We'll shorten this by writing a slash to mean substituted for.
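As a rough Python analogue (a sketch, since Python evaluates rather than rewrites terms, but the effect here matches beta reduction):

```python
# The lambda abstraction λx. x + 1, written with Python's lambda syntax.
increment = lambda x: x + 1

# Application: substituting 3 for x in the output gives 3 + 1, i.e. 4.
print(increment(3))  # prints 4
```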
So we've now understood variables, lambda abstractions, and applications of functions via beta reduction.
This looks very much like the start of an extremely primitive programming language. To finish it off, we would just need to add support for multiple inputs, loops, booleans, some data structures, and a few more rules of computation.
As it turns out, we actually don't need anything more. If we only allow the programmer to write variables, lambda abstractions, and applications, and simply compute these terms using beta reduction, we can make anything that a computer will ever do.
This amazing result was documented in the Church-Turing thesis, two names we'll come across later on.
We call this tiny programming language the lambda calculus.
This result seems impossible, but the key to seeing how this is achieved is noticing that our functions here are higher order.
This means that functions can be passed as inputs to other functions and equally functions can return functions as an output.
This small subtlety is what gives the lambda calculus its punch, which I'll demonstrate by replicating a standard feature of more complicated programming
languages in the lambda calculus.
You might have noticed that lambda abstractions only have a singular input variable, which seems rather limiting.
What if Bob wants to make a function that, say, takes two numbers as inputs and then outputs their sum?
He can actually achieve this with a little trick. Consider the function that takes an input x and returns another function.
This returned function takes an input y and outputs x + y.
Okay, that's a little confusing, but let's see what happens when we apply this function to an input, say one.
Replacing all x's with ones gives the function lambda y dot 1 + y.
That is a function that takes an input and returns 1 + that input.
So if we apply this function to the number two, we replace the occurrence of y in our lambda abstraction with two and
we get 1 + 2 or three.
We've effectively given this function one input after another and it has returned their sum.
To generalize this, the arbitrary lambda abstraction lambda x dot lambda y dot L can be applied to two inputs M and N by first beta reducing with the first input M to the function lambda y dot L with M substituted for x, which is yet to be applied to the input N. We can then perform a second beta reduction with the input N to finally get the term L with M substituted for x and N substituted for y. So our output is a term with a substitution for x and a substitution for y, exactly as we'd expect from a function that takes two inputs.
This method of functions returning functions to sequentially apply to multiple inputs is called currying, named after the logician Haskell Curry.
If you have experience programming, why not try to make some curried functions yourself?
This Python program here, for example, uses the built-in lambda syntax to add two numbers together using currying.
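The exact program from the video isn't reproduced in this transcript, but a sketch of a curried addition in Python might look like this:

```python
# A curried addition: a function that returns a function, λx. λy. x + y.
add = lambda x: lambda y: x + y

add_one = add(1)    # first application: the function λy. 1 + y
print(add_one(2))   # second application: 1 + 2, prints 3
print(add(1)(2))    # or supply both inputs one after another
```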
This way of thinking can take a while to get used to, but hopefully I'm starting to convince you that the higher order lambda abstractions have more nuance than you might have expected.
To continue building the primitive calculus into a more practical programming language, we can encode some more common programming concepts into the calculus.
As a quick example, to use booleans and conditionals, we can represent true as the term lambda x dot lambda y dot x, the function that takes two inputs and returns the first, and represent false as the term lambda x dot lambda y dot y, the function that takes two inputs and returns the second.
This seems completely arbitrary, but look what happens when we encode an if statement as lambda b dot lambda x dot lambda y dot b applied to x and then y.
This lambda abstraction just takes three inputs B, X, and Y and outputs the result of applying B to the inputs X and Y.
This looks pretty random, but if we input a boolean as the first argument and then any two terms as the inputs X and Y, this function will behave just like an if statement. It will
return the first term if the boolean is true and the second if it's not.
In Python, this lambda abstraction would be the program: if B, return X; otherwise, return Y.
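Here's a hedged Python sketch of these encodings (the names TRUE, FALSE, and IF are my own):

```python
# Church-style booleans: true returns its first input, false its second.
TRUE = lambda x: lambda y: x    # λx. λy. x
FALSE = lambda x: lambda y: y   # λx. λy. y

# The if statement: apply the boolean b to the two branches x and y.
IF = lambda b: lambda x: lambda y: b(x)(y)   # λb. λx. λy. b x y

print(IF(TRUE)("m")("n"))    # prints m
print(IF(FALSE)("m")("n"))   # prints n
```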
Let's test that out. Applying our if statement to the inputs true, m, and n, where m and n are some terms in the lambda calculus, should return m. Let's plug true in for b first, giving lambda x dot lambda y dot true x y, applied to m and then n.
Plugging in m for x gives lambda y dot true m y, applied to n.
Then finally substituting n for y gives true m n.
Now we can turn the term true back into its definition in the lambda calculus: the function that takes two arguments and returns the first.
So let's first substitute m for x, giving lambda y dot m, applied to n.
Finally, substituting n for y just returns m, since there aren't any y's in this lambda abstraction at all. So after all that, we've ended up with our if statement with the boolean input true and the input terms m and n outputting the first term m, exactly as we expected it to do. Try plugging in false to our if statement and verify that it returns n this time.
You may be starting to feel slightly shortchanged. The introduction to this video suggested that the lambda calculus would be the solution to the problem of complicated programming languages.
However, as you saw from the tedious method by which we encoded relatively standard features into the calculus, the lambda calculus is certainly not suited for any real-world programming.
So what was the point in all of this?
Well, for starters, once we've tediously encoded concepts into the calculus, we can actually ignore the low-level implementation and just deal with the objects themselves.
For example, we now know that booleans and conditionals can be simulated. So,
we can just deal with boolean terms and if statements without worrying about unfolding their definitions.
This makes programming in the lambda calculus somewhat reasonable.
As an example, we can write the not function as lambda b dot if b then false true, instead of this absolute mess if we unfold all of the definitions.
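Continuing the Python sketch from earlier (reusing the assumed names IF, TRUE, and FALSE):

```python
# The not function, written against the encoded booleans without ever
# unfolding their definitions: λb. if b then false true.
NOT = lambda b: IF(b)(FALSE)(TRUE)

print(NOT(TRUE) is FALSE)   # prints True
print(NOT(FALSE) is TRUE)   # prints True
```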
Given that we can also encode numbers, recursion and data types like pairs, I hope you can start to see the power of
this miniature programming language.
I implore you to study how these concepts can be encoded, but here I'd really like to focus on the implications of our newfound mathematical programming language.
We set out trying to find a simple and easy programming language that we could learn instead of more verbose and intricate languages, but instead we've stumbled across something far more profound. The lambda calculus gives us a tiny definition of what it means to compute, and because of this it allows us to study programming and computation mathematically.
My favorite example of this is introducing a simple type system to the lambda calculus.
As some motivation, consider our not function from earlier: lambda b dot if b then false true.
This function maps the input true to the output false and the input false to the output true. Essentially just
swapping true and false.
But there is a slight subtlety here.
There is no actual requirement for our input to be a boolean.
Our not function applied to an input of, say, one beta reduces to if one then false true. This is nonsensical: "if one" makes no sense, because one isn't a boolean. So trying to use our encodings of these concepts to reduce this further will result in a term with no real meaning.
To avoid this, we can add some rules to our lambda calculus.
We'll assign a label called a type to terms in the lambda calculus that make sense, which in our case is going to be built from the types bool for boolean values and num for numeric values.
We'll indicate the type of a term by writing the type after a colon. So we'd
write true colon bool to say that true is of type boolean.
Lambda abstractions have the type A arrow B, normally said as type A to B, where A is the type of its input and B
is the type of its output.
We'll specify the type of the input by putting it next to the input variable when writing lambda abstractions.
So to finally solve our problem of nonsense terms, we'll now say that we can only apply a lambda abstraction to a term that has the same type as the one
specified by its input variable.
As an example, we'd now write our not function like this, specifying that our input B is a boolean.
Since this abstraction takes a boolean and then outputs a boolean, it will have type bool to bool.
This means that we are simply not allowed to apply this function to the term one as this has type num and not bool.
Whilst this does limit what the calculus can do, it does stop a lot of nonsense terms from arising.
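As a loose analogy (a sketch using Python's type hints, which a static checker such as mypy enforces before the program runs, much like our typing rule):

```python
# The not function with its type specified: not : bool -> bool.
# Here Python's bool plays the role of the type bool from the video.
def not_function(b: bool) -> bool:
    return False if b else True

not_function(True)  # well typed: True has type bool
not_function(1)     # rejected by mypy: 1 has type num (int), not bool
                    # (plain Python itself would still happily run this)
```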
The version of the lambda calculus with this typing system attached is called the simply typed lambda calculus, and it is very reminiscent of statically typed languages like C or Java. But despite this similarity, the lambda calculus and its simply typed version were invented by Alonzo Church in the 1930s and 40s, before programming and even computer science itself were being formally studied.
As a side note, Alonzo Church was actually the doctoral adviser of the father of computer science, Alan Turing, whose achievements have rightly earned
him the status of a household name.
Whilst the lambda calculus has played a pivotal role in the study of programming, it has been equally important in the field of mathematical logic, as we can define systems of logic
using the calculus.
In fact, our simply typed lambda calculus has a particularly interesting interpretation in terms of logic.
Let's say that the types in our calculus should be interpreted as propositions.
That is, statements which are either true or false.
Then we'll say that each term in the lambda calculus should be seen as a proof that the proposition corresponding to that term's type is true.
As an example, suppose we have an arbitrary type A.
We can interpret this as corresponding to a particular proposition.
Then we'll think of a term of type A as a proof that A is true. If no
such term exists, then A is false.
This seems rather strange, but bear with me. If we use this interpretation of our simply typed lambda calculus, then we find that the type of lambda abstractions, that being A to B for some arbitrary types A and B, corresponds to the proposition A implies B. The proposition A implies B is simply stating that if I can prove A is true, then I know that B must be true as well.
For example, if A was the proposition that it's raining and B was the proposition that the ground is wet, then the proposition A implies B would be if
it is raining, then the ground is wet.
If we assume that this implication is true, then if I could prove that it was raining, I would also know that the ground is wet.
So, how does this play out in our calculus? Let's take two types A and B.
Then the proposition A implies B is equivalent to the type A to B.
Let's suppose that A was true.
That is, there exists some term N of type A.
If we also suppose that A implies B was true, then there must exist some term of type A to B.
This must be a lambda abstraction of the form lambda x dot M, where the input x is of type A and the output M is of type B.
Now A implies B is true and we know that A is true, so we know that logically B must be true as well. We'll separate the propositions we're assuming to be true and the logical conclusions to these assumptions with a line. For our correspondence to work, we then need to be able to use this lambda abstraction and our term N to create a term of type B.
The key here is noticing that we can apply our lambda abstraction to N, because N is of type A.
Crucially, we know that this has type B, as the lambda abstraction outputs terms of type B.
As we can find a term of type B, we know that the proposition corresponding to B is true, just as we expected it to be.
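In Lean, this whole argument is literally just function application; here is a minimal sketch (written in Lean 4 syntax):

```lean
-- Given a proof f of A → B and a proof n of A, applying f to n
-- yields a proof of B: beta reduction is modus ponens.
example (A B : Prop) (f : A → B) (n : A) : B := f n
```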
This astounding relationship between propositions and types is called the Curry-Howard correspondence, and it has some fascinating implications.
If we build a programming language with this in mind, we can actually use this correspondence to explicitly write mathematical proofs using computer code.
These languages are called proof assistants, and popular ones include Lean and Agda.
This Lean program here, for example, is a proof that there are infinitely many prime numbers, and we know that it's correct since it type checks. As this area of research develops, who knows, in the future mathematics might not be taught with pen and paper, but rather with a keyboard and mouse.
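The program shown on screen isn't reproduced in this transcript, but a statement of that theorem in Lean might look something like this sketch (assuming the Mathlib library and its lemma Nat.exists_infinite_primes; the video's actual code may differ):

```lean
import Mathlib

-- "For every n there is a prime at least as large as n", i.e. there are
-- infinitely many primes. If this type checks, the proof is correct.
theorem primes_are_infinite (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
  Nat.exists_infinite_primes n
```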
Well, that was quite the journey. We started out by exploring the essence of functions, and whilst the lambda calculus certainly isn't a serious contender among modern programming languages, it has allowed us to explore a rich theory of theoretical computer science and mathematical logic, whilst giving us the ability to reason about computer programs and code rigorously.
Who knew that a tiny little system of formal logic would have so many implications even a century after its creation?
If this topic interests you, I would encourage you to subscribe, leave a like, and browse other videos on this channel for similar explorations of mathematics and theoretical computer science. But until then, goodbye.