Infinite Sets
Dealing with infinity may not be what you expect when studying discrete math. But we've been working with infinite sets all along (integers, rationals, and so on), so let's look at them in a bit more depth.
One obvious difference between a finite and an infinite set: in a finite set, adding an element makes the set bigger, but that's not true for infinite sets.
Let's say $A$ is a finite set and $b \notin A$. Then, $|A \cup \{b\}| = |A| + 1$. We can see that $A \cup \{b\}$ is strictly bigger than $A$. However, now let's say that $A$ is an infinite set and $b \notin A$. In that case, there is a bijection between $A$ and $A \cup \{b\}$, so the two sets are the same size. So, if we have an infinite set, and we add an element to it, we still have an infinite set of the same size.
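To make that concrete, here is a minimal Python sketch (the choice of $\mathbb{N}$ and of $-1$ as the new element is mine, for illustration): pairing $\mathbb{N} \cup \{-1\}$ with $\mathbb{N}$ one-to-one shows the enlarged set is no bigger.

```python
# Bijection from N to N ∪ {-1}: send 0 to the new element, and shift
# everything else down by one. Every element on each side is paired exactly once.
def f(n):
    return n - 1

print([f(n) for n in range(6)])   # [-1, 0, 1, 2, 3, 4]
```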
But there is an important idea here: there are different sizes of infinity (you might already be familiar with this if you've heard of Cantor before).
Following that idea, we can also differentiate finite sets and infinite sets when it comes to relations. Now, when we think of a surjective relation between two finite sets (written $A \text{ surj } B$), it implies that $|A| \ge |B|$. Likewise, when talking about a bijective relation (written $A \text{ bij } B$), it implies that $|A| = |B|$. For infinite sets, these same relations are what we use to say that one set is "at least as big as" or "the same size as" another.
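For finite sets you can check these claims directly; here is a minimal sketch (the function and set names are my own):

```python
# For finite sets: a surjection from A onto B forces len(A) >= len(B),
# and a bijection additionally forces len(A) == len(B).
def is_surjective(f, A, B):
    return {f(a) for a in A} == set(B)

def is_bijective(f, A, B):
    return is_surjective(f, A, B) and len({f(a) for a in A}) == len(A)

A = {1, 2, 3, 4}
B = {'even', 'odd'}

def parity(a):
    return 'even' if a % 2 == 0 else 'odd'

print(is_surjective(parity, A, B))   # True
print(len(A) >= len(B))              # True, as the surjection guarantees
print(is_bijective(parity, A, B))    # False: 4 elements can't biject onto 2
```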
There is also a related idea, the Schroeder-Bernstein Theorem. It says that if set $A$ is at least as big as set $B$, and $B$ is at least as big as $A$, then $A$ and $B$ are the same size. If English is slowing down your imaginative capabilities, here is the thing put precisely:

$(A \text{ surj } B \wedge B \text{ surj } A) \implies A \text{ bij } B$
Remember that a notation like $A \text{ surj } B$ means that there is a surjective function from $A$ to $B$ (and similarly, $A \text{ bij } B$ means that there is a bijective function from $A$ to $B$).
A countable set is a set where you can list the elements as $a_0, a_1, a_2, \ldots$ such that every element of the set eventually appears in the list.
So, let's say we have a countable set $C$. Listing its elements as $c_0, c_1, c_2, \ldots$ pairs each natural number $n$ with the element $c_n$, so $\mathbb{N} \text{ bij } C$ when $C$ is infinite. Also, sets that look much bigger than $\mathbb{N}$, like the integers or the rationals, turn out to be countable as well.
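For instance, here is a sketch of my own of why the integers are countable: interleaving positives and negatives lists every integer at some finite index.

```python
from itertools import count, islice

# List Z as 0, -1, 1, -2, 2, ...: every integer appears exactly once,
# so pairing list index with list element gives a bijection with N.
def integers():
    yield 0
    for n in count(1):
        yield -n
        yield n

print(list(islice(integers(), 9)))   # [0, -1, 1, -2, 2, -3, 3, -4, 4]
```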
Then, what about uncountable sets?
We can start with an example. Cantor's diagonal argument shows that the set of infinite binary sequences, $\{0, 1\}^\omega$, is uncountable.

We can think of "a list" of infinite binary sequences $s_0, s_1, s_2, \ldots$. However, using each element diagonally, we can create another new infinite binary sequence: flip the $n$th bit of the $n$th sequence, producing $d$ with $d(n) = 1 - s_n(n)$. The new sequence $d$ differs from every $s_n$ in at least the $n$th position, so it cannot appear anywhere in the list.

In that case, the set of natural numbers is strictly smaller than the set $\{0, 1\}^\omega$: no surjection from $\mathbb{N}$ to $\{0, 1\}^\omega$ can exist.
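Here is the diagonal step on finite prefixes (the rows below are made up for illustration): flipping the $n$th bit of the $n$th row yields a sequence that disagrees with every row.

```python
# Any candidate "list" of binary sequences, as rows of a table.
rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]

# Flip the diagonal: the result differs from row n at position n.
diagonal = [1 - rows[n][n] for n in range(len(rows))]
print(diagonal)   # [1, 0, 1, 1], not equal to any row above
```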
The Halting Problem Basics
Let's take a look at the Halting Problem that's briefly touched upon in this course:
The general Halting Problem for some programming language is, given an arbitrary program, to determine whether the program will run forever if it is not interrupted.
The thing we're looking for is a way of detecting that a program does not halt.
Let's say we have a string `s` that defines a procedure (a function) `P`. We are going to write very crude pseudocode throughout this example. So, `s` is something like this:
```python
s = '''
def P(f):
    # do stuff (return or not return, that is the question)
'''
```
Now, `s` is said to halt iff `P(s)` returns (when a function returns, obviously it halts). Notice that what we're doing is passing `s` itself to what is defined by `s`.
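If you want to see this wiring in real Python, here is an exec-based sketch of my own, with a toy body that always returns:

```python
# A runnable sketch of "a string defining a procedure": exec turns the
# string into an actual function, which we then apply to the string itself.
demo_s = '''
def P(f):
    return len(f)   # toy body: this P always returns, so demo_s halts
'''

namespace = {}
exec(demo_s, namespace)   # now namespace['P'] is the procedure defined by demo_s
P = namespace['P']
print(P(demo_s))          # passing demo_s itself to what demo_s defines
```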
Let's assume that there is a function `Q` that decides whether `s` halts. We are arguing that such a function cannot exist, but for the sake of contradiction, let's say it does:
```
# THIS CANNOT EXIST
def Q(s):
    if s halts:      # pseudocode: the procedure defined by s returns on input s
        return True
    else:
        return False
```
So, `Q(s)` returns true if `s` halts, and false otherwise. It is a "halts decider".
We are now going to have another function `Q_PRIME` that is a modified version of `Q`:
```python
def Q_PRIME(s):
    if not Q(s):      # Q says s does not halt
        return True
    else:
        while True:   # Q says s halts, so go on forever
            pass
```
So, `Q_PRIME(s)` returns true if `Q(s)` returns false, and it never returns (it doesn't halt) if `Q(s)` returns true.
Again, if `Q_PRIME(s)` doesn't return, that means `s` halts (because `Q_PRIME(s)` loops forever exactly when `Q(s)` returns true, and `Q(s)` returns true when `s` halts).
Now, just as `s` defines a function `P`, let's say that `t` defines the function `Q_PRIME` itself.
So, `t` is said to halt iff `Q_PRIME(t)` returns (when a function returns, obviously it halts).
Here is the thing, though: when we pass `Q_PRIME` the string defining itself (`t`), there is a contradiction.
`Q_PRIME(t)` returns iff `t` doesn't halt (because `Q(t)` returns false). That is, if `Q_PRIME(t)` doesn't halt, then it returns, which is a contradiction.
So, `t` halts iff `t` doesn't halt.
In short, if `Q_PRIME(t)` returns, that means `t` halts. But it can only return if `t` doesn't halt, that is, if `Q(t)` returns false, because that's how we defined it. When applied to itself, we reach a contradiction.
And therefore there cannot be such a `Q`, a "halts decider". The Halting Problem is undecidable.
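To see the diagonal trick bite a concrete candidate, here is a small sketch; `naive_q` is my own made-up stand-in for `Q`, not a real decider:

```python
# naive_q is a made-up candidate decider: it claims every program halts.
def naive_q(s):
    return True

# q_prime defeats it, exactly as in the argument above.
def q_prime(s):
    if not naive_q(s):
        return True
    else:
        while True:   # naive_q claims s halts, so loop forever
            pass

# Applying q_prime to (a description of) itself would loop forever,
# even though naive_q claimed it halts: the candidate decider is wrong.
# Any other candidate fails the same way on its own q_prime.
```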
Self-referentiality is more than an interesting idea. It has an important role in set theory, computer science, and the problem of consciousness as well.
For our purposes, self-application, the idea of taking a function and applying it to itself, is important.
There are some classic examples, like the liar's paradox, a statement asserting "this statement is false." If it's false, then it has to be true; but when it's true, it means that it's false. That's a paradox.
But self-application is taken for granted in computer science. Take a look at a simple example like the `pow` function, which returns the result of taking one number to the power of another; we can feed its output right back in:
```python
from math import pow

pow(pow(2, 3), 2)            # 64.0
pow(pow(pow(2, 3), 2), 4)    # 16777216.0
```
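Python even lets a function receive itself as an argument. Here is a toy sketch of my own, where recursion works by passing the function to itself:

```python
# Self-application in action: fact_maker receives itself and recurses
# through that argument instead of through its own name.
def fact_maker(self_f, n):
    return 1 if n == 0 else n * self_f(self_f, n - 1)

print(fact_maker(fact_maker, 5))   # 120
```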
Russell's Paradox
What Russell was proposing was this: consider the collection of all sets that are not members of themselves,

$W ::= \{ s \mid s \text{ is a set and } s \notin s \}$

So, for any set $s$, $s \in W$ iff $s \notin s$. It implies that membership in $W$ is determined purely by non-self-membership. Now, if we let $s = W$, we get $W \in W$ iff $W \notin W$, which is a contradiction. But, there is a fix: we can avoid this paradox if we don't allow $W$ to be a set.
That's okay, but it raises another philosophical question: when is a set a set, and when is it not a set?
ZFC (Zermelo-Fraenkel set theory with Choice), which provides an axiomatic system for set theory, addresses this issue of self-referentiality and implies that no set can be a member of itself:

$\forall x.\ x \notin x$
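As an aside, in Python terms (my own illustration, not part of ZFC), the built-in `set` type also refuses self-membership, though for a mundane reason: mutable sets aren't hashable.

```python
s = set()
try:
    s.add(s)         # a set trying to contain itself
except TypeError as e:
    print(e)         # unhashable type: 'set'
```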
Also, all sets are well-founded under membership:

$\forall x.\ x \neq \emptyset \implies \exists y \in x.\ \forall z \in x.\ z \notin y$

Meaning that every nonempty set $x$ has a member $y$ that shares no members with $x$. Such a $y$ is said to be $\in$-minimal in $x$.
Remember the well-ordering principle? Every nonempty set of nonnegative integers has to have a least element; in other words, there is no infinitely decreasing sequence of nonnegative integers. Well-foundedness is a similar idea: you can't have an infinite sequence of sets, each of which is a member of the previous one.
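Here is that principle in miniature, as a quick Python check (the starting value is arbitrary): a strictly decreasing sequence of nonnegative integers must bottom out, so the loop below always terminates.

```python
n = 10**6
steps = 0
while n > 0:
    n //= 2      # n strictly decreases while it stays positive
    steps += 1
print(steps)     # 20 halvings from 1,000,000 down to 0
```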