# An Intuitive Understanding of The Halting Problem

One of the fundamental results of computer science is the Halting Problem, which, roughly stated, says that there is no general way to determine whether a given program with a given input will terminate in finite time or run forever.

It has been formally proved that there is no general solution to the Halting
Problem - that is, there is no algorithm which can analyse *every* program
and tell you for sure whether it halts or not. If you are a programmer,
this sounds like a depressing result. Much of software engineering practice
is devoted to the reduction of bugs - but what chance do you have of proving
that your program returns the right answer, if in general you can’t even
prove that it terminates?
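The proof works by supposing a halting-checker exists and then deriving a contradiction from it. A minimal sketch of that argument (my own illustration, not from the original proof's notation; `halts` is the hypothetical oracle, which is exactly why it cannot be implemented):

```python
def halts(program):
    """Hypothetical oracle: True iff program() eventually terminates.
    The Halting Problem proof shows no such function can be written."""
    raise NotImplementedError

def paradox():
    # If the oracle claims paradox() halts, loop forever;
    # if it claims paradox() loops, halt immediately.
    if halts(paradox):
        while True:
            pass
```

Whatever answer `halts(paradox)` gives, `paradox` does the opposite, so no correct `halts` can exist.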

However, whilst the Halting Problem proof is undeniably valid, I also found
it to be unsatisfying. For one thing, it uses a contrived pathological
example to create a contradiction. There are *some* programs which cannot
be proved to terminate or not, but clearly there are programs which can
(such as the examples at the top of this article). Furthermore, my
experience tells me that infinite loops are very rarely a problem in real
programs. Is there an underlying reason why some programs cannot be
analysed in this way?
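To make that concrete, here are two programs (my own examples) whose behaviour *is* trivially decidable — the impossibility result only says that no single algorithm can decide every case:

```python
def countdown(n):
    # Provably halts for any integer n: the loop variable strictly
    # decreases, so the condition must eventually become false.
    while n > 0:
        n -= 1
    return "done"

def forever():
    # Provably never halts: the loop condition is a constant.
    while True:
        pass
```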

A deeper understanding came to me when I realised that some hard mathematical problems can be expressed as Halting Problems.

# Fermat’s Last Theorem as a Halting Problem

Fermat’s Last Theorem states that there are no positive integers a, b and c which satisfy the equation

a^n + b^n = c^n

for any integer n ≥ 3.

This theorem was first proposed in 1637, but a proof eluded mathematicians for hundreds of years. It was finally solved in 1994, requiring the development of whole new branches of mathematics.

However, this theorem can also be reformulated directly as a Halting
Problem (here in Python^{1}):

```
import itertools, sys

for limit in itertools.count(3):  # count from 3 to infinity
    for a in range(1, limit+1):
        for b in range(1, limit+1):
            for c in range(1, limit+1):
                for n in range(3, limit+1):
                    if a**n + b**n == c**n:
                        sys.exit(0)  # counterexample found: halt
```

This simple program comprises five nested loops, one condition, and no input. But if you can tell by inspection whether it halts or not, then you have also proved (or disproved) Fermat’s Last Theorem!
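For contrast, the same search structure with the exponent fixed at 2 (a variant of my own, not from the original article) *does* provably halt, because Pythagorean triples exist:

```python
import itertools

def first_pythagorean_triple():
    # Identical search shape, but for a^2 + b^2 == c^2.
    # Since solutions exist, this search provably terminates.
    for limit in itertools.count(1):
        for a in range(1, limit + 1):
            for b in range(1, limit + 1):
                for c in range(1, limit + 1):
                    if a**2 + b**2 == c**2:
                        return (a, b, c)

print(first_pythagorean_triple())  # → (3, 4, 5)
```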

The program as written is “unoptimised”, in the sense that it will often re-test the same combinations of a, b, c and n, but that doesn’t matter here. It’s very easy to prove that it always makes progress (each inner loop takes a finite amount of time, bounded by “limit”), and that every combination of a, b, c and n will eventually be tested (in the outer loop iteration where `limit = max(a, b, c, n)`).
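That coverage claim can be checked mechanically for small bounds (a quick sanity check of my own, not part of the original program):

```python
def tuples_tested(max_limit):
    # Re-run the search loops (without the equation test) and record
    # every (a, b, c, n) combination visited.
    seen = set()
    for limit in range(3, max_limit + 1):
        for a in range(1, limit + 1):
            for b in range(1, limit + 1):
                for c in range(1, limit + 1):
                    for n in range(3, limit + 1):
                        seen.add((a, b, c, n))
    return seen

# Every combination with max(a, b, c, n) <= 6 has been visited
# by the time the outer loop finishes limit = 6:
every_combo = {(a, b, c, n)
               for a in range(1, 7) for b in range(1, 7)
               for c in range(1, 7) for n in range(3, 7)}
assert every_combo <= tuples_tested(6)
```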

What we care about is whether it could ever possibly halt, and that’s exactly the same as proving or disproving Fermat’s Last Theorem.

There are other mathematical theorems of the form “there is no X which satisfies Y”, some of which I expect could also be expressed as Halting Problems; so if there were a general algorithm which could examine any program and tell you whether or not it halts, it would solve those problems at a stroke too.
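Goldbach’s conjecture (“every even integer greater than 2 is the sum of two primes”) is a well-known statement of that form; here is a sketch of it as a Halting Problem (my own code, using a naive primality test):

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

def is_goldbach_sum(n):
    # True iff even n can be written as a sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1))

def search_for_counterexample():
    # Halts if and only if Goldbach's conjecture is false.
    n = 4
    while True:
        if not is_goldbach_sum(n):
            return n
        n += 2
```

Running `search_for_counterexample()` would, as far as anyone knows, never return; deciding whether it halts is exactly deciding the conjecture.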

Put another way: a solution to the Halting Problem, if it existed, would necessarily make much of mathematics (and mathematicians) obsolete. And for me, this makes it easier to understand why a solution cannot exist.

^{1} In Python, `range(x, y)` iterates over values from x up to *but not including* y. If Python is not familiar to you, here it is again in C (with declarations and headers added; note that `pow` works in floating point, so for large values this version is illustrative rather than exact):

```
#include <math.h>
#include <stdlib.h>

int main(void) {
    for (long limit = 3; ; limit++)
        for (long a = 1; a <= limit; a++)
            for (long b = 1; b <= limit; b++)
                for (long c = 1; c <= limit; c++)
                    for (long n = 3; n <= limit; n++)
                        if (pow(a, n) + pow(b, n) == pow(c, n))
                            exit(0); /* counterexample found: halt */
}
```
