EsoLang Mailing List FAQ

This is the FAQ for the esolang mailing list.

List mechanics

What mailing lists are there?
x: blah
y: blah
z: blah

Turing completeness and the Turing machine

By Brian Raiter (breadbox at
Turing complete languages
A given programming language is said to be Turing-complete if it can be shown that it is computationally equivalent to a Turing machine. That is, any problem that can be solved on a Turing machine using a finite amount of resources (i.e., time and tape), can be solved with the other language using a finite amount of its resources.

Typically, one proves a given language is Turing-complete by providing a recipe for translating any given Turing machine program into an equivalent program in the language in question. Alternately, one can provide a translation scheme from another language, one that has already been proven to be Turing-complete.
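As an illustration of the second technique, here is a hedged Python sketch (all names are illustrative, not from any standard library) that translates brainfuck, a language already known to be Turing-complete, into Python source. A total translation like this, plus an argument that storage can grow without bound, is the skeleton of such a proof:

```python
# Translation scheme sketch: compile each brainfuck command into a line of
# Python. Assumes loop bodies are non-empty; the ',' (input) command is
# omitted for brevity. Unknown characters are treated as comments.

BF_TO_PY = {
    '>': 'p += 1',
    '<': 'p -= 1',
    '+': 'tape[p] = (tape[p] + 1) % 256',
    '-': 'tape[p] = (tape[p] - 1) % 256',
    '.': 'out.append(chr(tape[p]))',
}

def bf_to_python(src):
    """Translate a brainfuck program into equivalent Python source."""
    lines = ['tape, p, out = [0] * 30000, 0, []']
    depth = 0
    for ch in src:
        if ch == '[':
            lines.append('    ' * depth + 'while tape[p]:')
            depth += 1
        elif ch == ']':
            depth -= 1
        elif ch in BF_TO_PY:
            lines.append('    ' * depth + BF_TO_PY[ch])
    return '\n'.join(lines)

# Example: '+++.' increments cell 0 three times, then outputs chr(3).
env = {}
exec(bf_to_python('+++.'), env)
# env['out'] is now [chr(3)]
```

The same recipe works in the other direction: any language that can express a counter, a growable array, and a conditional loop can host a translation like this one.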

Nearly every existing computer language is Turing-complete. About the only computer languages that aren't Turing-complete are a handful of special languages that are capable of LESS than a Turing machine -- usually because some limitation is "hard-wired" into the language's structure or definition. (For example, Hofstadter designed a language called BlooP so that it was impossible for it to have an iteration structure with an arbitrarily high upper bound.)

Because of this fact, the exact definition of a Turing machine isn't particularly important; indeed, any given CS instructor (or textbook) will present a slightly different version of a Turing machine. Basically, a Turing machine is an abstract model of a programmable machine, something that nowadays we would think of as a programming language. It was invented by Alan Turing, in the days before electronic computers, as a theoretical object that gave him a concrete basis for proving theorems about algorithms.

The Turing machine
The Turing machine is usually presented as a read-write head over an arbitrarily long (though finite) length of tape. At each position on the tape is recorded a symbol, which the head can read and/or overwrite with a new symbol. The machine's programming is determined mainly by a set of states. At each tick of the Turing machine's clock, the machine reads the symbol recorded at its current position on the tape, writes a new symbol at that position (or retains the existing symbol), moves the read-write head one position to the left or right (or remains at the current location), and determines the state to be in at the start of the next tick (again, possibly the same as the current one). One state is specially marked as the initial state; the machine begins a run in that state. Any number of states may also be marked as final states; the machine ends its run upon reaching one of those states.
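The description above can be condensed into a few lines of code. This is a minimal sketch, assuming a transition table keyed on (state, symbol); the example machine and all names are illustrative:

```python
# A tiny Turing machine simulator. Transitions map
# (state, symbol) -> (symbol_to_write, move, next_state),
# where move is 'L', 'R', or 'S' (stay). The tape is a dict from
# position to symbol, with '_' standing for blank cells.

def run_tm(transitions, tape, start, finals, max_steps=10_000):
    """Run the machine until a final state (or give up after max_steps)."""
    tape = dict(tape)
    pos, state = 0, start
    for _ in range(max_steps):
        if state in finals:
            return state, tape
        symbol = tape.get(pos, '_')
        write, move, state = transitions[(state, symbol)]
        tape[pos] = write
        pos += {'L': -1, 'R': 1, 'S': 0}[move]
    raise RuntimeError('step limit reached')

# Example machine: flip 0s and 1s until the first blank, then halt.
flipper = {
    ('scan', '0'): ('1', 'R', 'scan'),
    ('scan', '1'): ('0', 'R', 'scan'),
    ('scan', '_'): ('_', 'S', 'done'),
}
state, tape = run_tm(flipper, {0: '1', 1: '0', 2: '1'}, 'scan', {'done'})
# tape positions 0..2 now hold '0', '1', '0'
```

Note how little machinery there is: one table lookup, one write, one move per tick. That austerity is exactly why the model is so convenient for proofs.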
By Panu Kalliokoski (pkalliok at
Turing completeness
What is interesting about Turing-completeness is that it gives the most rigorous definition available of "what can be done by algorithmic computation". It rests on a hypothesis (the Church-Turing thesis), impossible to prove, that there are no problems that can be solved by mechanical computation but cannot be solved by Turing machines. This is impossible to prove because "mechanical computation" is an intuitive concept, not a formal one.

Of course, there are systems that solve problems more efficiently than Turing machines do; Turing-completeness concerns only what can be computed at all, not how fast.

An interesting aside is that there are problems which provably cannot be solved by any Turing machine; the halting problem (deciding whether an arbitrary program eventually terminates) is the classic example. As a result, such problems are suspected not to be solvable by any mechanical computational system.
Sometimes Turing completeness is defined by the kinds of languages a system can recognise (Chomsky's language hierarchy). These are:

  1. Regular languages, which can be recognised by finite automata
  2. Context-free languages, which can be recognised by pushdown (aka stack) automata
  3. Recursively enumerable languages, which can be recognised by Turing machines (the context-sensitive languages, an intermediate class, are recognised by linear bounded automata)
Formalisms that are equivalent to each class:

  1. Finite automata: regular expressions, all systems that have finite state (such as every concrete system), ...
  2. Pushdown automata: Forth without free storage, LR grammars (as accepted by yacc), ...
  3. Turing machines: the lambda calculus, combinatory logic, semi-Thue grammars, and most programming languages

Where can I find out more about...

spec: http://url
compilers: http://url

Good books related to esoteric languages?

Al. Andreou (ee4299 at asked:
Can someone provide a list of books that will be useful to the esolang enthusiast with no theoretical CS background?
Markus Kliegl (markus.kliegl at responded thusly:
Besides looking at many esoteric languages, I'd say looking at "normal" languages and their paradigms and theoretical backgrounds is a lot more revealing and interesting than reading some random CS (theory) books. I personally like Common Lisp, OCaml, and Mercury a lot. Others on this list seem to favor Haskell, Erlang, Scheme, Forth, etc.

Maybe reading about some general concepts like Turing machines (or "state" in general), lambda calculus (or the "functional" paradigm), combinators, etc. would be useful. The other popular subject here is of course compilers.
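For a first taste of the lambda-calculus side of that list, Church numerals, the standard encoding of numbers as functions, can be played with directly in most languages. A hedged sketch in Python (names are illustrative):

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))          # n + 1
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n

def to_int(n):
    """Decode a Church numeral by applying (+1) to 0 n times."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
# to_int(add(two)(three)) evaluates to 5
```

Everything here is built from single-argument functions alone, which is the whole point: arithmetic falls out of pure function application.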

Anyway, here's a list of some of the things I enjoyed reading (some may be rather irrelevant to this list):

Why Functional Programming Matters by John Hughes.

The Principia Discordia
Search Google for the many versions available

Minds, Brains, and Programs by John R. Searle

Elephants Don't Play Chess by Rodney A. Brooks
(I'd recommend reading the last two in that order)

Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I by John McCarthy

Introduction to Functional Programming by John Harrison
(This probably comes closest to your request for a good book.)

The first two chapters of The Emperor's New Mind by Roger Penrose might interest you, too.
Chris Pressey (cpressey at also responded:
I understand that the first edition of the "Dragon" book (Compilers: Principles, Techniques, and Tools by Aho, Sethi, & Ullman) has some pretty esoteric stuff in it. I only have the 2nd edition; it's pretty good too.

Understanding Programming Languages by Ben-Ari

Patterns in Software by Gabriel

Some on this list would probably recommend Goedel, Escher, Bach: An Eternal Golden Braid by Hofstadter, but it was just too geeky/starry-eyed for my taste. I much preferred Fear and Loathing in Las Vegas by Thompson.

Plus there was one library book whose title I can't remember (I think it had the word "Mind" in it), and I'm pretty sure the authors' names were Levine & Levine. Google revealed nothing concrete, but it was a good book overall, covering some *very* early attempts at "programming language" design, like the Principia Mathematica and even earlier stuff.
This book also comes up from time to time, and is available for free as a PDF:

Advanced Programming Language Design by Raphael Finkel

Is Language x Turing Complete?

Rune Berge (rune at asked:
I am currently working on my own minimalistic language and have the following question: Is a working brainfuck interpreter sufficient to prove that the language is Turing complete? I'm pretty sure it is, but it'd be nice to have it confirmed.
Daniel B Cristofani (cristofd at responded thusly:
Yes, for certain values of "working". :) The most direct way to check whether your interpreter's version of brainfuck is known to be Turing-complete is to use it to run the brainfuck utm available at and see what answers you get. There's a simple test case in the second paragraph, and that should test everything except whether the storage is practically inexhaustible.
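For reference, a minimal brainfuck interpreter is only a screenful of code. This Python sketch (the ',' input command is omitted for brevity; all names are illustrative) shows the semantics an implementation needs to get right before attempting the utm test:

```python
# Minimal brainfuck interpreter: 8-bit wrapping cells, a tape that grows
# on demand to the right, and O(1) bracket jumps via a precomputed map.

def bf(src):
    # Pre-match brackets so '[' and ']' can jump directly.
    jumps, stack = {}, []
    for i, ch in enumerate(src):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, p, pc, out = [0], 0, 0, []
    while pc < len(src):
        ch = src[pc]
        if ch == '>':
            p += 1
            if p == len(tape):
                tape.append(0)          # grow the tape on demand
        elif ch == '<':
            p -= 1
        elif ch == '+':
            tape[p] = (tape[p] + 1) % 256
        elif ch == '-':
            tape[p] = (tape[p] - 1) % 256
        elif ch == '.':
            out.append(chr(tape[p]))
        elif ch == '[' and tape[p] == 0:
            pc = jumps[pc]              # skip the loop body
        elif ch == ']' and tape[p] != 0:
            pc = jumps[pc]              # loop back
        pc += 1
    return ''.join(out)

# '++[->++<]>.' doubles cell 0 into cell 1, then prints chr(4).
```

The fixed 8-bit cells and the "grows as needed" tape are the details the test above probes: a strict Turing-completeness argument additionally requires that storage not be bounded by the implementation.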