18 Dec 2015

Will Computers Redefine the Roots of Math?

Quanta Magazine

As he [Vladimir Voevodsky] sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.

“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

The accepted foundation of mathematics is set theory. Like any foundational system, set theory provides a collection of basic concepts and rules, which can be used to construct the rest of mathematics. Set theory has sufficed as a foundation for more than a century, but it can’t readily be translated into a form that computers can use to check proofs.

a recasting of the underpinnings of mathematics

Set Theory and a Paradox

Set theory grew out of an impulse to put mathematics on an entirely rigorous footing — a logical basis even more secure than numbers themselves.

Set theory begins with the set containing nothing — the null set — which is used to define the number zero. The number 1 can then be built by defining a new set with one element — the null set. The number 2 is the set that contains two elements — the null set (0) and the set that contains the null set (1).
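The encoding described here is the standard von Neumann construction; written out (the general successor step is the usual convention rather than something stated in the article):

```latex
\[
0 := \emptyset, \qquad
1 := \{\emptyset\} = \{0\}, \qquad
2 := \{\emptyset, \{\emptyset\}\} = \{0, 1\}, \qquad
n + 1 := n \cup \{n\}.
\]
```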

functions in the plane can be defined as sets of ordered pairs
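For reference, a standard way this bottoms out in sets (not spelled out in the article): an ordered pair is itself encoded as a set, following Kuratowski, and a function is then just the set of its input-output pairs:

```latex
\[
(a, b) := \{\{a\}, \{a, b\}\}, \qquad
f := \{\, (x, f(x)) : x \in \operatorname{dom}(f) \,\}.
\]
```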

“You end up with complicated structures, which are a set of things, which are a set of things, which are a set of things, all the way down to the metal, to the empty set at the bottom,” said Michael Shulman.

An advantage of set theory as a foundational system is that it is very economical — every object mathematicians could possibly want to use is ultimately built from the null set.

On the other hand, it can be tedious to encode complicated mathematical objects as elaborate hierarchies of sets. This limitation becomes problematic when mathematicians want to think about objects that are equivalent or isomorphic in some sense, if not necessarily equal in all respects.

For example, the fraction ½ and the decimal 0.5 represent the same real number but are encoded very differently in terms of sets.
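To make the contrast concrete, here is one standard pair of encodings (my choice of constructions, not the article’s): the rational ½ as an equivalence class of integer pairs, and the real number 0.5 as a Dedekind cut of rationals. Both denote the same value, yet the underlying sets look nothing alike.

```latex
\[
\tfrac{1}{2} := \{\, (a, b) : a, b \in \mathbb{Z},\ b \neq 0,\ 2a = b \,\}, \qquad
0.5 := \{\, q \in \mathbb{Q} : q < \tfrac{1}{2} \,\} \quad \text{(a Dedekind cut)}.
\]
```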

Computer proof assistants such as Coq and Agda, for example, are based on a different formal system called type theory.

Type theory traces back to Bertrand Russell, who in 1901 noted that some sets contain themselves as a member.

Russell defined a new set, the set of all sets that do not contain themselves, asked whether that set contains itself, and showed that either answer produces a paradox.
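In symbols, Russell’s set and the contradiction it forces:

```latex
\[
R := \{\, x : x \notin x \,\}, \qquad
R \in R \iff R \notin R.
\]
```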

Russell’s way out was a type system: a SET is a collection of basic elements, a MEGASET is a collection of SETs, and so on up the hierarchy. From here, the whole system arises in an orderly fashion, and because objects of each type can only collect objects of lower types, no collection can ever contain itself.
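As an aside (my own illustration, not from the article), modern proof assistants keep essentially the same stratification as a hierarchy of universes. In Lean 4, for instance:

```lean
-- Each universe of types lives in the next universe up, never in itself,
-- echoing Russell's SET / MEGASET stratification.
#check Nat      -- Nat : Type
#check Type     -- Type : Type 1
#check Type 1   -- Type 1 : Type 2
-- There is no type of "all types", just as there is no set of all sets.
```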

An important distinction between set theory and type theory lies in the way theorems are treated.

In set theory, a theorem is not itself a set — it’s a statement about sets. By contrast, in some versions of type theory, theorems and sets are on equal footing: a theorem is the type whose elements are all the different ways the theorem can be proved.
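A minimal Lean 4 sketch of this reading (my own example, not the article’s; note that Lean’s Prop treats all proofs of a given proposition as interchangeable, whereas univalent foundations keeps different proofs distinct):

```lean
-- The statement "2 + 2 = 4" is a type; the proof term `rfl` is an element of it.
theorem twoPlusTwo : 2 + 2 = 4 := rfl

-- "A and B implies B and A" is likewise a type, and this function, which
-- swaps the two halves of a proof, is one of its elements.
example {A B : Prop} : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```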

Two sets are isomorphic (equivalent) when their elements can be matched up; the way to show formally that two sets are equivalent is to pair objects from the first set with objects from the second.

A key difference between set theory and type theory: in a traditional set-theoretic proof of the theorem Set A ≅ Set B (where the symbol ≅ means “is isomorphic to”), mathematicians are concerned only with whether at least one such pairing exists. In type theory, Set A ≅ Set B can itself be interpreted as a collection, consisting of all the different ways of demonstrating the isomorphism (for a pair of two-element sets, for instance, there are exactly two).

Mathematicians often want to keep track of the various ways in which two objects (like these two sets) are equivalent, and type theory does this automatically by bundling all the equivalences together into a single type.
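A toy Lean 4 sketch of that bundling (my own code; the structure name Pairing is invented for this illustration, and Lean’s mathlib offers a richer version called Equiv): Pairing A B is itself a type, and its elements are the individual ways of matching A with B; for two two-element types there are exactly two.

```lean
-- A "pairing" between two types, bundled as a structure. `Pairing A B` is
-- itself a type; each of its elements is one concrete way of matching the
-- elements of A with the elements of B.
structure Pairing (A B : Type) where
  toFun    : A → B
  invFun   : B → A
  leftInv  : ∀ a, invFun (toFun a) = a
  rightInv : ∀ b, toFun (invFun b) = b

-- Two distinct elements of `Pairing Bool Bool`: the identity matching and
-- the swap matching, i.e. the two ways of demonstrating the isomorphism.
def idPairing : Pairing Bool Bool :=
  ⟨id, id, fun _ => rfl, fun _ => rfl⟩

def swapPairing : Pairing Bool Bool :=
  ⟨fun b => !b, fun b => !b, fun b => by cases b <;> rfl, fun b => by cases b <;> rfl⟩
```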

Homotopy, a notion from topology, supplies a useful definition of equivalence: spaces are homotopy equivalent if one can be deformed into the other by shrinking or thickening regions, without tearing.

Type theory is well-suited to keeping track of paths: a topologist might think of two points in a space as equivalent whenever there is a path connecting them. The collection of all paths between points x and y can then itself be viewed as a single type, which represents all proofs of the theorem x = y.

paths between paths can be thought of as higher-order relationships between points in a space
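A small Lean 4 illustration of the path reading (my own, not from the article; Lean’s equality type lives in Prop and so lacks the full homotopical structure, but the correspondence reads the same way): a proof of x = y plays the role of a path from x to y, and the familiar path operations are operations on proofs.

```lean
-- Reversing a path corresponds to symmetry of equality.
def pathInv {A : Type} {x y : A} (p : x = y) : y = x :=
  p.symm

-- Composing two paths end to end corresponds to transitivity.
def pathComp {A : Type} {x y z : A} (p : x = y) (q : y = z) : x = z :=
  p.trans q

-- A "path between paths" would be an element of `p = q` for two proofs
-- p q : x = y, a higher-order relationship between the same two points.
```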

Voevodsky … formalize mathematics in a way that would make these higher-order relationships — paths of paths of paths — easy to work with … category theory …

there remained regions of mathematics that categories couldn’t reach

Yet Voevodsky was able to create an interpretation of type theory in the language of infinity-groupoids, an advance that allows mathematicians to reason efficiently about infinity-groupoids without ever having to think of them in terms of sets. This advance ultimately led to the development of univalent foundations.

Toward a New Foundational System

In 1972 the Swedish logician Per Martin-Löf introduced his own version of type theory, inspired by ideas from Automath, a formal language for checking proofs on the computer. His system is now known as Martin-Löf type theory (MLTT).

“This is one of the most magical things, that somehow it happened that these programmers really wanted to formalize [type theory],” Shulman said, “and it turns out they ended up formalizing homotopy theory.”

“What’s appealing and different about [univalent foundations], especially if you start viewing [it] as replacing set theory,” he (Daniel Grayson) said, “is that it appears that ideas from topology come into the very foundation of mathematics.”

From Idea to Action

“The drive to mechanize proof and proof verification doesn’t strongly motivate most mathematicians as far as I can tell,” he said. “I can understand why computer scientists and logicians would be excited, but I think mathematicians are looking for something else.”