Arnie voice: "I'll be back"

And I thought that a computer can only do what it’s been told to do: it can compute 1+1=2, but not grasp why it is 2.

I have been wondering about this issue for years. In some ways, we have already reached the point where some things can be modeled on a computer that no human really completely understands. Perhaps our notion of what is meant by “understanding” may be about to change.

They can be taught to do the same repetitive steps over and over, very quickly. Solving equations sometimes involves doing the same repetitive steps, over and over… So we can, for instance, teach them to solve certain equations.

Because they can do repetitive work like this much faster than humans, and can keep an almost endless number of variables in their “memory” at the same time, they can solve huuuge equations that would drive a human batty.

For instance, in linear algebra: any human can do the steps to solve a system of N equations; you just have to do all the steps until you’re “done”. But the time to do so increases rapidly, and the chance of human error ramps right up with it. Even more pressing: once a human has correctly solved a system of 100,000 variables, what does it mean? Does the resulting solution make any sense? No, he can’t deduce a “meaning” from it, and he can’t get an intuitive sense of “how” it works, because it simply overwhelms the mind, even though the method always produces the right result.

A computer solves a system like that a lot faster, and with no error (if the steps were programmed correctly), so it can spit out results that make no sense to us, even though this trivial example doesn’t require really “hard” computing. It just helps us solve much “bigger” problems easily.
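To make that concrete, here’s a rough sketch (my own toy illustration in Python, not anyone’s production solver) of the kind of fixed, repetitive procedure I mean: Gaussian elimination. The steps never change; only the number of times you repeat them grows with the size of the system.

```python
# A minimal sketch of the "fixed repetitive steps" idea: Gaussian
# elimination with partial pivoting. The same few steps, repeated,
# solve A.x = b for any size of system; only the row count changes.
def solve(A, b):
    n = len(A)
    # Build the augmented matrix [A | b] so each row carries b along.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from all rows below.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitute from the last row upward.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Two equations: 2x + y = 3 and x + 3y = 5.
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # -> [0.8, 1.4]
```

The exact same loop handles 2 variables or 100,000; only the time taken (and our ability to “see” the answer) changes.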

It gets a lot more complex than this, but the point is this: the moment a complex problem can be reduced to a fixed set of repetitive steps, a computer can do it.

As for the implication: I like to think that computers will one day understand our minds much better than we do. We could end up using them to “upgrade” ourselves. At that point we will be able to “understand” what the computers were telling us all along. Mostly because we became part computer ourselves.

The eventual proof of the four-colour theorem gained some notoriety because of its reliance on repetitive computations done by machine. But one can at least grasp what the computer was checking, albeit on our behalf. I don’t think the computer actually invented the proof.

Still, some felt that such a proof lacks some elegance.

Rigil

I don't think the computer actually invented the proof.

Think of a computer playing chess. It does not “invent” new moves intelligently; it just does a brute-force search over all possible future moves.

Much the same with a proof: it can scan all possible options until it finds a path from where it started to where it needed to end. That is where the human input came in… “We need to start HERE and we want to see if we can find a mathematical path to X”, where that mathematical path could be extremely complex. All we needed to teach it were the possible operations it can perform and what constitutes a desired outcome. It’s kinda like evolution, really (there’s a branch of AI called “genetic algorithms”, more below): it simply tries to do operation A, from there B, …, from there Z. If Z (where Z could be millions of “moves” in the future) does not match some kind of “fitness criterion” (a notion frequently used in AI) or is not a valid result, it discards Z. Once it has tried all permutations of the “branch” under Y, it discards Y and tries another operation on X, and so on.

So one programs some “intelligent guesses” (heuristics, the fitness function, etc…) into it to make the process faster, but the result is that it can “invent” a new proof for something (or confirm that there exists NO path from A to Z). Humans, however, first needed to suspect there’s something there.
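To illustrate (a toy of my own devising, not any real theorem prover): suppose the “operations” are simple arithmetic steps and we want a “path” from a starting number to a target. The sketch below tries every branch to a depth limit and discards dead ends, exactly the try-A-then-B-…-then-Z process described above, with no heuristics at all.

```python
# Toy version of the search described above: find a sequence of allowed
# operations that turns `start` into `target`, trying every branch up to
# a depth limit and discarding branches that dead-end. All names here
# are illustrative, not taken from any real prover.
OPS = {"+3": lambda x: x + 3, "*2": lambda x: x * 2, "-1": lambda x: x - 1}

def find_path(start, target, depth, path=()):
    if start == target:
        return list(path)               # found a valid "proof" (path)
    if depth == 0:
        return None                     # branch exhausted: discard it
    for name, op in OPS.items():
        result = find_path(op(start), target, depth - 1, path + (name,))
        if result is not None:
            return result
    return None                         # no path from here at this depth

print(find_path(2, 9, depth=4))  # -> ['+3', '*2', '-1'], i.e. 2 -> 5 -> 10 -> 9
```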

Then again, in genetic algorithms what you do is define a fitness function once more (itself something you can test iteratively to find the best one), but you generate a set of completely random “proofs”. You then apply the fitness function to eliminate, say, 90% of the “weakest” candidates. You then “breed” the remaining proofs, swapping pieces between them to produce a large number of “offspring”, then make the fittest of those survive and breed, etc… Eventually you find that the proofs evolve into more and more valid candidates, until you find a completely valid proof, or say a new algorithm that takes certain inputs and always gives you the desired outputs. I have written a genetic algorithm myself, and it is fascinating to see in action… millions of generations gradually moving towards the answer you’re looking for, often producing counter-intuitive and surprising, yet valid, results.
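For the curious, here’s roughly what such a thing looks like stripped to its bare bones (a toy of mine, evolving a short string toward a target rather than a proof, and with made-up population sizes and mutation rates, but the select-breed-mutate loop is the same):

```python
import random

# A bare-bones genetic algorithm: evolve random strings toward a target
# by repeated selection, crossover ("breeding"), and mutation.
TARGET = "PROOF"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # Fitness = number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def breed(mum, dad):
    # Crossover: swap pieces between two parents, then mutate a little.
    cut = random.randrange(len(TARGET))
    child = list(mum[:cut] + dad[cut:])
    if random.random() < 0.2:                       # occasional mutation
        child[random.randrange(len(TARGET))] = random.choice(ALPHABET)
    return "".join(child)

# Start from a population of completely random "proofs".
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
generation = 0
while TARGET not in population:
    # Keep the fittest 10%, discard the rest, breed replacements.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [breed(random.choice(survivors),
                                    random.choice(survivors))
                              for _ in range(90)]
    generation += 1
print(f"found {TARGET!r} in generation {generation}")
```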

The problem with this is, of course, that much like we have vestigial bits that are no longer useful to our function, the results of genetic algorithms have the same property. They could have 100 steps where 10 would do. As you say, they lack “elegance”.

I should add: BUT there are other methods that can then be used to eliminate such wastage.

I’m afraid that’s not correct. Such an approach entails an NP problem which would take much longer to exhaust than the age of the universe even if all the world’s computers were clustered. Instead, modern chess-playing algorithms are much more subtle and do limited maximum-likelihood/min-max loss-profit searches just a few moves ahead (about seven to ten, IIRC), readapting when the anticipated scenario doesn’t eventuate. This is the reason Grand Masters can still occasionally beat chess computers.
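Something like this, in toy form (my illustration, using a trivial take-1-or-2-stones game rather than chess, since a real engine’s move generation and position evaluation are far more involved): a plain depth-limited minimax that searches a handful of plies ahead and falls back on a heuristic value at the horizon instead of exhausting the game tree.

```python
# Sketch of depth-limited minimax. The "game": a pile of stones, each
# player removes 1 or 2 per turn, and whoever takes the last stone wins.
def minimax(pile, depth, maximizing):
    if pile == 0:
        # The previous player took the last stone: the side to move lost.
        return -1 if maximizing else 1
    if depth == 0:
        return 0          # horizon reached: heuristic "don't know" value
    scores = [minimax(pile - take, depth - 1, not maximizing)
              for take in (1, 2) if take <= pile]
    return max(scores) if maximizing else min(scores)

# Pick the best move for the side to move, looking only 7 plies ahead.
pile = 10
best = max((take for take in (1, 2) if take <= pile),
           key=lambda take: minimax(pile - take, 7, False))
print(f"take {best} stone(s)")  # -> take 1, leaving 9 (a losing pile for the opponent)
```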

Genetic algorithms (GAs) are typically used in searching for a solution (e.g. an optimum) to an objective function whose solution space is chaotic. Simulated Annealing is another algorithm of this type. (In contrast, where the solution space is well behaved, far more efficient ways of solving an objective function than GAs exist, e.g. Steepest Descent.) The GA is used because it can deliver a good or even excellent solution quite rapidly and is relatively easy to implement, but it is not guaranteed to deliver the very best possible solution because it’s not exhaustive. If you define an objective function in such a way that a solution to it is a symbolic proof of some mathematical conjecture, then a genetic algorithm can indeed deliver better proofs.
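For contrast with a GA, here is Steepest Descent in a few lines on a well-behaved (smooth, convex) objective function of my own choosing, assuming we can compute its gradient; when the solution space cooperates like this, no population or breeding is needed.

```python
# Steepest descent on a well-behaved objective function:
# f(x, y) = (x - 3)^2 + (y + 1)^2, minimised at (3, -1).
def gradient(x, y):
    return (2 * (x - 3), 2 * (y + 1))   # partial derivatives of f

x, y = 0.0, 0.0                          # arbitrary starting point
step = 0.1                               # fixed step size, for simplicity
for _ in range(200):
    gx, gy = gradient(x, y)
    x, y = x - step * gx, y - step * gy  # move against the gradient
print(round(x, 4), round(y, 4))          # converges to (3.0, -1.0)
```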

People tend to think of computers as sophisticated calculators. One should rather view a computer as a symbol manipulation machine because arithmetic is just one particular instance of manipulating symbols. Over the last three or four decades, computers have increasingly assisted the development of mathematics mostly through so-called “experimental mathematics,” for example investigating the behaviour of an iterator in the complex plane (to wit: the Mandelbrot Set).
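The Mandelbrot Set is a convenient illustration of that experimental flavour: iterate z → z² + c and simply observe which points escape. A crude sketch (mine, purely for demonstration):

```python
# Experimental mathematics in miniature: iterate z -> z^2 + c in the
# complex plane. Points whose orbit stays bounded belong to the
# Mandelbrot set; points whose orbit escapes do not.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:           # escaped: the orbit diverges
            return False
    return True                  # survived max_iter steps: likely inside

# Crude character plot of the set over [-2, 1] x [-1, 1].
for im in range(10, -11, -2):
    print("".join("#" if in_mandelbrot(complex(re / 20, im / 10)) else "."
                  for re in range(-40, 21, 2)))
```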

'Luthon64

Funnily enough, I had that exact phrase (age of the universe) in my initial post text, but removed that block for brevity.

Instead, modern chess-playing algorithms are much more subtle and do limited [i]maximum-likelihood/min-max loss-profit[/i] searches just a few moves ahead

I did elaborate with:

So one programs some "intelligent guesses" (heuristics, the fitness function, etc...) into it to make the process faster
Genetic algorithms (GAs) are typically used in searching for a solution (e.g. an optimum) to an objective function whose solution space is chaotic.

“Typically” being the operative word: they can be used anywhere you are looking for a “something” that satisfies certain fitness criteria, provided you can represent that something in a computer and can codify its “fitness”. Whether a GA is the best solution depends on the problem field. Horses for courses, of course. :smiley: