
Artificial Intelligence | What the F*ck is it?

According to folklore, the Golem of the 16th century was an artificial, human-like creature that acted as a protector of the people in times of persecution. The homunculus, in modern literature attributed to Paracelsus (c. 1530), appears as an artificial, sometimes demonic helper. And according to the Polish science-fiction writer Stanislaw Lem, machines do not necessarily have to produce anything or destroy the entire world. What these figures have in common is that they are artificial machines or human-like creations that make what was previously thought impossible a reality; they are intelligent, they can think, and sometimes they even have feelings. Almost exactly how we imagine the future of artificial intelligence today.

In the summer of 1956 the American scientist John McCarthy convened a workshop at Dartmouth College in Hanover, New Hampshire. Under the project name “Dartmouth Summer Research Project on Artificial Intelligence”, leading mathematicians and scientists met for a six-week workshop to explore the possibilities of thinking machines. The meeting went down in history as the big bang of AI. A new term was born: Artificial Intelligence, and the rest, as they say, is history.

But this ‘history’ started much earlier than that. The mathematics, methods and systematics that are now considered the basis of algorithms had been developed over hundreds of years, and at the time for completely different reasons. All the great names, from Bernoulli to Leibniz to Newton, were involved in the relevant research.

Jakob Bernoulli, in the 17th century, developed a methodology to express probabilities mathematically. He described the Bernoulli numbers in his work on power sums and formulated one of the first mathematical treatments of probability. Much later, long after his death, this methodology would resurface. His own goal was rather simple: he wanted to describe the popular dice games of his day mathematically, and win with the knowledge.
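The core of his insight is easy to demonstrate today: the more often you repeat a chance experiment, the closer the observed frequency gets to the underlying probability. A minimal sketch, not Bernoulli's original method, just a simulation of the idea:

```python
# A minimal sketch of Bernoulli's insight (the law of large numbers):
# the observed frequency of an event approaches its probability as the
# number of trials grows.
import random

def fraction_of_sixes(throws: int) -> float:
    """Simulate throws of a fair die and return the share of sixes."""
    hits = sum(1 for _ in range(throws) if random.randint(1, 6) == 6)
    return hits / throws

for n in (100, 10_000, 1_000_000):
    print(n, round(fraction_of_sixes(n), 4))  # drifts toward 1/6 ≈ 0.1667
```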

Leibniz, on the other hand, developed the differential calculus at the same time as Newton. In their era it was considered a mathematical revolution. Without this groundwork, many of today's computing systems could not exist, and certainly not the neural networks that form the basis of what we call artificial intelligence today. Leibniz, however, was far ahead of his time. He is considered a pioneer of the field because he was one of the first to conceive of thinking machines. He laid the foundations, so to speak, and for the first time set himself apart from the literary and philosophical world that had brought the Golem into the light of imagination.
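The link between differential calculus and today's neural networks is gradient descent: training means nudging parameters in the direction the derivative points. A minimal sketch, fitting a single weight to made-up data (the numbers and learning rate are purely illustrative):

```python
# Gradient descent on a one-parameter model y = w * x: the calculus at the
# heart of neural-network training, reduced to its simplest form.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # toy (x, y) pairs, roughly y = 2x

w = 0.0                   # initial guess for the weight
learning_rate = 0.05

for step in range(200):
    # derivative of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad    # move against the gradient

print(round(w, 3))  # settles near 2.0
```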

What Is Learning? A Small Comparison From Computer Science

When a three-year-old child throws a ball to the ground, the child has to wait until it bounces back before catching it. This behaviour was learnt through play: even though the child does not understand that Newton's laws of motion are responsible, it is testing action and reaction in practice. The experience it has gathered will not let it down. The ball will bounce back; you don't need to know the laws of physics for that.

The first serious chess computer was developed in 1979 by Ken Thompson and Joe Condon at Bell Laboratories in New Jersey. Belle was a hard-wired machine and was able to generate 180,000 positions per second with the computing power available at the time. At the same time, Robert Hyatt was working on his own chess program, Blitz. In 1979 he was offered the opportunity to run it on the Cray-1, the fastest computer in the world at the time, but it still could not beat Belle.

Here we come to an essential point: the better algorithm could not be beaten by sheer computing power behind a weaker program. All of these game computers were rule-based; they were programmed with the rules of the game. Although this may look intelligent, in reality it has nothing to do with what we understand today as ‘Artificial Intelligence’.
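To show what ‘rule-based’ means here, this is a minimal sketch of the kind of search such machines perform: a generic minimax over a tiny hand-coded game tree, not Belle's actual program. The intelligence sits entirely in the rules and evaluation the programmer wrote down; nothing is learned from experience.

```python
# A minimal rule-based game search: minimax over a tiny hand-coded game tree.

def minimax(node, maximizing: bool) -> int:
    """Return the best achievable score for the player to move."""
    if isinstance(node, int):          # leaf: a position score from the
        return node                    # programmer's evaluation function
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A toy game tree: inner lists are positions, integers are evaluated leaves.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3: best outcome against best defence
```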

Another Game, Another Time: Go

In October 2015 AlphaGo, a program from Google DeepMind, beat the European champion Fan Hui at the Chinese board game Go. What must be pointed out here is that AlphaGo was not programmed with the rules and strategies of Go; its programming works differently. It is based on the experience gained from millions of Go positions, on which the computer was able to improve further by playing against itself. AlphaGo predicted the moves of expert players with a staggering accuracy of 57%. The computer, in other words, works from experience stored as data. In the field of Artificial Intelligence we call such a collection of verified data the “ground truth”. The rules of the game, as explicit instructions, became irrelevant: the computer learned the game through observation and by applying this experience. Remember the example of the child with the ball? This is how it all ties together. The child knows nothing about physics, but it instinctively knows how the ball will behave. That is learning.
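The idea of a “ground truth” is easy to make concrete: you take positions whose correct answers are known and measure how often the model's prediction matches them. A minimal sketch with made-up data; the `predict` function and the positions stand in for whatever the real system uses:

```python
# Measuring prediction accuracy against a "ground truth", in miniature.
# ground_truth pairs a position with the move an expert actually played;
# predict() is a stand-in for a trained model.

ground_truth = [("position_a", "Q16"), ("position_b", "D4"), ("position_c", "C3")]

def predict(position: str) -> str:
    """Hypothetical trained model; here just a fixed lookup for illustration."""
    guesses = {"position_a": "Q16", "position_b": "D4", "position_c": "K10"}
    return guesses[position]

correct = sum(1 for pos, move in ground_truth if predict(pos) == move)
print(f"prediction accuracy: {correct / len(ground_truth):.0%}")  # 67%
```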

In 2016, running as a distributed system across a computer network, AlphaGo won four out of five games against Lee Sedol, one of the strongest Go players in the world. The system consisted of 1,202 CPUs and 176 GPUs.

The first iPhone, from 2007, had the same computing power as the entire hardware system with which mankind first reached the moon. Artificial Intelligence is possible because we now have the computing power and we can feed the data into the system. The mathematical foundations have been known for hundreds of years. When we hear terms such as “evolutionary algorithms” today, it is not the algorithm itself that learns by changing; it is the data and its starting position. The rest is the application of known mathematical principles at a spectacular scale. It is nothing more, yet it is also nothing less.
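As a rough illustration of that point, here is a minimal evolutionary algorithm: a toy search over numbers, not any production system. The procedure itself never changes; only the population of candidate solutions it carries from one generation to the next does.

```python
# A minimal evolutionary algorithm: the algorithm is fixed, only the
# population of candidate solutions changes from generation to generation.
import random

def fitness(x: float) -> float:
    """Toy objective: prefer values close to 42."""
    return -abs(x - 42)

population = [random.uniform(0, 100) for _ in range(20)]   # starting position

for generation in range(100):
    # keep the better half, refill with mutated copies of the survivors
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [x + random.gauss(0, 1) for x in random.choices(survivors, k=10)]
    population = survivors + children

print(round(max(population, key=fitness), 2))  # close to 42
```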

Written by Jürgen Schmidt
