What is Artificial Intelligence? (Introduction 2023)

On December 17, 1903, the Wright brothers launched the era of human flight when they successfully tested a flying machine that took off under its own power, flew steadily under control, and landed without damage.

“If birds can glide for long periods of time, then… why can't I?” - Orville Wright

The invention of the airplane is the perfect example to illustrate what Artificial Intelligence really is.

Since the dawn of humanity, we have always wanted and succeeded in creating tools to extend our natural biological abilities. Being able to create new technology is one of the defining factors of our species.

For hundreds of thousands of years we watched birds fly, and wanted to do it ourselves. But, as we obviously don't have wings, we had to create new artificial ones.

German “flying man”, Otto Lilienthal, stands suited in his ornithopter at Fliegeberg, Berlin — August 16, 1894

Otto’s ornithopter, an aircraft that flies by flapping wings, was clearly inspired by birds. At that time we didn’t know much about the aerodynamics that make flight possible, so we tried to copy what we saw already working in nature.

Sadly, this didn’t go so well for Otto, as his flying machine stalled mid-flight near Gollenberg, Germany, causing him to fall 50 feet to his death on August 10, 1896.

An Artificial Brain

Fast-forward 126 years from Otto's death and, at the time of this writing (June 27, 2022), we have safe, reliable, long-distance flights across the world. Problem solved.

Now we are taking on a much harder challenge: instead of creating an artificial bird, we are starting to succeed in creating an artificial brain.

Artificial Intelligence Brain

“Any sufficiently advanced technology is indistinguishable from magic” — Arthur C. Clarke

Why would we want such a thing, you may ask?

The answer is pretty straightforward:

“The entirety of who we are and value, all our feelings, thoughts and inventions come from a single source, our brain.”

Recreating the human brain is the ultimate technology. A technology that creates new and better technology.

If we manage to recreate such a thing, it would be the last invention we will ever have to make. If we solve this problem, we indirectly solve everything else.

The Human Brain

Before looking at how we are currently trying to create thinking machines, let's take the same approach Otto Lilienthal took with his artificial bird and try to understand what the equivalent of wings is inside our skull.

“I think, therefore I am” — René Descartes, 1637

Our brain is similar to a computer. Like all computers, it receives inputs, does calculations with them, and produces outputs. It receives information from all the senses (sight, hearing, smell, taste and touch), as well as inner inputs from long- and short-term memory. It then processes these inputs and finally sends outputs back to the body, and to itself, in order to achieve its goals. The brain is the root of human intelligence. If we understand its principles, we will discover the theory of intelligence, just as we discovered the theory of aerodynamics that explains flight.

The human brain is roughly the size of two clenched fists and weighs about 1.5 kilograms. Brain tissue is made up of about 100 billion neurons and 100 trillion connections between them.

The Neuron

The neuron is the main computational unit of the brain. It also has inputs and outputs: its inputs are electrical signals from other neurons, and its outputs are electrical signals to other neurons.

Neurons receive and send electrical signals from and to other neurons

This is the most important discovery about the brain: it is composed of a huge number of neurons connected to each other, which means the brain is essentially a biological neural network. The knowledge of a particular brain is encoded in the structure of its connections, known as the connectome, and also in the strengths of these connections. If the connection between two neurons is strong, they will frequently send messages to each other; if it is weak, they won't interact very much.

The Perceptron

The perceptron builds on the artificial neuron proposed in 1943 by McCulloch and Pitts, known today as the first mathematical model that tried to simulate the processing of information in the brain. The perceptron algorithm itself was invented by Frank Rosenblatt in 1957.

Mark I perceptron machine, the first implementation of the perceptron algorithm, made by Frank Rosenblatt — Cornell Aeronautical Laboratory, 1958

The Mark I perceptron machine was our first attempt at creating an artificial brain. Its main purpose was image recognition: the task of classifying what class of object a particular image contains, something we naturally do with our eyes and the visual cortex inside our brains. It had an array of 400 photocells, randomly connected to artificial neurons. Connections between neurons were made with physical wires, and their strengths were encoded with potentiometers. Learning was performed by changing the strengths of these connections with electric motors.

The Artificial Neuron

Let’s take a look at how this mathematical model of a biological neuron works.

The idea is very simple. The neuron has a set of inputs, which represent its connections to other neurons. Each connection also has a strength; these strengths are what we call the weights of the network, and they are where the learned knowledge of an artificial neural network resides. The transfer function multiplies each input by its weight and sums the results. This sum is passed to an activation function, which checks whether the value exceeds a particular threshold. If it does, the neuron is activated, meaning it sends a signal to other neurons and the cycle repeats; if not, the neuron stays deactivated and does not interact with its neighbours.

This simple mathematical model was the main idea behind the perceptron.
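As an illustration, the neuron described above can be sketched in a few lines of Python. The weights and threshold here are made-up values chosen only for the example:

```python
# A minimal artificial neuron: a weighted sum of inputs (the transfer
# function) passed through a step activation with a threshold.

def neuron(inputs, weights, threshold=0.5):
    # Transfer function: multiply each input by its weight and sum.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Activation function: fire (1) only if the sum exceeds the threshold.
    return 1 if total > threshold else 0

print(neuron([1, 0, 1], [0.4, 0.9, 0.3]))  # 0.4 + 0.3 = 0.7 > 0.5 → 1
print(neuron([0, 0, 1], [0.4, 0.9, 0.3]))  # 0.3 ≤ 0.5 → 0
```

The step activation used here is the historical choice of the perceptron; modern networks replace it with smooth functions, but the input–weight–activation structure is the same.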

The Single Layer Perceptron

The original perceptron was composed of a single layer of trainable artificial neurons: all inputs were connected to all the neurons in this layer, and their outputs formed the output of the network.
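A layer, then, is just several such neurons sharing the same inputs. In this sketch (with illustrative weights), each row of the weight matrix defines one neuron:

```python
# Forward pass of a single layer of threshold neurons:
# each row of weight_rows holds the weights of one neuron.

def layer(inputs, weight_rows, threshold=0.5):
    outputs = []
    for weights in weight_rows:
        total = sum(x * w for x, w in zip(inputs, weights))
        outputs.append(1 if total > threshold else 0)
    return outputs

# Two inputs feeding two neurons.
print(layer([1, 0], [[0.6, 0.2], [0.1, 0.9]]))  # [1, 0]
```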

A single-layer perceptron

Although the perceptron initially seemed promising, it was soon proved that single-layer perceptrons could not be trained to recognise many classes of patterns (famously, no single-layer perceptron can learn the XOR function). This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multi-layer perceptron) had greater processing power than perceptrons with one layer.
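To see this limitation concretely, here is a small sketch of the classic perceptron learning rule (the variant with a bias term; the epochs and learning rate are made-up values). It learns AND, which is linearly separable, but no setting of its weights can reproduce XOR:

```python
# Perceptron learning rule: nudge the weights toward the target whenever
# the prediction is wrong. Converges only for linearly separable data.

def train_perceptron(samples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - pred           # 0 when correct, ±1 when wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    # Return the learned classifier.
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

f = train_perceptron(AND)
print([f(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1] — AND is learned
g = train_perceptron(XOR)
print([g(x1, x2) for (x1, x2), _ in XOR])  # never equals [0, 1, 1, 0]
```

XOR fails not because of the training procedure but because a single threshold unit can only draw one straight line through the input space, and no line separates XOR's classes.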

Machine Learning

Now that we know how to model an artificial brain, how do we make it learn stuff like a human brain does?

The cost function of an artificial neural network

As we now know, an artificial neural network, like our brain, receives inputs and produces outputs. Therefore, we can say that this neural network has learned if, given a set of inputs, it produces the right kind of outputs. For example, if we train a neural network to tell cats from dogs, whenever we give it an image of a cat as input, it should output “cat”, not “dog”, and vice versa.

To achieve this, we have to create a way to quantify how wrong the output of the network was given a particular input, to then let our AI know this and fix its current knowledge to minimise its error, aka learn.

The way to quantify this is to define what is called a Cost Function, also known as a Loss Function. We will go into great detail about many different kinds of cost functions in future articles, but for now we will stay with the general idea that the cost function must give us a number that represents the error our network produced given an input.
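As one common illustration (not the only choice of cost function), the mean squared error averages the squared differences between the network's outputs and the desired outputs, so bigger mistakes cost more:

```python
# Mean squared error: a simple cost function that returns one number
# summarising how wrong the network's predictions were.

def mse_loss(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

print(mse_loss([0.9, 0.2], [1.0, 0.0]))  # close to the targets → ≈ 0.025
print(mse_loss([0.1, 0.8], [1.0, 0.0]))  # far from the targets → ≈ 0.725
```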

This error is then propagated backwards from the output layer to the first layer by the method known as Backpropagation, which computes in which direction, and by how much, the cost function's output changes if we change each weight of the network. Another method, called Gradient Descent, then uses this information to update the weights of the connections between all the neurons of the network: we compute the weight changes that reduce the network's error, apply them, and try again, effectively making our machines learn.
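A minimal sketch of the gradient descent update, using a toy one-weight loss whose gradient we can write by hand (the learning rate and step count are made-up values):

```python
# Gradient descent: repeatedly step against the gradient of the loss,
# so each update moves the weight toward a lower error.

def gradient_descent(grad, w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move opposite to the slope of the loss
    return w

# Toy loss L(w) = (w - 3)^2, whose gradient is dL/dw = 2 * (w - 3).
w = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w, 4))  # converges to the minimum at w = 3
```

In a real network there is one such weight per connection, and backpropagation is what supplies the `grad` values for all of them at once.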

Machine learning is therefore the field of Artificial Intelligence that takes all these building blocks to make an artificial neural network that can learn by itself from a dataset of examples of inputs and desired outputs. We are essentially teaching the AI to behave as we want it to behave.

Deep Learning

This brings us to the present day. The field of AI is currently dominated by what is known as Deep Learning. This is the realization that the larger the neural network — the more layers of neurons it has and the more neurons per layer — and the more examples we show it, the better it performs. After the long so-called “AI winters”, advances in AI are accelerating rapidly, and we are finally seeing humanity's dream of creating thinking machines become a reality.

There are many subfields in Deep Learning, such as Computer Vision, Natural Language Processing and Speech Recognition. All these subfields essentially tackle, for machines, the sensory inputs our brain handles. The holy grail is to create an Artificial General Intelligence, or AGI for short: an AI that can do every task any human can do, effectively achieving human-level intelligence. Progress toward AGI is being made by companies like OpenAI and DeepMind. The road to AGI might be long or short, but we can all agree that it is the single most important technology problem in the history of humanity. If we solve AGI, we could cure diseases, extend the human lifespan significantly, and make interstellar travel possible.

At Theos we focus on solving the democratization of AI. Our AI development platform lets every company and individual in the world benefit from the most powerful technology ever invented. You can try it now for free.
