THE BIRTH OF ARTIFICIAL INTELLIGENCE
The start of the science of artificial intelligence is usually dated to an eight-week summer project at Dartmouth College in Hanover, New Hampshire, in 1956. At the outset, there were two distinct approaches.
PERCEPTRONS
Artificial neural networks (ANNs) automatically extract features and relationships from data such as images, text, or sound, without needing explicitly programmed instructions or structured data. The early versions, called Perceptrons, were championed by Frank Rosenblatt.
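Rosenblatt's perceptron is a single layer of weights feeding a threshold unit, trained by nudging the weights whenever a prediction is wrong. Here is a minimal sketch in Python with NumPy, trained on a toy AND dataset; the data, learning rate, and epoch count are illustrative choices, not from the original.

```python
import numpy as np

# Toy linearly separable dataset: the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)  # weights, one per input
b = 0.0          # bias
lr = 0.1         # learning rate (illustrative choice)

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(w, xi) + b > 0 else 0  # threshold unit
        err = target - pred
        w += lr * err * xi  # Rosenblatt's update: move weights toward the target
        b += lr * err

print(w, b)  # learned weights that separate AND correctly
```

Because the decision boundary is a single line (or hyperplane), a perceptron of this kind can only learn linearly separable functions.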
A book called “Perceptrons”, published in 1969 by Marvin Minsky and Seymour Papert, proved that single-layer perceptrons cannot compute even simple functions such as XOR, and it was widely read as showing that ANNs were a dead end. The alternative approach was pursued by almost everyone for the rest of the century.
SYMBOLIC AI
This alternative approach is Symbolic AI, also known as Good Old-Fashioned AI. It represents problems with symbols and solves them by applying explicit logical rules and procedures such as decision trees. Symbolic AI works well for tasks that require clear, deterministic solutions and reasoning, such as solving mathematical equations or executing complex, rule-based manipulations.
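A minimal sketch of the symbolic style in Python: facts are plain symbols, knowledge is a set of if-then rules, and a forward-chaining loop applies the rules until no new facts appear. The facts and rules here are invented purely for illustration.

```python
# Facts are symbols; knowledge is a set of if-then rules.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]

# Forward chaining: keep firing rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fever', 'has_cough', 'suspect_flu', 'recommend_rest'}
```

Every conclusion can be traced back to the exact rules that produced it, which is why symbolic systems are naturally interpretable.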
Symbolic AI reached its peak with Expert Systems in the 1980s, but the problem of edge cases, the situations that hand-written rules fail to anticipate, was never resolved. The disappointments with both approaches led to two AI winters, periods in which AI researchers struggled to obtain grants to continue their work.
THE FIRST BIG BANG IN AI
This all changed in 2012, with the First Big Bang in AI. After nearly 30 years of trying, Geoff Hinton and his students finally got deep networks trained with backpropagation to work at scale, winning that year's ImageNet image-recognition competition by a wide margin and reviving ANNs under the new name Deep Learning. What made the difference was multiple processing layers, together with far more data and GPU compute: Perceptrons had only one or two layers, whereas deep learning systems can have hundreds.
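To see what the extra layers buy, here is a minimal Python/NumPy sketch of backpropagation through a network with one hidden layer, learning XOR, the classic function a single-layer perceptron cannot represent. The layer sizes, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a single-layer perceptron fails on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer suffices here; deep learning stacks many such layers.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # illustrative learning rate
for step in range(10000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

The same forward/backward pattern scales to hundreds of layers; modern frameworks simply automate the gradient bookkeeping.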
THE SECOND BIG BANG
A Second Big Bang in AI came in 2017, when Google researchers published a paper called “Attention Is All You Need”, which introduced the Transformer architecture that underpins today's Generative AI. This is the approach that has allowed Large Language Models to be developed. Transformer models are a kind of deep learning.
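The heart of the Transformer is scaled dot-product attention, which the paper defines as Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. Here is a minimal NumPy sketch; the token count and dimensions are toy sizes chosen for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how strongly each query matches each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mixture of the values

# Toy sizes: 3 tokens, query/key dimension 4, value dimension 2.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
V = rng.normal(size=(3, 2))
print(attention(Q, K, V).shape)  # (3, 2): one output vector per token
```

Because every token attends to every other token in a single step, attention layers can be computed in parallel across a whole sequence, one reason Transformers scale so well.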
NEURO-SYMBOLIC AI
Some people believe that scaling up deep learning systems with ever more data and compute power will lead to superintelligence, but many others believe that the alternative approach of symbolic AI needs to be reintroduced and combined with deep learning. This combination is known as Neuro-Symbolic AI. The hope is that it will enable AIs to generalize from fewer examples, to reason with complex logic, and to provide interpretable and explainable models.
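One common way to picture the neuro-symbolic idea is a pipeline in which a neural network handles perception and symbolic rules handle the reasoning. The Python sketch below is a deliberately simplified, hypothetical illustration: the stub function stands in for a trained network, and the task is invented.

```python
def neural_classifier(image):
    """Stand-in for a trained deep network mapping an image to a digit.
    Hypothetical stub: a real system would use a learned model here."""
    return {"img_a": 3, "img_b": 4}[image]

def symbolic_check(a, b, claimed_sum):
    """Symbolic component: an exact, human-readable arithmetic rule."""
    return a + b == claimed_sum

# Neural perception feeds symbolic reasoning: the rule is explicit and
# explainable, while the perception is learned from data.
a = neural_classifier("img_a")
b = neural_classifier("img_b")
print(symbolic_check(a, b, 7))  # True, since 3 + 4 == 7
```

The learned component tolerates noisy, unstructured inputs, while the symbolic component contributes exact and explainable reasoning, which is precisely the combination this approach hopes to deliver.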