Artificial Intelligence: Roots and Early Development

Drawing on Melanie Mitchell’s book Artificial Intelligence: A Guide for Thinking Humans (2019), this article traces AI’s long journey from philosophical idea to practical technology. AI has its roots in the history of human thought, starting with fundamental questions about what intelligence is and how the mind works, and it has evolved through the interaction of human ideas, technological advances, and real-world challenges. Despite all that has been achieved, the history of AI shows that the road ahead is full of new challenges, both technical and ethical. By understanding these historical roots, we can better appreciate how AI has shaped, and will continue to shape, our future.

Mitchell’s book delves into the intellectual, philosophical, and technical roots of the field, which have formed the basis of its development over the centuries.

Inspiration from Philosophical Thought

The history of AI cannot be separated from humanity’s effort to understand itself. Philosophy played an important role in forming early concepts of intelligence and information processing. Greek philosophers such as Plato and Aristotle tried to explain how humans think and make decisions through logic.

Aristotle introduced the idea of formal logic, a system of rules for drawing conclusions from premises. This logic became the basis for understanding inference, a process later implemented in modern computer systems. In the 17th century, René Descartes introduced dualism, the view that the mind and body are separate entities; he also argued that the human mind could be analyzed mechanically, an idea that inspired the quest to build intelligent machines. In the same period, the philosophers Thomas Hobbes and Gottfried Wilhelm Leibniz argued that human thought processes could be reduced to mathematical operations (the idea of mechanism). Leibniz even dreamed of creating a machine that could solve any problem through logic.

Mechanical Machines and the Industrial Revolution

From the 17th to the 19th century, technological developments spurred the imagination about the possibility of creating machines that could think, and mechanical devices began to be designed to imitate human functions.

The Mechanical Turk, an automaton designed in the 18th century that appeared to play chess against humans, is an early example of an attempt to imitate human intelligence. Even though the Turk was actually a hoax (secretly operated by a human hidden inside), it showed the great public interest in the idea of intelligent machines. Later, Charles Babbage designed the Analytical Engine, a prototype of the modern computer, while Ada Lovelace recognized that such machines could do more than mere calculation and wrote what is regarded as the first algorithm intended to be executed by a machine.

The Computing Revolution in the 20th Century

The modern foundations of AI began to take shape with the development of computational theory in the early 20th century. The idea that the human mind can be simulated through mathematical algorithms is at the heart of the field. In 1936, Alan Turing introduced the concept of the Turing machine, a theoretical device that can carry out any algorithm that can be defined mathematically, laying the groundwork for the theory of computation. In his 1950 essay “Computing Machinery and Intelligence,” Turing also raised the question of whether machines can think and introduced the Turing Test as a measure of machine intelligence.
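To make the Turing machine idea concrete, here is a minimal sketch in Python (my own illustration, not taken from Mitchell’s book): the machine is defined entirely by a transition table, and the bit-flipping rules below are an invented toy example.

```python
# A minimal Turing machine simulator (illustrative only).
# In this toy version the tape grows only to the right.

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        # Each rule maps (state, symbol) -> (new state, symbol to write, move)
        state, write, move = transitions[(state, symbol)]
        if head == len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Toy rules: flip 0 <-> 1 moving right, then halt at the blank end of the tape.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("010110", flip_bits))  # prints 101001_
```

The point of the sketch is that the “program” is just data (the transition table), which is exactly the insight that makes a universal, general-purpose computer conceivable.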

Then, in the 1940s and 1950s, Norbert Wiener introduced the field of cybernetics, the study of control and communication in machines and living organisms. This concept became a basis for the development of adaptive, self-learning systems in AI.

The Birth of Artificial Intelligence as a Discipline

The term “artificial intelligence” was first coined at the Dartmouth conference in 1956, which is considered the official starting point of the field. Leading researchers, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, gathered to explore how to create machines that could think like humans. Early approaches to AI focused on rule-based systems, in which computers were programmed to follow a strict set of logical instructions. This approach was used to build programs such as the Logic Theorist and the General Problem Solver, which could solve basic mathematical and logic problems.
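To illustrate the rule-based idea, here is a minimal sketch of my own (not the actual Logic Theorist or General Problem Solver): the program repeatedly applies if-then rules to a set of known facts until no new conclusions can be drawn. The facts and rules are invented for the example.

```python
# A tiny forward-chaining rule engine (illustrative sketch only).
# Facts are plain strings; each rule says "if all premises hold, conclude X".

facts = {"socrates is a man"}
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will not live forever"),
]

# Keep applying rules until no new facts are produced.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'socrates is a man', 'socrates is mortal', 'socrates will not live forever'}
```

Everything the system “knows” must be spelled out by a human as an explicit rule, which is precisely the limitation that later motivated learning from data.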

During the 1950s and 1960s, there was great optimism that AI would quickly produce machines capable of performing almost all human tasks. However, technical challenges soon emerged.

Period of Disappointment (AI Winter)

Initial optimism soon faded as researchers realized the limitations of rule-based systems. Machines had a hard time dealing with real-world problems that require intuition, flexibility, and an understanding of context.

Early AI systems had difficulty coping with the exponentially growing complexity of real problems. For example, chess and natural language processing required far more computation than the computers of that era could handle. In addition, computing hardware was not yet powerful enough, and the data needed to train AI systems was still very scarce. As a result of this disappointment, funding for AI research declined for years at a time, creating a period known as the “AI Winter.”

The Rise of AI: Machine Learning and Data

At the end of the 20th century, AI experienced a revival with the emergence of a new approach, namely machine learning. This approach allows computers to learn from data rather than relying entirely on human-defined rules.

The concept of artificial neural networks, first introduced in the 1940s by Warren McCulloch and Walter Pitts, was revived with the development of training algorithms such as backpropagation. Neural networks allow computers to recognize patterns in data and make predictions. The emergence of the internet and the digital revolution then created very large volumes of data (big data), which became the main fuel for training machine learning systems, while increased computing power, especially from graphics processing units (GPUs), enabled the training of larger and more complex AI models.
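As a rough illustration of learning from data (my own sketch, not an example from the book), the snippet below trains a single artificial neuron by gradient descent, the error-correction idea that backpropagation extends to multi-layer networks. The toy dataset and the learning rate are arbitrary choices for the example.

```python
import math
import random

# Toy data: learn the logical OR function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
lr = 0.5  # learning rate (arbitrary for this toy example)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent: nudge the weights to reduce squared error on each example.
for epoch in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (y - target) * y * (1 - y)   # derivative of error w.r.t. pre-activation
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

# The trained neuron's outputs should now be close to the targets.
for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target:", target)
```

Nothing here is hand-written as a rule: the mapping from inputs to outputs is extracted from the data itself, which is the shift that defines machine learning.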

The Modern Era: AI That Changed the World

Today, AI has become an integral part of everyday life, used in applications such as facial recognition, virtual assistants, autonomous cars, and many more. Two developments stand out in modern AI. Deep learning, built on deep neural networks, enables computers to perform tasks previously thought impossible, such as recognizing images with high accuracy and translating languages in real time. Advances in natural language processing (NLP) have made possible models such as GPT (Generative Pre-trained Transformer), which can understand and generate text in a human-like way.
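As a hedged illustration of a GPT-style model in practice (not an example from the book), the snippet below uses the Hugging Face transformers library to generate text with the publicly available gpt2 model; it assumes the transformers package is installed and the model weights can be downloaded.

```python
# Text generation with a GPT-style model via Hugging Face's transformers library.
# Assumes `pip install transformers torch` and a first-run download of gpt2 weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Artificial intelligence began as a philosophical question about",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```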

With the immense power that AI has, questions arise about its ethical and social impact. How AI can be used responsibly for the benefit of humanity is an increasingly important topic of discussion.

 

Written by: lili irahali from various sources