The history of artificial intelligence goes back further than you may realize. In fact, you may be surprised to know that it even had a “golden age”. Join us as we take a look back at the history of artificial intelligence, and what it means for us humans moving forward.
Turing’s Electronic Brains
The Turing test, despite controversy, shaped the trajectory of artificial intelligence. It provided a clear target for researchers in this budding field: to create a machine that could convincingly pass the test. Today, while the test’s significance may be debated, its influence remains undeniable.
Imagine participating in the Turing test as an interrogator. The responses you receive convince you that you’re communicating with a human, not a bot. But it turns out you’re interacting with a computer program. Here’s the twist: the program either genuinely comprehends the conversation like a human, or it skilfully simulates such understanding.
These are two distinct possibilities. The claim that the program genuinely understands is by far the stronger of the two, and it demands correspondingly stronger justification. Most would agree that programs capable of simulating understanding are plausible, but are far more hesitant to accept that programs can truly comprehend.
These two goals – creating programs that simulate human understanding (weak artificial intelligence) versus those that genuinely understand (strong artificial intelligence) – are quite different. Developing strong AI is far more ambitious and far more contentious. Yet the distinction and its implications are worth understanding, whether you’re looking to shape strategy, innovate, or simply keep a pulse on technological advances. You may not be building AI yourself, but grasping these nuances shapes how you interpret its progress and potential.
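To make the interrogator’s position concrete, here is a toy sketch in Python of the setup described above. Everything in it is an invented stand-in: the canned replies represent whatever program sits at the other end of the conversation, and nothing about them captures how a genuinely convincing program would work.

```python
import random

# Invented stand-in for whatever program sits at the other end of the
# conversation; it only returns canned, evasive answers.
CANNED_REPLIES = [
    "That's an interesting question. What makes you ask?",
    "I'd rather hear what you think about that first.",
    "Honestly, I haven't given it much thought.",
]

def hidden_respondent(message: str) -> str:
    """Return a reply without revealing whether a human or a machine wrote it."""
    return random.choice(CANNED_REPLIES)

def interrogate(questions):
    """The interrogator sees only text: question in, answer out."""
    return [(q, hidden_respondent(q)) for q in questions]

if __name__ == "__main__":
    transcript = interrogate([
        "What did you have for breakfast?",
        "Write me a short poem about rain.",
    ])
    for question, answer in transcript:
        print(f"Q: {question}\nA: {answer}\n")
    # The test hinges entirely on the interrogator's judgement of the transcript.
    verdict = input("Human or machine? ")
    print("Your verdict:", verdict)
```

The point of the sketch is structural: the interrogator sees nothing but text, so the verdict can rest only on behavior – which is precisely why the gap between simulating understanding and actually possessing it is so hard to settle from the outside.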
The Golden Age
Turing’s 1950 paper “Computing Machinery and Intelligence,” which gave birth to the Turing test, was a pioneering scientific contribution to artificial intelligence. But it stood alone: at the time the discipline of AI did not exist – it had no name, no community of researchers, and only speculative conceptual contributions like the Turing test itself.
Within a decade, by the end of the 1950s, this had changed drastically. A new discipline was born, a community of researchers came together, and the first rudimentary AI systems began to emerge. This period marked the start of an AI boom and the golden age of AI, lasting from roughly 1956 to 1974. AI systems from this era, with names like SHRDLU, STRIPS, and SHAKEY, have become legends in AI history.
However, these systems were built with computers that were significantly limited and slow by today’s standards. There were no software development tools as we know them today. The researchers had to devise innovative programming tricks to get their programs running. Much of this creativity fed into the hacker culture of programming.
Yet by the mid-1970s, AI’s progress had stagnated, barely advancing past the initial experiments. Confidence ebbed among the scientific community and funders alike, who came to see AI, once so full of promise, as a discipline going nowhere.
Knowledge Is Power
During the mid-1970s, artificial intelligence (AI) experienced significant setbacks that caused some to view it as pseudoscience. This period, often referred to as the “AI winter,” was a low point for the field’s reputation. Nevertheless, even as criticism circulated, a new AI approach was garnering interest.
The late 1970s and early 1980s saw the arrival of a fresh cohort of researchers, who believed that previous AI efforts overly focused on general problem-solving methods. The missing ingredient, they suggested, was knowledge. This led to the development of knowledge-based AI systems, also known as expert systems. These systems relied on human expert knowledge to tackle specific, narrow problems.
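To give a sense of what treating knowledge as the missing ingredient meant in practice, here is a minimal, hypothetical sketch in Python of the rule-based approach at the heart of many expert systems. The facts and rules below are invented for illustration; real systems of the era, such as MYCIN, encoded hundreds of rules painstakingly elicited from human experts.

```python
# A toy, forward-chaining rule engine in the spirit of 1980s expert systems.
# The facts and rules are invented for illustration only.

# Each rule: if all of the conditions are known facts, the conclusion is added.
RULES = [
    ({"engine_cranks", "engine_does_not_start"}, "suspect_fuel_or_ignition"),
    ({"suspect_fuel_or_ignition", "fuel_gauge_empty"}, "diagnosis_out_of_fuel"),
    ({"suspect_fuel_or_ignition", "spark_plugs_worn"}, "diagnosis_replace_spark_plugs"),
]

def infer(facts):
    """Repeatedly apply rules until no new facts can be derived (forward chaining)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

if __name__ == "__main__":
    observed = {"engine_cranks", "engine_does_not_start", "fuel_gauge_empty"}
    print(infer(observed))
    # Derives 'suspect_fuel_or_ignition' and then 'diagnosis_out_of_fuel'.
```

Forward chaining of this kind was only one of the inference strategies these systems used, but it captures the core idea: the program’s competence comes from explicitly encoded expert knowledge about a narrow domain, not from general problem-solving ability.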
Expert systems offered proof that AI could outperform humans in certain tasks. Crucially, they also demonstrated AI’s commercial potential, showing for the first time that AI had the capacity to generate revenue. They also made it possible to teach the techniques of knowledge-based AI to a broad audience, producing a generation of graduates ready to apply their practical AI skills.
Expert systems didn’t claim to provide general AI. Instead, they aimed to solve highly specific problems requiring substantial human expertise, especially when such expertise was scarce. For the following decade, knowledge-based expert systems were the primary focus of AI research. By the early 1980s, the AI winter had thawed and a new, more significant AI boom was underway, fueled by considerable industry investment.
Robots and Rationality
In “The Structure of Scientific Revolutions” (1962), Thomas Kuhn argued that scientific progress occasionally undergoes paradigm shifts, especially when existing beliefs can no longer accommodate accumulating contradictions. By the late 1980s, the field of AI experienced a similar shift, moving away from the “boom days” of expert systems. The community faced criticism for promising too much and delivering too little, bringing into question not only the “Knowledge is power” mantra that drove the expert systems era but also the underlying assumptions of symbolic AI, a cornerstone of the field since the 1950s.
Interestingly, the harshest critics were insiders, including the influential roboticist Rodney Brooks. Brooks, an Australian native born in 1954, focused on creating robots capable of performing useful real-world tasks. He grew frustrated with the dominant approach of encoding world knowledge to guide robotic reasoning and decision-making. Joining the MIT faculty in the mid-1980s, he spearheaded an effort to fundamentally rethink AI’s principles.
Deep Breakthroughs
In January 2014, an unprecedented event took place in the UK’s tech industry as Google acquired a small firm named DeepMind for a reported $650 million. This London-based startup, focusing on artificial intelligence, had fewer than 25 employees and seemingly no products, technology, or clear business strategy at the time. This unexpected acquisition, one of the largest in the UK technology sector, sparked global curiosity and headlines, escalating interest in artificial intelligence.
As a result, AI started receiving daily media attention. Governments globally noticed the trend, leading to several national AI initiatives. Tech companies felt the urgency not to fall behind, igniting a wave of investment in the field. While DeepMind’s acquisition was the most high-profile, many others followed, including Uber’s hiring of 40 researchers from Carnegie Mellon University’s robotics lab in 2015. This shift in attitudes towards AI was primarily driven by advancements in machine learning, a subfield of AI.
AI Today
In the second decade of the 21st century, deep learning has catalyzed an AI boom, reminiscent of the Web’s rise in the 90s. Deep learning has found applications in education, science, industry, commerce, agriculture, health care, entertainment, and more. AI’s presence, visible or not, permeates our lives, reshaping our world like computers and the Internet once did.
The AI-enabled feats achieved recently are striking. In 2019, AI and computer vision algorithms made possible the first-ever image of a black hole, using data from eight worldwide radio telescopes. In 2018, Nvidia researchers used a generative adversarial network, a type of AI, to create incredibly realistic, yet entirely fabricated images of people. This capability will likely shape the future of virtual reality by creating convincing alternative realities.
DeepMind’s AlphaFold, announced in late 2018, offers a promising breakthrough in medicine. It uses machine learning to predict protein shapes, a challenging problem pivotal in understanding conditions like Alzheimer’s disease. These examples illustrate the transformative power of AI.
How We Imagine Things Might Go Wrong
AI’s rapid advancements this century have garnered significant press coverage, ranging from balanced reporting to fearmongering. An incident in 2017, where Facebook reportedly shut down two AI systems communicating in an invented language, typifies such skewed narratives. These reports suggested a loss of control, feeding into a “Terminator” scenario where AI poses an existential threat, a view echoed by prominent figures like Elon Musk and Stephen Hawking.
However, the Facebook experiment was harmless, and the episode highlights how overblown fears can overshadow the genuine, pressing issues AI raises. These unfounded anxieties draw attention away from the problems that actually warrant our immediate focus. Rather than fearing an AI apocalypse, we should concentrate on managing the real challenges AI presents in the here and now, even if those challenges make for less sensational headlines than Hollywood blockbusters.
How Things Might Actually Go Wrong
AI, a general-purpose technology, has endless applications but also holds potential for unintended consequences and misuse, as with all such innovations throughout history. Early humans who discovered fire couldn’t foresee the climate impact of burning fossil fuels. Michael Faraday didn’t predict the electric chair when he invented the electric generator. Karl Benz couldn’t have foreseen the millions of annual fatalities caused by automobiles, and Vint Cerf certainly didn’t envision terrorist propaganda spreading via the internet he helped create.
Similarly, AI will undoubtedly have negative global impacts and will be misused, albeit not quite as dramatically as the apocalyptic “Terminator” scenario. These real, upcoming challenges are what we and future generations need to confront and manage effectively.
Conscious Machines?
While deep learning is a critical ingredient of general AI, it is not the only one. In fact, some crucial ingredients are still unknown, as is the ‘recipe’ for combining them into general intelligence. Our current achievements – image recognition, language translation, autonomous cars – don’t add up to holistic intelligence. We’re still grappling with the dilemma Rodney Brooks identified in the 1980s: we possess some components of intelligence but don’t know how to integrate them into a cohesive whole. Furthermore, even our most advanced AI systems have no meaningful understanding of what they are doing. However proficient they are at their narrow tasks, they remain software components optimized for specific functions, lacking genuine comprehension.
Conclusion
The journey of AI, from its inception to the current state, has been marked by a series of ups and downs. It endured periods of crisis, experienced transformations, and sparked revolutionary advancements like deep learning. AI technologies, once a quiet computing backwater, have now become pivotal tools, applied across various sectors, ranging from education to healthcare, and from industry to entertainment. However, sensationalist reporting and misconceptions about AI, often depicting it as a runaway risk, have overshadowed its real challenges and implications.
It’s crucial to understand that like all transformative technologies, AI has its own unintended consequences and potential for misuse. Furthermore, while significant strides have been made in narrow AI, we’re still far from achieving general AI. Many crucial components of intelligence are yet to be discovered, and how to integrate the known ones into a cohesive system remains an unsolved puzzle. As we continue to push the boundaries of AI, we must remain mindful of the true challenges, potential risks, and the limits of our current understanding.