July 6, 2023

The History of Artificial Intelligence

In this blog:
  • What is artificial intelligence?

  • Key Events in Artificial Intelligence

  • What is the future of artificial intelligence?

The artificial intelligence boom is unfolding and expanding in real time. From research labs like OpenAI to tech juggernauts like Google, most companies are ramping up their artificial intelligence efforts — Big Human included. We recently launched Unhuman, our collection of AI products, and Literally Anything, our text-to-web-app tool.

But AI is not a new idea; this global surge, however massive, is just one more stage in the metamorphosis of machine-modeled human intelligence. AI’s origins can be traced as far back as 380 BC — and philosophers, researchers, analysts, scientists, and engineers have been iterating on it ever since.

As the past frames our present and informs our future, learning AI's history is the key to fully understanding artificial intelligence and how it might evolve. With AI having another moment in the spotlight, we're looking at what artificial intelligence is and the key events that shaped its rise.

What is artificial intelligence?

Artificial intelligence has been categorized as both science and technology. Computer scientist, mathematician, and AI pioneer John McCarthy coined the term “artificial intelligence” in 1955. He described it as “the science and engineering of making intelligent machines,” relating it to “using computers to understand human intelligence.” This definition is the basis for how we characterize AI today: the theory and process of training computers to perform complex tasks that usually require human input, enabling machines to simulate and even improve human capabilities.

Today, artificial intelligence typically refers to a set of technologies: machine learning tools for data collection and analytics, natural language processing and generation, speech recognition, personalized recommendation systems, process automation, and more. Some of AI's better-known contemporary applications are chatbots, self-driving cars, and virtual assistants like Amazon's Alexa and Apple's Siri.

Key Events in Artificial Intelligence

The pursuit of artificial intelligence began nearly 2,400 years ago, around 380 BC, but at the time it was more of an intellectual concept ruminated on by philosophers and theologians. AI was a hypothetical that felt so otherworldly, it appeared as a storyline in mythological folklore.

Like so many foundational ideas, artificial intelligence first surfaced in Ancient Greece. ("Automaton" and "automated" derive from automatos, the Greek word for "acting of one's own will.") As the myth goes, Hephaestus, the god of crafts and metalworking, created a set of automated handmaids out of gold and bestowed them with the knowledge of the gods. Before that, Hephaestus made Talos, a mechanical "robot" assigned to protect Crete from invasions. The start of AI's practical applications came around 250 BC, when inventor and mathematician Ctesibius built one of the world's first automatic systems — a self-regulating water clock.

Developments in math, logic, and science from the 14th to 19th centuries are definitive markers of artificial intelligence's climb. Philosophers and inventors at the time may not have known they were early proponents of robotics and computer science, but they laid the groundwork for future AI advancements. In 1308, theologian Ramon Llull completed Ars magna (or Ars generalis ultima, The Ultimate General Art), detailing a paper-based mechanical system that connects, organizes, and combines information to form new knowledge. A forerunner of AI research, Ars magna offered a framework for combining concepts and analyzing logical arguments in order to draw conclusions. Expanding on Llull's work, philosopher and mathematician Gottfried Leibniz's 1666 paper Dissertatio de arte combinatoria (On the Combinatorial Art) asserted that all new ideas are formed by combining existing concepts. Leibniz also envisioned an alphabet of human thought — a universal system for evaluating and automating knowledge by breaking reasoning down into logical operations.

Physical progress in artificial intelligence didn't stop with Ctesibius in Greece. Over the course of his life in the 12th and early 13th centuries, polymath and engineer Ismail al-Jazari invented over 100 automated devices, including a mechanized wine servant and a water-powered floating orchestra. In 1206, al-Jazari wrote The Book of Knowledge of Ingenious Mechanical Devices; often cited as the first record of programmable automation, it later earned him the title of Father of Robotics. It's also rumored that al-Jazari influenced one of the most prolific inventors in history: Leonardo da Vinci. The Renaissance man was known for his expansive research in automation, going on to design (and possibly build) a mechanical armored knight around 1495.

From the early 1600s to the late 1800s, artificial intelligence and modern technology were given artistic spins in poems, books, and plays. In 1726, Jonathan Swift published Gulliver's Travels, in which a machine called "The Engine" became one of the earliest known literary references to something resembling a computer. Then in 1872, Samuel Butler anonymously published Erewhon, one of the first novels to explore the idea of artificial consciousness. In it, Butler also suggested Charles Darwin's theory of evolution could be applied to machines.

It wasn’t until the 20th century that we started seeing substantial strides in artificial intelligence, setting the foundation for how we view and use it today. The following timeline delves into AI’s most significant developments in the last 120-plus years, from the first programmable computer to the establishment of AI as a formal discipline with modern regulations.

1920-1949: AI tests its capabilities

The pace of technological advancement picked up at the turn of the century. Taking cues from the film and literature of the time, scientists began experimenting with machines and wondering about their capabilities and potential uses.

Important dates:

  • 1921: Rossum’s Universal Robots (R.U.R.), a play by Karel Čapek, premiered in Prague, telling the story of artificial people made in a factory. The play introduced the word “robot,” which soon entered the English language and led others to apply the word and idea to art and research.

  • 1927: The science fiction movie Metropolis was released; it follows a robot girl named Maria as she wreaks havoc in a 2026 dystopia. This was a significant early portrayal of a robot in cinema, later serving as the inspiration for C-3PO in the Star Wars films.

  • 1929: After seeing Rossum’s Universal Robots, biologist Makoto Nishimura built Japan’s first functional robot Gakutensoku (meaning “learning from the laws of nature”). The robot could move its body and even change its facial expressions.

  • 1939: Looking for ways to solve equations more quickly, inventor and physicist John Vincent Atanasoff constructed the first electronic digital computing machine with graduate student Clifford Berry. The Atanasoff-Berry Computer wasn’t programmable, but it could solve systems of up to 29 linear equations, earning Atanasoff the title of Father of the Computer.

  • 1949: When Edmund Berkeley published Giant Brains, or Machines That Think, he detailed how machines are adept at handling large amounts of information, concluding that machines can think (just not in the exact same way humans do).

1950-1959: AI hits the mainstream

The 1950s marked the transformation of the theoretical and imaginary into the empirical and tangible. Scientists began using rigorous research to test their hypotheses about practical applications of artificial intelligence. During this time, Alan Turing, John McCarthy, and Arthur Samuel proved themselves to be AI trailblazers.

Important dates:

  • 1950: Mathematician and logician Alan Turing published “Computing Machinery and Intelligence,” questioning whether machines could exhibit human intelligence. His proposal came in the form of the Imitation Game (better known today as the Turing Test), which evaluates whether a machine can converse indistinguishably from a human. The Turing Test has since become a cornerstone of AI theory and its evolution.

  • 1952: Possibly inspired by Claude Shannon’s 1950 paper “Programming a Computer for Playing Chess,” computer scientist Arthur Samuel created a checkers-playing program that could estimate its probability of winning a game. It is often cited as the first program to learn to play a game on its own.

  • 1955: John McCarthy used “artificial intelligence” in a proposal for a summer computing workshop at Dartmouth College. When the workshop took place in 1956, he was officially credited with creating the term. 

  • 1955: Economist Herbert Simon, researcher Allen Newell, and programmer Cliff Shaw wrote Logic Theorist, which is lauded as the first AI computer program. The program could prove mathematical theorems, simulating a human’s ability to problem-solve.

  • 1958: Reinforcing his status as the Father of Artificial Intelligence, John McCarthy developed Lisp, a programming language that became a mainstay of AI research. Lisp’s popularity waned in the 1990s, but it has seen renewed use in recent years.

  • 1959: Arthur Samuel originated the phrase “machine learning,” defining it as “the field of study that gives computers the ability to learn without explicitly being programmed.”
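Samuel’s definition still captures the core idea: instead of hand-coding a rule, you let the program infer it from data. As a loose, modern-day illustration (a minimal sketch with made-up data, not Samuel’s checkers program), the short Python snippet below “learns” a pass/fail cutoff from labeled examples rather than having a programmer write the threshold in by hand:

```python
# A loose illustration of "learning without being explicitly programmed":
# instead of hard-coding the rule "a score of 60 or above passes," the
# program infers a cutoff from labeled examples. (Hypothetical data,
# not Samuel's checkers program.)

def learn_threshold(examples):
    """Place the cutoff midway between the highest failing score and the
    lowest passing score observed in the training examples."""
    passing = [score for score, passed in examples if passed]
    failing = [score for score, passed in examples if not passed]
    return (max(failing) + min(passing)) / 2

def predict(threshold, score):
    """Apply the learned rule to a new score."""
    return score >= threshold

training_data = [(42, False), (55, False), (61, True), (78, True), (90, True)]
threshold = learn_threshold(training_data)

print(f"learned cutoff: {threshold}")  # learned cutoff: 58.0
print(predict(threshold, 64))          # True
print(predict(threshold, 50))          # False
```

Trivial as it is, the program’s behavior comes from the examples it saw rather than from a rule a programmer typed in, which is exactly the distinction Samuel was drawing.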

1960-1969: AI propels innovation

With the strong groundwork scientists, mathematicians, and programmers established in the 1950s, the 1960s saw accelerated innovation. This decade brought in a slew of new AI research studies, programming languages, educational programs, robots, and even movies.

Important dates:

  • 1961: General Motors began using Unimate, the first industrial robot, on its assembly lines. In his original 1954 patent, inventor George Devol described a “programmed article transfer” machine, a device that could execute stored, step-by-step commands. At General Motors, Unimate was assigned to extract hot metal castings from a die-casting machine, a job that was too hazardous for humans.

  • 1964: Computer scientist Daniel Bobrow built the AI program STUDENT to solve word problems from high school algebra books. Written in the Lisp programming language, STUDENT is considered an early example of natural language processing.

  • 1965: Computer scientist Edward Feigenbaum and molecular biologist Joshua Lederberg began building Dendral, the first “expert system” — a program that could model the thinking, learning, and decision-making of a human specialist. This feat earned Feigenbaum the title of Father of Expert Systems.

  • 1966: Joseph Weizenbaum developed the world’s first “chatterbot,” a technology we now refer to as a “chatbot.” Exploring how people could communicate with machines, the MIT computer scientist’s ELIZA program used early natural language processing, essentially pattern matching and scripted substitution, to simulate human conversation (see the sketch after this list).

  • 1968: Referred to as the Father of Deep Learning, mathematician Alexey Ivakhnenko published a paper on the “Group Method of Data Handling.” In it, Ivakhnenko proposed an inductive, layer-by-layer approach to building predictive models from data. His statistical work is now regarded as an early precursor of deep learning.

  • 1968: When director Stanley Kubrick released 2001: A Space Odyssey, he put sci-fi back in mainstream media. The film features HAL (Heuristically programmed Algorithmic computer), a sentient computer that manages the Discovery One spacecraft’s systems and interacts with its crew. A malfunction turns a friendly HAL hostile, kicking off a debate about the relationship mankind has with technology.
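To make the 1966 ELIZA entry above more concrete, here is a minimal, hypothetical sketch of the pattern-matching approach in Python. It is not Weizenbaum’s original program; it simply pairs a few regular-expression patterns with response templates and reflects the user’s own words back, which is roughly the trick that made ELIZA feel conversational:

```python
import re

# A minimal, hypothetical ELIZA-style responder: each rule pairs a regular
# expression with a response template, and the captured text is reflected
# back at the user. Weizenbaum's real program used a much richer script
# of ranked keywords and pronoun swaps.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]
DEFAULT_REPLY = "Please tell me more."

def respond(user_input: str) -> str:
    """Return the first matching rule's reply, or a neutral default."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT_REPLY

print(respond("I am feeling a bit lost"))  # How long have you been feeling a bit lost?
print(respond("I need a vacation."))       # Why do you need a vacation?
print(respond("The weather is nice."))     # Please tell me more.
```

A fuller version would also swap pronouns (“my” becomes “your”) and rank competing keywords, as Weizenbaum’s script did, but even this sketch shows how far simple pattern matching can go toward simulating conversation.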

1970-1979: AI loses its authority

Artificial intelligence’s focus shifted toward robots and automation in the 1970s. However, innovators struggled to get their projects off the ground as their respective governments did little to fund AI research.

Important dates:

  • 1970: Japanese researchers at Waseda University began building WABOT-1, the first full-scale anthropomorphic robot, completed a few years later. The robot had functional limbs and rudimentary eyes, ears, and a mouth, which it used to communicate with people in Japanese.

  • 1973: Mathematician James Lighthill is often cited as a reason governments reduced their support of AI. In his report to the British Science Research Council, he criticized the field’s progress, arguing that artificial intelligence hadn’t been as impactful as scientists promised.

  • 1979: Hans Moravec (then a Ph.D. student, later a computer scientist) added a camera to mechanical engineer James L. Adams’s 1961 remote-controlled Stanford Cart. This allowed the machine to successfully cross a chair-filled room on its own, making it one of the earliest examples of an autonomous vehicle.

  • 1979: The Association for the Advancement of Artificial Intelligence (AAAI, formerly the American Association for Artificial Intelligence) was founded. The nonprofit scientific organization is dedicated to promoting AI research, widening its scientific and public understanding, and ethically guiding its future developments.

1980-1989: AI revives government funding

In 1980, AAAI’s first conference rekindled interest in artificial intelligence. The 1980s saw breakthroughs in AI research (particularly deep learning and expert systems), prompting governments to renew their support and funding.

Important dates:

  • 1980: Digital Equipment Corporation began using XCON (eXpert CONfigurer) in one of its plants, marking the first commercial deployment of an expert system. John P. McDermott wrote the XCON program in 1978 to help DEC configure orders for its computer systems by automatically choosing components based on customers’ needs.

  • 1981: In one of the largest AI initiatives of the time, the Japanese Ministry of International Trade and Industry committed $850 million (equivalent to roughly $3 billion today) to the Fifth Generation Computer Systems project over the course of 10 years. The goal was to create supercomputers that could use logic programming and knowledge-based processing to reason as humans do.

  • 1984: At its annual conference, AAAI warned of an impending “AI winter,” fearing artificial intelligence developments wouldn’t live up to the mounting hype. The downturn it foreshadowed would indeed dramatically decrease funding and interest.

  • 1986: Aerospace engineer Ernst Dickmanns and his team at Bundeswehr University of Munich unveiled the first self-driving car. Using computers, cameras, and sensors, the Mercedes van could reach up to 55 MPH on empty roads.

  • 1987: When the stock market crashed, makers of specialized Lisp-based hardware could no longer compete with more accessible and affordable computers from companies like Apple and IBM.

1990-1999: AI encounters a downturn

Just as AAAI had cautioned, artificial intelligence faced setbacks in the 1990s. Public and private interest waned as AI’s high costs yielded low returns, but earlier research paved the way for new innovations at the end of the decade, weaving AI into everyday life.

Important dates:

  • 1997: In a highly publicized six-game match, IBM’s Deep Blue computer defeated world chess champion Garry Kasparov. It was the first time a computer program beat a reigning world chess champion under standard tournament conditions.

  • 1997: Dragon Systems released Dragon NaturallySpeaking, the first commercial continuous speech recognition software for general dictation. Compatible with Microsoft’s Windows 95 and Windows NT, it could transcribe about 100 words per minute, and its descendants are still sold today.

  • 1998: The famed Furby could be considered the first successful domestic “robot.” The toy initially spoke its own language (Furbish) and appeared to gradually pick up English words and phrases over time. Some may argue Furby is just a toy, though, given its limited interactive capabilities.

2000-2010: AI expands common use

After the Y2K panic died down, artificial intelligence saw another surge in attention, especially in the media. The decade also brought more routine applications of AI, broadening its future possibilities.

Important dates:

  • 2000: The lead-up to the new century was fraught with concerns about the “Millennium Bug,” an anticipated wave of computer glitches tied to how software stored calendar dates. Because much of the software written in the 1900s recorded the year with only two digits, omitting the leading “19,” experts worried that computers would misread the year 2000 and fail in date-dependent calculations. In the end, after extensive remediation work, most systems rolled over with little difficulty.

  • 2000: Dr. Cynthia Breazeal, then an MIT graduate student, designed Kismet, a robot head that could recognize and recreate human emotions and social cues. An experiment in social robotics and affective computing, Kismet was equipped with cameras, microphones, and expressive motors that approximated human visual, auditory, and expressive abilities.

  • 2001: Steven Spielberg’s sci-fi flick A.I. Artificial Intelligence followed David, an android with human feelings disguised as a child. As David tries to find a place where he belongs and feels loved, the movie examines whether or not humans can coexist with artificial, anthropomorphic beings.

  • 2003: NASA launched two Mars Exploration Rovers, Spirit and Opportunity, to learn more about past water activity on Mars. After landing on the planet in 2004, the rovers navigated the surface semi-autonomously, analyzing rocks and soil and performing scientific experiments. Both far outlived their planned 90-day missions by years.

  • 2006: Along with computer scientists Michele Banko and Michael Cafarella, computer science professor Oren Etzioni added another term to the AI vernacular. “Machine reading” gives computers the ability to “read, understand, reason, and answer questions about unstructured natural language text.”

  • 2006: Companies like Facebook, Twitter, and Netflix began incorporating AI into their advertising, recommendation, and user experience algorithms. These algorithms now drive most, if not all, of the platforms we use today.

  • 2010: Microsoft released the Kinect for Xbox 360, a gaming peripheral with sensors that could track and interpret body movement as game controls. With microphones for speech recognition and voice control, the Kinect also fed the growth of the Internet of Things (IoT), the connected network that lets devices communicate with each other.

2011-Present: AI dominates the day-to-day

Artificial intelligence currently builds on the common-use foundations set in the early 2000s. It’s hard to find a smart device that doesn’t have intelligent functions, intensifying AI’s rise and cementing it as a substantial part of our everyday lives.

Important dates:

  • 2011: Apple’s launch of Siri on the iPhone 4S sparked a trend in virtual assistants, most notably Amazon’s Alexa and Microsoft’s Cortana, both released in 2014. The rough 2011 version of Siri has since been refined and integrated into other Apple products.

  • 2015: Wary of a global AI arms race, over 3,000 people signed an open letter calling on governments worldwide to ban offensive autonomous weapons. Among the signatories were influential scientists and innovators, including Stephen Hawking, Steve Wozniak, and Elon Musk.

  • 2016: Hanson Robotics caused an uproar with its humanoid robot Sophia, whose appearance closely resembles a real human being. Deemed the world’s first “robot citizen” when Saudi Arabia granted her citizenship in 2017, Sophia describes herself as a “human-crafted science fiction character depicting where AI and robotics are heading.”

  • 2017: Facebook’s Artificial Intelligence Research lab taught two chatbots to negotiate with each other. As their conversations continued, the chatbots drifted away from standard English and developed a shorthand of their own without being explicitly instructed to do so.

  • 2020: OpenAI released GPT-3, a large natural language processing model and the technology that would later underpin ChatGPT. One of the most advanced language models to date, GPT-3 can answer philosophical questions, write code, pen essays, and more. (Its predecessor, GPT-2, was trained on text from web pages linked in Reddit posts and was eventually released as an open-source model.) A year later, OpenAI took another giant leap in generative AI with DALL-E, a program that produces realistic art and images based on user prompts.

What is the future of artificial intelligence?

Today’s artificial intelligence landscape is evolving with unprecedented speed. With a market expected to grow to roughly $2 trillion by 2030, AI is changing industries across the board, from eCommerce to healthcare and cybersecurity. While we can only speculate about what’s in store, a few trends will likely define the next decade.

AI is already ingrained in many of our devices, so interactions between people and artificial intelligence will only become more commonplace. The field may be dedicated to building autonomous machines, but for now, they still need a human touch. We can also expect the continued democratization of AI, fueled by the open-source movement, which advocates for the free, widespread use of software and places the power of AI in everyone’s hands. However, this unrelenting boom in artificial intelligence also compounds concerns. AI operated as a largely unregulated industry for most of its existence, but its rapid growth has lawmakers ready to enact accountability and safety policies.

At the end of the day, no one can predict the future of artificial intelligence with certainty, but if its history is any indication, we’re strapping in for quite the rollercoaster.

Looking to step into the world of AI? Send us a message.
