
Artificial Intelligence (AI)

Definition - Artificial Intelligence

Artificial = humans make it, aka it doesn’t just magically appear in nature. 

Intelligence = the ability to learn and solve problems, like humans do.

Together, Artificial Intelligence (AI) means taking a brain and putting it inside something that humans make, like a computer or robot. - The Neuron. (n.d.). WTF is AI?. The Neuron. https://www.theneuron.ai/explainer-articles/wtf-is-ai 

 

Artificial Intelligence (AI) is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more. - 

Google Cloud. (n.d.). What is Artificial Intelligence (AI)? https://cloud.google.com/learn/what-is-artificial-intelligence#what-is-artificial-intelligence-ai 

 

Artificial intelligence (AI) encompasses the fields of computer and data science focused on building machines with human intelligence to perform tasks like learning, reasoning, problem-solving, perception, and language understanding. Instead of relying on explicit instructions from a programmer, AI systems learn from data that enables them to handle complex problems and simple repetitive tasks, improving how they respond over time. - 

Michigan Tech. (n.d.). What is AI?. Michigan Tech. https://www.mtu.edu/computing/ai/#:~:text=Artificial%20intelligence%20(AI)%20encompasses%20the,you've%20interacted%20with%20AI

The History of AI by IBM

Humans have dreamed of creating thinking machines since ancient times. Folklore and historical attempts to build programmable devices reflect this long-standing ambition and fiction abounds with the possibilities of intelligent machines, imagining their benefits and dangers. It's no wonder that when OpenAI released the first version of GPT (Generative Pretrained Transformer), it quickly gained widespread attention, marking a significant step toward realizing this ancient dream.

- Coursera. (n.d.). History of AI. Coursera. https://www.coursera.org/articles/history-of-ai 

The Beginnings of AI: 1950s

The 1950s, which saw the following advancements, are considered the birth of AI as a field:

  • Alan Turing: In 1950, English mathematician and computer science pioneer Alan Turing posed the question, “Can machines think?” In his paper, “Computing Machinery and Intelligence,” Turing laid out what has become known as the Turing Test, or imitation game, to determine whether a machine is capable of thinking. The test was based on an adaptation of a Victorian-style game that involved the seclusion of a man and a woman from an interrogator, who had to guess which was which. In Turing’s version, the computer program replaced one of the participants, and the questioner had to determine which was the computer and which was the human. If the interrogator was unable to tell the difference between the machine and the human, the computer would be considered to be thinking, or to possess “artificial intelligence.”
  • Dartmouth conference & John McCarthy: In 1956, two years after the death of Turing, John McCarthy, a professor at Dartmouth College, organized a summer workshop to clarify and develop ideas about thinking machines, choosing the name “artificial intelligence” for the project. The Dartmouth conference, widely considered to be the founding moment of Artificial Intelligence (AI) as a field of research, aimed to find “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

Laying the groundwork: 1960s-1970s

The early excitement that came out of the Dartmouth Conference grew over the next two decades, with early signs of progress coming in the form of a realistic chatbot and other inventions.

  • ELIZA: Created by the MIT computer scientist Joseph Weizenbaum in 1966, ELIZA is widely considered the first chatbot and was intended to simulate therapy by repurposing the answers users gave into questions that prompted further conversation, also known as the Rogerian argument. Weizenbaum believed that a rather rudimentary back-and-forth would prove the simplistic state of machine intelligence. Instead, many users came to believe they were talking to a human professional. In a research paper, Weizenbaum explained, “Some subjects have been very hard to convince that ELIZA…is not human.”
  • Shakey the Robot: Between 1966 and 1972, the Artificial Intelligence Center at the Stanford Research Institute (SRI) developed Shakey the Robot, a mobile robot system equipped with sensors and a TV camera, which it used to navigate different environments. The objective in creating Shakey was “to develop concepts and techniques in artificial intelligence [that enabled] an automaton to function independently in realistic environments,” according to a paper SRI later published. While Shakey’s abilities were rather crude compared to today’s developments, the robot helped advance elements in AI, including “visual analysis, route finding, and object manipulation.”
  • American Association for Artificial Intelligence was founded: After the Dartmouth Conference in the 1950s, AI research began springing up at venerable institutions like MIT, Stanford, and Carnegie Mellon. The instrumental figures behind that work needed opportunities to share information, ideas, and discoveries. To that end, the International Joint Conference on AI was held in 1977 and again in 1979, but a more cohesive society had yet to arise. The American Association for Artificial Intelligence was formed in 1979 to fill that gap. The organization focused on establishing a journal in the field, holding workshops, and planning an annual conference. The society has evolved into the Association for the Advancement of Artificial Intelligence (AAAI) and is “dedicated to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.”
  • AI winter: In 1973, the applied mathematician Sir James Lighthill published a critical report on academic AI research, claiming that researchers had essentially over-promised and under-delivered when it came to the potential intelligence of machines. His condemnation resulted in stark funding cuts. The period between the late 1970s and early 1990s became known as an “AI winter” (a term first used in 1984), referring to the gap between AI expectations and the technology’s shortcomings. 
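ELIZA’s Rogerian technique described above can be sketched as simple keyword pattern-matching: find a phrase in the user’s statement, swap first-person words for second-person ones, and reflect the result back as a question. The following is a minimal illustrative sketch, not Weizenbaum’s original program (which was written in MAD-SLIP), and the rules shown are invented for demonstration:

```python
import re

# Swap first-person words for second-person ones when reflecting a phrase back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few hypothetical keyword rules: a pattern to match in the user's
# statement, and a question template that reuses the matched phrase.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Replace each word with its second-person counterpart, if one exists.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    # Apply the first rule whose pattern appears in the statement.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    # Default reply when no keyword matches, prompting further conversation.
    return "Please tell me more."

print(respond("I need a vacation"))  # Why do you need a vacation?
```

Even this toy version shows why users found ELIZA convincing: the program understands nothing, yet turning a person’s own words into a question reliably keeps the conversation going.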

Early AI excitement quiets: 1980s-1990s

The AI winter that began in the 1970s continued throughout much of the following two decades, despite a brief resurgence in the early 1980s. It wasn’t until the progress of the late 1990s that the field gained more R&D funding to make substantial leaps forward.

  • First driverless car: Ernst Dickmanns, a scientist working in Germany, built the first self-driving car in 1986. Technically a Mercedes van outfitted with a computer system and sensors to read its environment, the vehicle could only drive on roads clear of other cars and carried no passengers.
  • Deep Blue: In 1996, IBM had its computer system Deep Blue—a chess-playing program—compete against then-world chess champion Garry Kasparov in a six-game match. Deep Blue won only one of the six games, but the following year it won the rematch. In fact, it took only 19 moves to win the final game. 

AI growth: 2000-2019

With renewed interest in AI, the field experienced significant growth beginning in 2000, which led to increasingly intelligent machines.

  • Kismet: You can trace the research for Kismet, a “social robot” capable of identifying and simulating human emotions, back to 1997, but the project came to fruition in 2000. Created in MIT’s Artificial Intelligence Laboratory and helmed by Dr. Cynthia Breazeal, Kismet contained sensors, a microphone, and programming that outlined “human emotion processes.” All of this helped the robot read and mimic a range of feelings.
  • NASA Rovers: Mars’ orbit brought it unusually close to Earth in 2003, so NASA took advantage of that navigable distance by sending two rovers—named Spirit and Opportunity—to the red planet, where they landed in early 2004. Both were equipped with AI that helped them traverse Mars’ difficult, rocky terrain and make decisions in real time rather than relying on human assistance to do so. 
  • IBM Watson: Many years after IBM’s Deep Blue program successfully beat the world chess champion, the company created another competitive computer system in 2011 that would go on to compete on the hit US quiz show Jeopardy! In the lead-up to its debut, Watson DeepQA was fed data from encyclopedias and across the internet. Watson was designed to receive natural language questions and respond accordingly, which it used to beat two of the show’s most formidable all-time champions, Ken Jennings and Brad Rutter. 
  • Siri and Alexa: During a presentation about its iPhone product in 2011, Apple showcased a new feature: a virtual assistant named Siri. Three years later, Amazon released its proprietary virtual assistant named Alexa. Both had natural language processing capabilities that could understand a spoken question and respond with an answer.

AI surge: 2020-present

The AI surge in recent years has largely come about thanks to developments in generative AI, or the ability for AI to generate text, images, and videos in response to text prompts. Unlike past systems that were coded to respond to a set inquiry, generative AI models learn from vast amounts of material (documents, photos, and more) gathered from across the internet.

  • OpenAI and GPT-3: The AI research company OpenAI built a generative pre-trained transformer (GPT) that became the architectural foundation for its early language models GPT-1 and GPT-2, which were trained on billions of inputs. Even with that amount of learning, their ability to generate distinctive text responses was limited. Instead, it was the large language model (LLM) GPT-3 that created a growing buzz when it was released in 2020 and signaled a major development in AI. GPT-3 was built with 175 billion parameters, far exceeding the 1.5 billion parameters of GPT-2.
  • DALL-E: An OpenAI creation released in 2021, DALL-E is a text-to-image model. When users prompt DALL-E using natural language text, the program responds by generating realistic, editable images. The first iteration of DALL-E used a version of OpenAI’s GPT-3 model and had 12 billion parameters.
  • ChatGPT released: In 2022, OpenAI released the AI chatbot ChatGPT, which interacted with users in a far more realistic way than previous chatbots thanks to its GPT-3 foundation, which was trained on billions of inputs to improve its natural language processing abilities. Users prompt ChatGPT for different responses, such as help writing code or resumes, beating writer’s block, or conducting research. Unlike previous chatbots, ChatGPT can also ask follow-up questions and recognize inappropriate prompts.

Generative AI grows: 2023 was a milestone year in terms of generative AI. Not only did OpenAI release GPT-4, which again built on its predecessor’s power, but Microsoft integrated ChatGPT into its search engine Bing and Google released its own chatbot, Bard. GPT-4 can generate far more nuanced and creative responses than its predecessors and engage in an increasingly vast array of activities, such as passing the bar exam.

- Mucci, T. (2024, October 21). The history of artificial intelligence. IBM Think. https://www.ibm.com/think/topics/history-of-artificial-intelligence

The History of AI by The Neuron

Check out The Neuron’s humorous brief history of AI

AI Models & Tools

AI models and AI tools are closely related but refer to different components in the AI ecosystem.

An AI model is the underlying algorithm or system that has been trained on data to perform specific tasks. It’s the brain behind the operation. Examples include: 

  1. GPT, developed by OpenAI. Used in ChatGPT, Microsoft Copilot
  2. BERT, developed by Google. Used in Google search
  3. DALL-E, developed by OpenAI. Used in Canva Magic Studio
  4. Gemini, developed by Google DeepMind. Used in Google Gemini. 
  5. Claude, developed by Anthropic. Used in Claude.ai 

 

An AI tool is a software application or interface that uses one or more AI models to help users complete tasks. Examples include: 

  1. ChatGPT, Microsoft Copilot, Google Gemini: For writing assistance, idea generation, and information retrieval.
  2. DALL-E: DALL-E can generate images from textual descriptions.
  3. Otter.ai: Otter.ai is an AI-driven transcription and note-taking tool.
  4. Grammarly: Grammarly is an AI-powered writing assistant.