Artificial intelligence (AI) involves programming a computer to perform tasks that normally require human intelligence. It can automate repetitive work, freeing humans to spend more time on higher-impact activities. It can also analyze massive data sets, find patterns and relationships that a human may miss, and identify potential problems before they occur.
The concept of AI dates back to 1950, when Alan Turing envisioned “thinking machines” that could mimic human behavior. While we still have a long way to go before reaching Turing’s vision, a wide variety of AI applications are already making our lives easier and better.
AI has a broad definition that encompasses all software designed to perform tasks associated with human thinking, including machine learning, natural language processing and robotics. In practice, though, the term usually refers to systems that can process information at a scale and speed beyond human capacity and make decisions without step-by-step instruction.
The field's foundations were laid early: Newell and Simon's General Problem Solver (1957) lacked the power to solve complex problems but laid the groundwork for later AI; McCarthy developed Lisp (1958), which became a standard language for AI programming; and MIT professor Joseph Weizenbaum created ELIZA (1966), an early natural language processing program that paved the way for today's chatbots. Many subsequent projects fell short of their lofty goals, and funding dried up, but the field was revitalized in the 1980s with renewed government support and more focused research.
Today, the main driving forces of AI development are threefold: affordable, high-performance computing capability; an abundance of data; and the rapid growth of machine learning models. A McKinsey survey conducted in 2021 found that 56 percent of companies were using AI, up from 50 percent the previous year. This growth has been fueled in part by the availability of cloud-based compute infrastructure, which makes it easy for teams to experiment with different models until they reach a point of satisfactory performance.
One of the key obstacles to AI development is a lack of sufficient, representative data: models trained on skewed datasets can inherit and amplify the biases in that data, which is why many industry leaders worry that AI tools will favor certain viewpoints. However, recent research has shown that these biases can be reduced during training, for example by incorporating more diverse datasets or reweighting under-represented examples.
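One common mitigation is reweighting, where under-represented groups are given more influence during training. A minimal sketch, assuming a hypothetical dataset with a single group label per sample (the group names and counts here are invented for illustration):

```python
from collections import Counter

def balanced_weights(groups):
    """Assign each sample a weight inversely proportional to its
    group's frequency, so every group contributes equal total weight
    during training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count) gives each group a total weight of n / k
    return [n / (k * counts[g]) for g in groups]

# Hypothetical skewed dataset: group "a" is over-represented 3-to-1
groups = ["a", "a", "a", "b"]
weights = balanced_weights(groups)  # "a" samples get 2/3, the "b" sample gets 2.0
```

In practice these weights would be passed to a learning algorithm's sample-weight parameter; the sketch only shows how the imbalance is measured and corrected.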
In terms of practical application, we can divide AI into two categories: weak and strong. Weak AI, or narrow AI, is designed and trained to complete a specific task. Examples include industrial robots and personal assistants such as Siri, Alexa or Cortana. Strong AI, sometimes called artificial general intelligence, refers to a system that could learn and adapt across many domains the way a person does, understanding natural language, recognizing objects in images and interpreting emotions as parts of a single general capability. No such system exists today; strong AI remains a research goal rather than a deployed technology.
One of the main challenges to developing true AI is achieving explainability, which is the ability for a computer to convey its reasoning process. This is a critical issue for industries that must comply with strict regulatory standards, such as financial institutions in the United States. For example, banks must be able to clearly articulate how they came to their decision to approve or deny credit. Currently, most AI tools operate by teasing out subtle correlations between thousands of variables, so their decision-making processes are often difficult to understand. This is known as black box AI.