Understanding the Basics: Artificial Intelligence in Plain English
Zaid Khaishagi
Sep 29, 2023
Artificial Intelligence (AI), as defined by Stanford University, is a mechanism designed to emulate human intelligence through computer systems. It is not the humanoid robot one might initially imagine, but rather a collection of technologies that let computers reason, sense, learn, and act.
AI can be further subdivided into two main categories: weak AI and strong AI. Weak AI is designed to perform a specific task. This covers a wide range of tasks, from small to large: anything from having a ghost chase the player in Pac-Man, to varying the water pressure in your dishwasher's nozzle, to predicting whether rain will fall in a given city based on meteorological conditions, and everything in between. It is essential to understand that AI largely relies on algorithms to function, and those algorithms may be simple or complex. In Pac-Man, the ghost may follow the player's most recent path, or it may try to take the shortest path to the player. When changing the water pressure, the dishwasher may select the pressure based on the mode chosen by the user. In the case of precipitation, the algorithm may match current conditions against previously encountered ones to predict whether rain may fall.
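To make the idea of a simple, hand-written algorithm concrete, here is a minimal Python sketch of the ghost-chasing example. The grid coordinates, the move set, and the distance measure are illustrative assumptions, not the actual game's logic.

```python
# A toy "weak AI" rule: the ghost greedily steps toward the player's
# current position. Positions are (x, y) grid coordinates.

def chase_step(ghost, player):
    """Return the neighboring square that moves the ghost closest to the player."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, down, up
    candidates = [(ghost[0] + dx, ghost[1] + dy) for dx, dy in moves]
    # Pick the candidate square with the smallest squared distance to the player.
    return min(candidates,
               key=lambda pos: (pos[0] - player[0]) ** 2 + (pos[1] - player[1]) ** 2)

print(chase_step(ghost=(0, 0), player=(3, 2)))  # -> (1, 0): step to the right
```

Notice that there is no learning here: the behavior is fixed by the rule the programmer wrote, which is typical of weak AI.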
Over the last several decades, a family of algorithms has been developed that can self-adjust and evolve based on new data. This self-adapting process is commonly referred to as Machine Learning (ML). Machine learning is a field of artificial intelligence in which a computer program can learn and adapt to new data without human intervention. These algorithms are typically exposed to a large dataset during their training phase. ML models are only as good as the data they are trained on, so as the size and quality of the dataset increase, so too does the robustness and performance of the model. The trained model is then tested with data it has not seen before to validate its accuracy and help fine-tune it. Once trained, the algorithm can process new, unlabeled data to make predictions or decisions based on its training. An "online" ML model can continue to learn and adapt as it is exposed to new data, adjusting itself to improve its accuracy based on the new information it receives. Although ML is extremely powerful for solving particular problems (such as pattern recognition or finding relationships between data), a model is generally trained on large amounts of data for a specific task. As such, it is considered by many to still be weak AI.
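To illustrate the train-then-validate workflow described above, here is a minimal Python sketch using scikit-learn. The synthetic dataset and the choice of logistic regression are assumptions made purely to show the shape of the process, not a prescription for any particular problem.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Generate a labeled dataset (in practice this would be real, collected data).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out data the model never sees during training, to validate its accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                           # training phase: learn from labeled data
print("test accuracy:", model.score(X_test, y_test))  # validation on unseen data

# Once trained, the model can make predictions on new, unlabeled examples.
print(model.predict(X_test[:5]))
```

In practice, the held-out test score is what indicates whether the model has genuinely learned a pattern or merely memorized its training data.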
Strong AI (also known as AGI, or Artificial General Intelligence) is the term used for AI that can perform a wide variety of tasks. Over the years, several tests have been proposed to evaluate whether an AI qualifies as AGI. These include famous ones such as the Turing Test (in which an AI's conversational behavior is compared against a human's) and the Robot College Student Test (in which an AI attempts to enroll in and pass college courses). Until recently, even the most powerful machine learning algorithms, trained on vast amounts of data, could only crudely emulate human intelligence, which made them easy to distinguish from a real human (the Turing Test). In 2017, a collaboration between researchers at the University of Toronto and the Google Brain team produced a paper titled "Attention Is All You Need," which introduced the architecture underpinning modern Large Language Models (LLMs). These LLMs serve as the basis for ChatGPT and many other modern AI systems.
Do these qualify as AGI? Modern LLMs can pass the Turing Test quite easily, posing as a human in a variety of contexts. Furthermore, modern LLMs can pass college courses without any additional studying for the class in particular, thereby passing the Robot College Student Test. Does this mean that AGI has been achieved and we are living in a new era? Not necessarily; there is widespread debate over whether these recent advances qualify as true Artificial General Intelligence.
How can these emerging technologies benefit us? As an overview, ML systems built for a specific task can be made incredibly precise. Applications include facial and object recognition, modeling the motion of celestial bodies, weather prediction, high-frequency trading, market forecasting, and many more. In marketing and e-commerce, ML can leverage a user's history and viewing patterns to refine recommendations, offering a customized online experience based on their browsing habits.
Furthermore, lending institutions are now putting the predictive power of ML to use, allowing for more precise risk assessment and credit modeling. These are just a few examples of how the application of ML across different sectors will form the basis for an economic and informational paradigm shift.
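To ground the recommendation example above, here is a toy Python sketch of history-based recommendation. The users, the items, and the similarity measure are hypothetical simplifications; production recommenders use far richer signals and models.

```python
# Suggest items that the most similar other user has viewed but this user has not.
histories = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob":   {"laptop", "monitor", "keyboard"},
    "carol": {"blender", "toaster"},
}

def recommend(user, histories):
    """Recommend items seen by the most similar other user but not yet by `user`."""
    seen = histories[user]

    # Jaccard similarity: overlap of two viewing histories relative to their union.
    def similarity(other):
        return len(seen & histories[other]) / len(seen | histories[other])

    neighbor = max((u for u in histories if u != user), key=similarity)
    return histories[neighbor] - seen

print(recommend("alice", histories))  # -> {'monitor'}
```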
What are the limitations of AI and ML? Based on what we discussed above, we know that the quality and quantity of the training dataset are paramount to producing a high-performance machine learning system. As such, ML models are only as good as the data available to them, and they can make errors when presented with data unlike anything they have encountered before.
Lastly, AI software is increasingly being built with security in mind, meaning that developers of these emerging technologies will be compelled to design systems that effectively shield against malicious cyber actors.
Hopefully, by now, it is easier for you to see that AI isn't the self-thinking, world-conquering robot often portrayed in fiction. Despite its advancements, AI's capabilities depend on its input data, and its progress depends on how much data and computing power are available. It is essential to acknowledge these inherent limitations as we explore the potential of, and navigate the challenges posed by, these emerging technologies.
A few short notes:
- ML's capabilities are not 100% bound by its input data; it CAN have emergent properties, and there have been numerous documented instances of this. It CAN perform actions it has never seen before; reinforcement learning, sketched after these notes, is a clear example. That said, an ML model IS entirely dependent on its input data in the sense that its quality and performance rise with the quantity and quality of that data, and fall in the same way.
- Machine learning is not recent; much of its underlying mathematics dates back to the 1900s.
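Here is the reinforcement learning sketch mentioned in the notes above: a tiny two-armed bandit in Python in which an agent learns the value of each action purely from interaction rather than from a fixed labeled dataset. The payout probabilities and the epsilon-greedy rule are illustrative assumptions.

```python
import random

true_payouts = [0.3, 0.7]   # hidden reward probability of each action (unknown to the agent)
estimates = [0.0, 0.0]      # the agent's learned estimate of each action's value
counts = [0, 0]
epsilon = 0.1               # fraction of the time the agent explores at random

for step in range(10_000):
    # Explore occasionally; otherwise exploit the action that currently looks best.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = estimates.index(max(estimates))
    reward = 1 if random.random() < true_payouts[action] else 0
    counts[action] += 1
    # Update the running average reward for the chosen action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(v, 2) for v in estimates])  # roughly [0.3, 0.7]
```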
For the curious:
Here are some approachable, beginner-friendly resources related to this blog post: podcast episodes, documentaries, and articles on AI.
- The AI Podcast - Nvidia
- The Artificial Intelligence Podcast - Dr. Tony Huang
- The Machine Learning Podcast
- Hack the Box