Never get caught off guard by AI jargon again. Learn 50+ AI-related terms in this one-stop shop for AI lingo.
The following definitions should help you get a better grasp on the complex world of AI.
Have a specific term that you're looking for? Click one of the options below to skip ahead in this article:
To get a thorough understanding of how many of the following terms work together, be sure to read our AI Fundamentals article.
A system that is designed to perceive its environment, make human-like decisions, and take autonomous actions (i.e., not directly controlled by a person) to achieve a specific goal or set of goals
The key differentiator of AI agents vs. traditional automation tools lies in their independence and cognitive capabilities. Traditional automation tools require explicit instructions for every action, whereas AI agents need only an end goal to begin their operation.
See also: AI Agent Tools
A program or algorithm trained and programmed by a human on specific data to achieve an explicitly defined task
The entire infrastructure and framework required for building and deploying AI
While the AI model is a central component of an AI system, an AI system also includes data acquisition, hardware and software training resources, the user interface, and more.
A set of step-by-step instructions that guide machines in performing tasks and making decisions
The process of ensuring that an AI system’s goals align with human values and interests
In a practical sense, it means that an AI system does what we ask it to (e.g., when asked to provide a blog post about dogs, it generates a blog post about dogs).
In a broader sense, alignment ensures an AI system does not become so overly focused on a goal that the decisions it makes could cause harm.
The act of an AI model identifying unusual or abnormal patterns in data
Using anomaly detection, machine learning models can detect and prevent cyber attacks by identifying unusual network activity or behavior.
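As a toy illustration of the core idea (not a production anomaly detector), a simple statistical approach flags values that sit unusually far from the mean; the function name, threshold, and traffic numbers below are all illustrative:

```python
import statistics

def z_score_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A real system (e.g., for network traffic) would use far richer
    features and models, but the core idea holds: score how
    'unusual' each data point is and flag the extreme ones.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Requests per minute; the final spike is the anomaly.
traffic = [100, 98, 102, 101, 99, 103, 500]
anomalies = z_score_anomalies(traffic)  # flags the 500 spike
```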
A technology that enables different tools and software to communicate with each other — with the goal of automating tasks that would otherwise be time-consuming
When AI systems have human-level intelligence and can perform a wide range of tasks
In the progression of AI intelligence, this is the next stage, one we have not yet reached.
A broad term referring to a machine’s ability to perform tasks that would typically require human intelligence (e.g., speech recognition, language comprehension, and making decisions or predictions based on data)
When AI systems are designed to operate within predefined constraints and parameters to perform specific tasks or sets of tasks (e.g., voice recognition or image classification) — making it highly effective for specific functions but limited in terms of general intelligence or self-awareness
In the progression of AI intelligence, this is where we started, and it is largely where we remain today.
When AI systems surpass human intelligence and can perform tasks beyond human comprehension
In the progression of AI intelligence, this is the furthest and still-hypothetical stage.
The use of technology to monitor, control, and execute tasks and processes automatically with minimal human input
Automation improves business processes by delegating repetitive tasks to machines — streamlining processes by linking data across various AI tools and needing minimal ongoing human intervention once established.
See also: Automation Tools for AI
When the internal workings and decision-making processes of an AI model are not easily understood or explained even by the developers who created it — raising issues related to trust and accountability
The process of generating new AI models through the combination of existing models or algorithms
This technique involves evolving and improving AI systems by merging different components or methodologies to create more advanced and efficient models.
The act of an AI model identifying patterns in the input data and then using those patterns to predict the category (or label) for new, unseen data points; it can be thought of as categorizing data
For example, an AI model trained on images of different types of animals (labeled as "cat," "dog," "bird") could then classify a new image and predict whether it shows a cat, dog, or bird.
Classification is used heavily in e-commerce and powers many tagging systems, which categorize and organize products using keywords or labels.
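To make the idea concrete, here is a minimal sketch using a toy 1-nearest-neighbor classifier over hand-made 2-D feature points (the function name, features, and labels are all illustrative, not from any real model):

```python
import math

def nearest_neighbor_classify(point, labeled_points):
    """Predict the label of the closest labeled training example (1-NN)."""
    best_label, _ = min(
        ((label, math.dist(point, p)) for p, label in labeled_points),
        key=lambda pair: pair[1],
    )
    return best_label

# Toy training data: (feature vector, label) pairs.
training = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"), ((5.0, 5.0), "dog")]
prediction = nearest_neighbor_classify((1.1, 1.0), training)  # -> "cat"
```

The new point lands closest to the "cat" examples, so the model predicts "cat", which is exactly the pattern-matching behavior described above, just in miniature.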
The act of an AI model identifying groupings and patterns in data without a human explicitly defining the criteria
Recommendation engines, such as Amazon and Netflix, commonly leverage clustering to provide personalized product and movie recommendations to users.
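A minimal sketch of the idea, assuming a toy k-means algorithm with hand-picked starting centroids (real clustering libraries handle initialization and convergence for you):

```python
import math

def kmeans(points, centroids, iters=10):
    """Toy k-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            if cluster else c
            for cluster, c in zip(clusters, centroids)
        ]
    return clusters

# Two natural groupings emerge without any labels being provided.
points = [(1, 1), (1.5, 2), (8, 8), (9, 8.5)]
clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
```

No human told the algorithm which group each point belongs to; the grouping falls out of the data itself.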
The act of an AI model analyzing and interpreting visual data (e.g., images or videos) from cameras or sensors
AI models can leverage computer vision to identify defects in products using visual data, enabling efficient quality control and waste reduction in manufacturing.
The capability of an AI system’s hardware (e.g., GPUs and TPUs) to perform complex computations required for training and running the model efficiently
The amount of computing power plays a crucial role in the performance of machine learning algorithms — especially deep learning models, which rely on vast amounts of data and computations.
A way of training an AI model so that it self-critiques and revises its responses to align with a predefined set of rules or principles (i.e., constitution) based on human principles such as avoiding harm, respecting preferences, and providing accurate information
The goal is for the model to be harmless and to self-improve without the need for humans to label or identify harmful outputs. Anthropic’s Claude is an example of a large language model (LLM) that’s powered by constitutional AI.
AI technologies like chatbots that engage in human-like conversations with users
A specified range of tokens (e.g., words) surrounding a target token that defines the scope of information considered when processing or analyzing individual elements within a sequence of data
In natural language processing (NLP), you can think of it as a window through which a computer looks to understand the meaning of words by considering the words nearby. If the window is big, it sees more words at once — helping it understand more complex sentences and larger pieces of information.
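This "window" intuition can be sketched in a few lines; note that real models use a single maximum sequence length rather than a symmetric window around one word, so this is only an illustration:

```python
def context_window(tokens, index, size):
    """Return the tokens 'visible' around position `index`,
    with up to `size` tokens on each side."""
    start = max(0, index - size)
    end = min(len(tokens), index + size + 1)
    return tokens[start:end]

words = "the quick brown fox jumps over the lazy dog".split()
window = context_window(words, index=4, size=2)  # around "jumps"
# -> ['brown', 'fox', 'jumps', 'over', 'the']
```

A bigger `size` means more surrounding words are visible at once, mirroring how a larger context window lets a model consider more of the text.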
Read More: Explore the dynamic history and exciting future of AI in our AI Evolution: Its Past & Future section, tracing its development from foundational concepts to the innovative horizons of tomorrow.
A machine learning approach that trains AI models using multi-layered neural networks (which simulate the decision-making capabilities of the human brain) in order to identify patterns and relationships and make predictions with high accuracy
A type of generative AI in which the AI model aims to learn the underlying structure of a dataset, as well as the patterns and relationships between different pieces of information, by gradually adding noise to the training data and then learning to reverse that process, reconstructing clean data from noise (and, ultimately, generating brand-new data the same way).
This is currently the most popular option for image and video generation. The powerful image generator, Midjourney, operates via a combination of a diffusion model and large language model (LLM).
A “supercharged” vector (i.e., mathematical representations of data) that captures semantic meaning and relationships between data — helping AI models better understand the nuances and overall context of the data
While vectors are suitable for tasks where the focus is on numerical operations and straightforward data representation, embeddings are required for tasks in which the AI model needs to learn complex patterns or understand subtle nuances and relationships between data, such as natural language processing (NLP) and computer vision.
A type of AI model in which the machine performs tasks based on predetermined expertise that has been hard-coded into it by humans to simulate the judgment and behavior of a human expert
A key approach to building responsible AI is model explainability, which refers to making AI models — and how they make certain decisions — transparent and easy to understand.
When training an AI model, giving it a few “shots” or examples in order to improve its performance on a specific task
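For instance, a few-shot text prompt might embed example input/output pairs (the "shots") ahead of the real query; the wording below is purely illustrative:

```python
# Two labeled examples ("shots") teach the model the desired pattern;
# the final line is the actual task the model is asked to complete.
few_shot_prompt = """\
Classify the sentiment of each review as positive or negative.

Review: "I loved this product!" -> positive
Review: "Terrible, broke after one day." -> negative
Review: "Exceeded my expectations." ->"""
```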
Customizing an AI model (after it has undergone initial training to create the foundation model) for specific tasks by making small adjustments — usually consisting of sending additional data to the model and adjusting the final layers or parameters.
Note that fine-tuning runs a risk of “overfitting” the model (i.e., making the model too specialized, resulting in a lack of generalizability and poor performance on new data).
When an AI model is trained on a diverse set of unstructured data to create a general or “base model”, which can then be further fine-tuned (i.e., customized) to excel at a specific task
NOTE: Why unstructured data? Unstructured data (such as raw text from websites, books, and articles, or images from the internet) is more abundant and inherently diverse — providing a wealth of human knowledge, language, and visual information. This diversity is crucial for developing models with a broad understanding and the ability to generalize across a wide range of tasks.
The act of an AI model creating new data or content (e.g., text, images, audio, etc.) by learning the patterns and structure of the data they were trained on
AI models that utilize generation are commonly referred to as “generative AI” or “GenAI”.
A type of AI algorithm that pits two neural networks against each other to improve the quality of the generated data (i.e., the two neural networks are trained simultaneously in a competitive manner) for tasks such as generating realistic images.
The two networks include a generator, which creates synthetic data, and a discriminator, which evaluates whether the data it receives is real or generated.
The networks iterate until the generator becomes adept at producing data that the discriminator struggles to distinguish from real data.
Many AI image generation models combine transformers and GANs to generate images from text descriptions — processing the text input via a transformer, which then conditions the GAN to generate the corresponding image.
A type of large language model (LLM) developed by OpenAI that leverages transformer architecture to process language and generate coherent, human-like text and contextually relevant outputs
GPT excels in a wide range of natural language processing (NLP) tasks, including text generation, question answering, summarization, and language understanding.
A specialized electronic circuit designed to accelerate graphics rendering by rapidly processing large datasets and complex algorithms
The tendency of generative AI models to produce false, unrealistic, or nonsensical outputs while presenting them as if they were plausible
For example, a large language model (LLM) might generate sentences that don't make sense, or an image model might generate unrealistic or distorted images.
The process of generating an image using another image as the prompt
With image-to-image, an AI model will translate the initial image into another image (i.e., the completion/output) while retaining essential features.
Because image-to-image translation enables tasks such as style transfer, colorization, and more, it plays a crucial role in various applications like art, image enhancement, and computer vision.
The process of an AI model applying the information learned during training to generate an actionable result (e.g., generating an image)
The data (e.g., text, images, sensor data, or many other types of relevant information) provided to an AI system to explain a problem, situation, or request
While prompts are a popular form of an input, inputs are fundamental throughout the entire lifecycle of an AI model — from training to deployment and usage.
In automation, inputs are the data required for the automation to work. (For example, inputs of an email-related automation workflow may be the email body, subject line, sender’s email, sent date, tags added, etc.)
Further Insight: Exploring AI Automation
A type of generative AI that processes and generates human language to perform tasks such as language translation, text generation, and answering questions
LLMs (such as OpenAI’s ChatGPT and Google’s Gemini) operate by learning patterns and relationships between words and phrases in order to generate coherent, contextually relevant text outputs based on given prompts or inputs.
Tools and platforms that enable individuals to create AI applications with zero, or very little, coding knowledge
Learn about: Low-Code / No-Code AI Tools
A type of AI model in which the machine learns to perform tasks and optimize performance through experience — without a human explicitly defining the rules
The type of data an AI model consumes or generates (e.g., text, images, audio, etc.)
The ability of AI models to understand and generate content across multiple types of modalities
A branch of machine learning that enables AI models to understand and generate human language in the form of text or voice
NLP is a key function of large language models (LLMs) such as OpenAI’s ChatGPT. A few examples of NLP in use include language translation, sentiment analysis, and chatbots or virtual assistants.
A subset of natural language processing (NLP) that specifically focuses on enabling machines to interpret human language beyond just words to understand the meaning and intent as well
While NLU and NLP are sometimes used interchangeably, they refer to different aspects of language processing, with NLU being a specific component of the broader NLP framework.
A type of machine learning process (specifically, deep learning) that teaches machines to process data in a way that mimics the human brain. Neural networks use adaptive systems of interconnected nodes in a layered structure, enabling machines to understand complex relationships and patterns, learn from their mistakes, and improve.
Software or AI models that are made freely available for anyone to use, modify, and distribute.
Open-source models allow for greater collaboration, transparency, and innovation by enabling developers and researchers to access and build upon existing models and tools — ultimately leading to accelerated progress in the realm of AI as a whole.
The response an AI model generates — whether that be text, an image, or other modality
For automation specifically, the ultimate result of an automation workflow is the output (e.g., a generated report, a completed task, or a decision made by the system).
An output is also referred to as a ‘completion’.
The internal variables (such as weights and biases) within an AI model that are learned during the training process
A performance metric for language models that gauges how well a model predicts words in a sequence
You can think of perplexity as a measure of how ‘confused’ a model is by the text it sees.
Perplexity is often used to evaluate and compare different generative AI models, with a lower perplexity indicating that the model is better at predicting the next word — making it more accurate and reliable for real-world applications.
NOTE: Perplexity is also the name of one of the leading AI tools right now.
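The metric itself is simple to compute once you have the probability the model assigned to each token that actually occurred; the toy probability values below are made up for illustration:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(average negative log-probability) over the
    probabilities the model assigned to the tokens that actually occurred."""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

confident = perplexity([0.9, 0.8, 0.95])  # predicted well -> low perplexity
uncertain = perplexity([0.2, 0.1, 0.3])   # predicted poorly -> high perplexity
```

A model that assigns probability 1.0 to every actual next token has a perplexity of exactly 1, the best possible score.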
The act of an AI model predicting the likelihood of a certain outcome, typically framed as probabilities
Social media ranking algorithms use prediction to estimate the likelihood of a user clicking on a specific ad.
An interaction between a human and an AI model that provides the model with sufficient information to generate the user’s intended output
Prompts can take many forms, such as questions, text, code snippets, images, or videos. The most common prompts today are text prompts, but a prompt can be any modality.
A machine learning approach in which an AI model learns by receiving rewards or penalties based on its actions — akin to playing a game
The model can learn and improve its decision-making in one of two ways: automatically, through reward signals defined by its environment, or through reinforcement learning from human feedback (RLHF), in which humans rate or rank the model’s outputs.
RLHF is particularly useful in tasks where human judgment is essential, such as natural language processing (NLP) for chatbots or text generation.
The ethical and responsible use of AI technology, ensuring that AI systems are designed and implemented in a way that respects human rights, diversity, and privacy
For example, a business using facial recognition technology must ensure that it’s not used for unlawful surveillance and does not discriminate against certain groups of people.
A process that enhances the output of AI models by incorporating references to knowledge bases outside of the model’s training data sources (e.g., a specific field of knowledge or an organization’s internal database) — without the need to retrain the model
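The flow can be sketched in a few lines. The keyword-overlap retriever below is a stand-in for the embedding-similarity search that real RAG systems typically use, and all the function names and documents are illustrative:

```python
import re

def retrieve(query, documents, top_k=1):
    """Toy retriever scoring documents by word overlap with the query;
    production RAG systems usually use embedding similarity instead."""
    def words(text):
        return set(re.findall(r"[a-z0-9]+", text.lower()))
    return sorted(documents,
                  key=lambda d: len(words(query) & words(d)),
                  reverse=True)[:top_k]

def build_rag_prompt(query, documents):
    """Augment the prompt with retrieved context before it reaches the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The support office is open Monday through Friday.",
]
prompt = build_rag_prompt("refund policy", docs)
```

The model never needs retraining: the relevant knowledge is simply injected into the prompt at query time.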
A conversational AI model that searches through a large collection of pre-existing responses to find the best response to a user's query
Unlike generative systems that create responses from scratch, retrieval-based systems select and present responses based on similarity or relevance to the input query or context.
A type of AI model in which the machine performs tasks based on predetermined rules that have been hard-coded into it by humans — resulting in predefined outcomes using "if-then" coding statements
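A rule-based system really is just "if-then" logic; the hypothetical support-ticket router below shows how every outcome is spelled out in advance by a human, with nothing learned from data:

```python
def rule_based_triage(subject):
    """Hard-coded if-then rules written by a human; every outcome
    is predefined, and nothing is learned from data."""
    subject = subject.lower()
    if "refund" in subject:
        return "billing"
    if "password" in subject or "login" in subject:
        return "account"
    return "general"

rule_based_triage("Refund request for order 123")  # -> "billing"
```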
The understanding of underlying concepts and intentions (in text or images, for example) to determine the overall message being conveyed
An AI model that understands semantic meaning can grasp the subtle nuances and complexities of language during text generation and relationships between objects during image generation — enabling models to generate more accurate and contextually relevant outputs.
Semantic meaning is crucial for computer vision and various natural language processing (NLP) tasks (e.g., sentiment analysis, classification, question answering, and language translation).
A hypothetical point in the future where AI systems become capable of designing and improving themselves without human intervention — surpassing human comprehension in a way that leads to rapid and unpredictable societal changes
The act of an AI model recognizing speech and transcribing it into written text
Speech recognition is one of the most widely used consumer use cases of AI today… ahem, Siri.
The ability to refine, modify, rectify, or otherwise direct an AI system to function more closely in accordance with the user's expectations
Because ChatGPT is highly customizable and can be directed to focus on specific topics or styles of conversation, it is well known for its steerability.
A computer vision technique that combines the content of one image (the original image) with the style of another (the reference image), blending them together to create a new image that retains the content of the original image but is rendered in the style of the reference image
A machine learning approach in which an AI model is trained on labeled data, predetermined by humans, to learn the relationship between input and output variables — enabling the model to make predictions for new input data based on the patterns it has learned from the labeled examples
A crucial parameter used to control the randomness of outputs generated by AI models — playing an essential role in achieving desired outcomes
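Mechanically, temperature scales a model's raw scores (logits) before they are turned into probabilities; the logit values below are made up to show the effect:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by temperature before softmax.

    Lower temperature -> sharper, more deterministic distribution;
    higher temperature -> flatter, more random distribution.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.2)  # nearly always picks the top token
hot = softmax_with_temperature(logits, temperature=2.0)   # spreads probability more evenly
```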
Specialized hardware accelerators developed by Google that speed up the training and deployment of machine learning models
The process of generating an image from a text description (i.e., text prompt)
See: AI Tools utilizing Text-to-Image
The process of generating a video from a text description (i.e., text prompt)
Learn More: Text-to-Video AI Tools
The smallest unit of data — representing elements such as words or pixels, depending on the modality — used by AI models to process inputs and generate outputs
For example, in the sentence "Apple is a fruit", each word ("Apple," "is," "a," "fruit") is a token. Both inputs (including prompts) and outputs are broken down into tokens.
Breaking down complex data into these smaller, manageable tokens enables AI models to effectively comprehend and generate content.
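A toy word-level version of this process looks like the following; real models use subword schemes such as byte-pair encoding, so treat this purely as an illustration:

```python
def tokenize(text):
    """Toy word-level tokenizer (real models split into subwords)."""
    return text.split()

def build_vocab(tokens):
    """Map each unique token to an integer ID, in order of first appearance."""
    return {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

tokens = tokenize("Apple is a fruit")
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]  # the numeric form the model actually consumes
```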
A type of neural network that learns to understand context and relationships between different parts of data in order to transform inputs into outputs
Transformers revolutionized the field of AI by allowing AI models to pay attention to different parts of input data simultaneously (as opposed to one element at a time) as it learns. This ability, called “self-attention mechanism”, helps the model develop a deeper understanding of data by learning more complex relationships within the data.
You may have heard the term ‘transformer’ because it is the “T” in GPT (Generative Pre-Trained Transformer), which powers OpenAI’s ChatGPT.
NOTE: While 'transformer' refers to the type of neural network model, the term 'transformer architecture' refers to the overall structure and specific components that make the transformer model function (i.e., how data flows through the model, how information is processed and transformed, and how different parts interact to achieve specific tasks).
A machine learning approach in which an AI model identifies patterns in a dataset without any explicitly labeled outputs
Unsupervised learning is particularly useful in situations where accurately labeling a large volume of diverse, intricate data would be a prohibitively time-consuming and expensive undertaking for a human to perform.
» Explore: Health AI Tools
A mathematical representation of tokens
Each token is assigned its own set of numbers that represent its meaning and context. By converting tokens into numerical vectors, machine learning algorithms can process and analyze data more effectively.
While vectors are suitable for tasks where the focus is on numerical operations and straightforward data representation, they are not suitable for tasks that require the AI model to learn complex patterns or understand subtle nuances and relationships between data, such as natural language processing (NLP) and computer vision. Embeddings are needed for these more complex tasks.
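A standard way to compare two vectors is cosine similarity; the tiny 3-dimensional vectors below are hand-made stand-ins for real embeddings, which have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way
    (similar meaning); values near 0 mean they are unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative vectors only, not produced by a real embedding model.
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]
```

With good embeddings, "cat" ends up closer to "kitten" than to "car", which is exactly the kind of semantic relationship the text above describes.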
When an AI model effectively completes a task without having undergone any task-specific training — thanks to the use of strategically crafted prompts