“Language is fossil poetry which is constantly being worked over for the uses of speech.”1
“All living languages change. They have to. Languages have no existence apart from the people who use them. And because people are changing all the time, their language changes too, to keep up with them. The only languages that don't change are dead ones.”2
It's a common misconception that language models, like those behind ChatGPT or Claude, get better the more you use them. In reality, they're more like snapshots frozen in time: they only change when they are retrained and updated. Letting a language model evolve continuously based on user input sounds appealing, but it's risky (see “Tay,” Microsoft's AI chatbot, which had to be taken down just 24 hours after launch for tweeting offensive content). This “online learning” can cause a model's performance to degrade over time or, worse, lead to unexpected and harmful outputs.
Words, on the other hand, evolve naturally and without oversight. Over time, their meanings shift to reflect new contexts and usage patterns, often in ways we barely notice. Consider the word “awful,” which once conveyed awe and reverence but now describes something negative or unpleasant. While this kind of evolution is harmless in everyday language, ambiguity in critical terms can have far more serious consequences. Imagine, for instance, if people began referring to all weight loss drugs as GLP-1s—this could lead to confusion, misinformed decisions, and potential health risks, as not all medications function in the same way or carry the same effects.
The language of AI is no different. As the field advances at breakneck speed, terminology struggles to keep up. Words are stretched, repurposed, and diluted, sometimes for clarity, but often at the expense of precision. Misaligned language fuels inflated expectations, misinformed investments, and disillusionment when reality falls short. Calling a basic automation script an “AI agent,” for example, can cause leaders to overestimate its capabilities, undermining trust and future adoption.
Given the interdisciplinary nature of AI—spanning fields from statistics to philosophy—terminology often carries different meanings depending on context. Add in media hype and branding shorthand (think “Just ChatGPT it”), and the result is a vocabulary that feels familiar but is increasingly misaligned with reality. While language change is inevitable, maintaining a shared understanding is critical for AI’s responsible development and deployment.
With that in mind, we should ask ourselves whether the words we’re using match those used by our interlocutors and, if not, work to reconcile the ambiguity. Here are ten AI terms that have shifted in meaning, ranked from highest (1) to lowest (10) by the potential negative impact of misalignment:
1. Ethics. The term “ethics” in AI initially referred to rigorous moral frameworks that guide the development and deployment of AI systems. However, it often becomes a vague catch-all for “positive” AI initiatives, reducing complex moral considerations to mere buzzwords. This misuse can result in superficial promises rather than genuine ethical commitments and actionable policies. When terms like “ethical AI” become so broad that they mean anything, they lose their ability to foster accountability and meaningful oversight (this evolution of language has, itself, given rise to a new term: “ethics-washing”). That said, the widespread use of the word “ethics” in AI does reflect a growing recognition of the critical need for ethical frameworks to guide the field’s development, which is, itself, a positive place to start.
Colloquial usage: "AI is ethical as long as it's used for good purposes."
Precise usage: "Our company upholds ethical AI by implementing [specific framework(s)], ensuring transparency in data usage, and through regular model audits."
2. Bias. Traditionally, “bias” in AI referred to systematic errors or prejudices in data and algorithms that influence outcomes. Over time, the term has broadened to encompass any unfairness or unexpected behavior in AI outputs, often without a clear understanding of its source. This expanded usage risks diluting technical discussions about how bias arises in models and the methods needed to address it effectively. Like “ethics,” the widespread focus on bias reflects genuine concern about AI’s potential to entrench or exacerbate inequities. However, addressing this issue requires a deeper understanding of the different types of bias, their underlying mechanisms, and the specific contexts in which they operate (one way of making a specific type of bias measurable is sketched after the usage examples below).
Colloquial usage: "The AI is biased."
Precise usage: "Bias in AI comes from systemic issues in data or algorithms, often reflecting historical inequalities or imbalanced datasets that need careful mitigation."
3. Governance. In its traditional sense, governance refers to the systems and processes through which organizations or societies make decisions and enforce rules. In the context of AI, however, the term has taken on a much broader meaning, often encompassing all activities related to strategizing, directing, and overseeing AI use. This expansion is largely a response to the widespread disruption AI is causing across organizations and the urgency leaders feel to establish control. While this broad application is understandable, it can obscure the more precise aspects of AI governance, such as ethical oversight, policy formulation, and technical safeguards. As the definition expands to cover an ever-wider range of mission-critical tasks, organizations may assume governance is fully addressed while critical gaps remain, leaving untouched the complex, nuanced challenges the term originally captured.
Colloquial usage: "We have established AI governance by creating a cross-functional team to oversee our AI strategy."
Precise usage: "We have implemented ethical guidelines and technical controls for our AI systems as part of our broader AI governance strategy."
4. Agents. Historically, an AI agent has been defined as an entity that gathers information from its environment through sensors and takes actions to achieve specific goals. Early AI agents were often physical robots that operated based on preprogrammed or learned logic, executing if-then style rules to navigate their environments. However, modern AI systems have evolved significantly, leveraging flexible, probabilistic models that introduce a different risk profile compared to their rule-based predecessors. By the traditional definition, nearly all AI applications in the public consciousness today could be considered agents. Yet in contemporary AI discourse, the term has come to describe systems that make independent decisions or take autonomous actions in real-world scenarios. This evolving definition often overstates the autonomy and sophistication of AI systems, fueling inflated expectations about their capabilities. This is especially relevant as 2025 is being hailed as the “Year of Agentic AI,” with expectations for AI agent deployment potentially outpacing what the technology can realistically deliver.
Moreover, this framing often obfuscates the risks inherent in these systems. Even when an AI agent is highly accurate in its individual actions, compounding uncertainty can significantly reduce the likelihood of achieving the intended outcome. For example, consider an agent performing a series of three actions (or even making a series of three internal calculations to perform one action), each with a 95% likelihood of success. While each action alone may seem reliable, the compounded probability of all three succeeding is only about 86% (simple decimal multiplication: 0.95 × 0.95 × 0.95 ≈ 0.857). As agents become more complex and operate across multiple decisions or environments, the likelihood of deviation from intended outcomes increases, potentially leading to unintended consequences. This underscores the importance of tempered expectations and robust safeguards when deploying these systems, particularly in critical scenarios (a short sketch of this arithmetic follows the usage examples below).
Colloquial usage: "AI agents can think and make decisions just like humans."
Precise usage: "AI agents operate within predefined rules and parameters, using data to automate tasks with varying levels of autonomy and human oversight."
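The compounding effect described above is easy to verify. The sketch below uses the illustrative 95% per-step figure and assumes the steps succeed or fail independently, which real agent pipelines may not; it is meant to show the arithmetic, not to model any particular system.

```python
# Illustrative arithmetic only: how per-step reliability compounds across a multi-step agent.
def chain_success_probability(per_step_success: float, n_steps: int) -> float:
    """Probability that every step in an independent n-step chain succeeds."""
    return per_step_success ** n_steps

for n in (1, 3, 5, 10, 20):
    print(f"{n:>2} steps at 95% each -> {chain_success_probability(0.95, n):.1%} end to end")
# Three steps land at roughly 85.7%; twenty steps drop to roughly 35.8%.
```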
5. Explainability. Explainability once referred to concrete methods and frameworks designed to make AI models and their predictions understandable, such as SHAP or LIME, which help identify exactly which aspects of an input most influenced the final prediction (for instance, the model categorized an image as a dog rather than a cat because it ‘saw’ rounded ears). Today, it is often used more loosely to mean simply making AI “easier to understand,” without specifying the rigorous techniques behind it. This vagueness can dilute the importance of robust interpretability methods and muddy the conversation around transparency in AI decision-making. As with “ethics” and “bias,” the term reflects a broader desire for AI systems to be easily understood, but this popular usage often overlooks the techniques necessary to achieve meaningful explainability (a brief sketch of one such technique follows the usage examples below).
Colloquial usage: "AI explainability means anyone can understand how it works."
Precise usage: "AI explainability relies on techniques like SHAP or LIME, which provide insights into how AI models arrive at their decisions in a meaningful and interpretable way."
6. ML vs AI. Machine learning is a subset of AI that focuses specifically on algorithms and statistical models that enable a system to learn from data without explicit programming. However, in everyday conversation, “ML” and “AI” are often used interchangeably, masking the broader scope of AI that includes reasoning, planning, and knowledge representation beyond just learning from data. This conflation can create misunderstandings about capabilities and expectations (“in both directions,” as our Director of Engineering, Reed Coke, put it. “No need to take an F1 car for groceries”), particularly when different approaches and skill sets are required for various AI tasks.
Colloquial usage: "Machine learning and AI are basically the same thing."
Precise usage: "Machine learning is a subset of AI that focuses on teaching models to learn from data and improve over time without being explicitly programmed. AI is the broader field, encompassing a range of capabilities like reasoning and planning."
7. GenAI vs AI. Generative AI (GenAI) refers to AI systems capable of creating content such as text, images, or music, and represents a specialized domain within the broader field of AI. While GenAI has garnered significant attention and excitement due to its creative outputs (e.g., many chatbots and image generators can be thought of as “GenAI”), it is not synonymous with all AI capabilities. Equating GenAI with AI at large can lead to skewed perceptions, where people might assume that all AI systems are as versatile or creative as their generative counterparts, which is not the case. Conversely, people might assume that GenAI systems automatically have the same robustness and risk profiles as their AI predecessors that have been impacting the world for decades, which is also not the case.
Colloquial usage: "GenAI can help me do so many things, like automating tasks and generating images.”
Precise usage: “GenAI can create new content, while AI overall includes numerous other capabilities, from analytics to decision-support.”
8. "AIs" vs Models. The plural term “AIs” is often casually used to refer to various artificial intelligence systems, but this usage can be misleading and implies a level of trust that these systems have not earned. Instead of implying multiple sentient agents, we usually mean a collection of distinct models or algorithms that perform specific tasks under the broader umbrella of AI. Misusing “AIs” may inadvertently anthropomorphize these tools, obscuring the fact that they are engineered systems with defined parameters rather than autonomous beings.
Colloquial usage: "We have multiple AIs working on this problem."
Precise usage: ""What we often call 'AIs' are actually collections of models designed to perform specific tasks, without true independent thought."
9. ChatGPT. ChatGPT, a conversational AI product powered by a variety of models (e.g., GPT-4o) developed by OpenAI, has become so prominent that its name is often used as shorthand for AI in general. This widespread use blurs the line between a single product and the broader field of artificial intelligence, fostering misconceptions about what AI can and cannot do. The conflation runs so deep that the term “chat” is increasingly used to describe interactions with any AI interface, even though the chat-based interaction popularized by ChatGPT is just one method. In reality, AI can also be accessed through voice commands, dashboards, augmented reality, virtual reality, and other interfaces, each with its own capabilities and limitations.
Colloquial: "ChatGPT is the best AI out there."
Precise: "ChatGPT is a commercially available product built on large language model technology, which is just one application of AI among many."
10. Algorithm. Historically, the term “algorithm” referred to a precise, step-by-step procedure for solving problems or performing calculations, an idea that dates back centuries. Today, however, it has become a catch-all term for nearly any automated process, including complex AI models. This broad usage can obscure the crucial distinction between algorithms and models, complicating discussions around AI functionality and system design. In AI, an algorithm is the set of rules or instructions that guides how data is processed. A model, on the other hand, is the result of applying algorithms (such as stochastic gradient descent or Q-learning) to data to create a learned representation of patterns and relationships that allow for predictions or classifications. While algorithms define the process, models embody the knowledge gained from that process. Put another way: algorithm(s) + data + training = model (a minimal code sketch of this relationship follows the usage examples below).
Colloquial usage: "The AI uses an algorithm to make decisions."
Precise usage: "Algorithms are structured sets of instructions used to train models or guide AI systems in processing data and making decisions, ranging from simple rule-based logic to complex machine learning computations."
These ten terms illustrate how the language we use when talking about AI is evolving in ways that depart from these terms’ original or intended meanings. As mentioned, this shift is not inherently negative; it reflects the natural fluidity of language as it adapts to new contexts and applications. Our goal is not to freeze language in time, but to strike a balance between linguistic evolution and the need for precision.
So, then, what to do about this? We recommend that, while embracing the natural evolution of language, leaders also take active steps to clarify and standardize AI terms. This doesn’t mean rigidly policing every word, but rather fostering a culture of reflection and precision: one where we routinely ask, “What do we really mean by this term?” and “Does our audience share this understanding?”
By anchoring our discussions in shared definitions and acknowledging the nuances behind AI’s most common terms, we can better align expectations, separate hype from substance, and guide critical decision-making.
As Guy Deutscher wrote in Through the Language Glass: Why the World Looks Different in Other Languages, “Language shapes the way we think, and determines the limits of our reality.”3 As with navigating any complicated topic, a shared reality is the basis from which all good decisions can be made.
1. Kittredge, George Lyman, and James B. Greenough. Words and Their Ways in English Speech. The Macmillan Company, 1935.
2. Crystal, David. A Little Book of Language. Reprint ed., Yale University Press, 2011.
3. Deutscher, Guy. Through the Language Glass: Why the World Looks Different in Other Languages. Metropolitan Books, 2010.