Understanding AI starts with understanding the language.
Before you can govern AI, align your teams, or enable safe adoption, everyone in the organization needs to agree on what these terms actually mean. Without a shared vocabulary, governance conversations stall. Policies get misinterpreted. Compliance gaps go unnoticed.
This glossary is adapted from authoritative sources including NIST AI frameworks, ISO standards, U.S. federal law, and regulatory guidance. Where relevant, we have added context for how these terms apply to AI governance in healthcare and regulated industries.
AI Governance and Compliance
AI Governance
The set of organizational policies, rules, frameworks, roles, and oversight processes that direct how AI is adopted, developed, deployed, and monitored within the organization. The objective is to ensure that AI-related risks are identified, managed, and mitigated across the AI lifecycle.
Why this matters: Most organizations have some form of data governance. Almost none have AI governance. They are different disciplines solving different problems. Data governance asks “who owns this data?” AI governance asks “who is using AI, and is it approved?”
Source: NIST AI 100-1
AI Governance Framework
A structured methodology for establishing AI governance within an organization. Includes defining what qualifies as AI, assigning ownership and approval authority, creating evaluation criteria, establishing procurement processes, and building ongoing monitoring.
Why this matters: Without a framework, governance is a policy document that sits in a drawer. With one, it becomes an operating system for how your organization adopts AI.
Source: AuthenTech AI
Shadow AI
AI tools adopted by employees without IT approval, compliance review, or executive oversight. Shadow AI is not malicious. It is a natural response to slow approval processes and unsolved problems. Employees find tools that work and use them.
Why this matters: Shadow AI is a demand signal. Every unauthorized tool your staff adopted tells you where the pain is, what problems are unsolved, and where your current processes are too slow. You cannot govern what you cannot see.
Source: AuthenTech AI
AI Risk Assessment
A risk-management process for identifying, estimating, and prioritizing risks arising from the operation and use of an AI system. Incorporates threat and vulnerability analyses and considers mitigations provided by controls planned or in place.
Source: NIST CSRC Glossary; NIST AI 100-1
AI Use Case Inventory
A maintained repository or listing of an organization’s AI use cases, intended to support governance, transparency, and risk management by documenting where and how AI is designed, developed, procured, or used, and the purpose and outputs associated with those uses.
Why this matters: This is the foundation of the Understanding pillar. Before you deploy new AI, audit what already exists. Shadow AI is almost never zero.
Source: OMB Guidance for AI Use Case Inventories; DOJ AI Inventory
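As a minimal sketch, an inventory entry can be modeled as a small record with an approval flag. The field names below are illustrative, not an official OMB schema:

```python
from dataclasses import dataclass

# Hypothetical inventory entry; field names are illustrative, not an
# official OMB schema.
@dataclass
class AIUseCase:
    name: str
    business_owner: str
    purpose: str
    lifecycle_stage: str  # e.g. "piloted", "procured", "deployed"
    approved: bool = False

inventory = [
    AIUseCase("Discharge summary drafting", "Clinical Ops",
              "Draft summaries for clinician review", "deployed",
              approved=True),
    AIUseCase("Personal ChatGPT use", "Unknown",
              "Ad hoc drafting (shadow AI)", "deployed"),
]

# Flag entries that lack governance approval -- the shadow AI signal.
unapproved = [u.name for u in inventory if not u.approved]
```

Querying for unapproved entries is exactly the audit step described above: it surfaces the tools already in use that governance has never reviewed.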
Guardrails
Layered safeguards that prevent an AI system from accessing or producing harmful information and from exhibiting unintended behavior. These may encompass policies, technical controls, and monitoring mechanisms, and may exist at the data, model, application, and infrastructure levels.
Source: arXiv:2512.10100
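The layered idea can be sketched as a chain of checks a prompt must clear before it reaches a model. The layer names, blocklist terms, and PII pattern below are hypothetical illustrations, not drawn from any cited standard:

```python
import re

# Illustrative policy blocklist and PII pattern; real guardrails combine
# many such controls at the data, model, application, and infra levels.
POLICY_BLOCKLIST = ["disable safety checks", "export patient records"]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US SSN-like string

def policy_layer(prompt: str) -> bool:
    """Application-level policy check (keyword blocklist)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in POLICY_BLOCKLIST)

def data_layer(prompt: str) -> bool:
    """Data-level check: reject prompts carrying SSN-like identifiers."""
    return SSN_PATTERN.search(prompt) is None

def passes_guardrails(prompt: str) -> bool:
    """A prompt must clear every layer to proceed."""
    return all(layer(prompt) for layer in (policy_layer, data_layer))
```

The design point is the layering: any single layer can veto, so a gap in one control does not disable the others.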
Responsible AI
Conscientious design, deployment, and governance of AI systems aligned with ethical principles, societal values, and legal requirements.
Source: NIST AI 100-1
Human-in-the-Loop (HITL)
A risk-control approach for AI where a human is integrated within the AI’s decision-making process. The human reviews, approves, or overrides AI outputs before they are acted upon.
Why this matters: In healthcare, HITL is not optional. Clinical decisions, patient communications, and treatment recommendations require human oversight regardless of AI accuracy.
Source: NIST AI 100-1
AI Lifecycle
The set of phases an AI system goes through: plan and design, collect and process data, build and use model, verify and validate, deploy and use, and operate and monitor. These phases are often iterative and not necessarily sequential.
Source: NIST AI 100-1; OECD Framework for the Classification of AI Systems
AI Observability
Complete visibility into who is using AI, what tools they are using, what data is being processed, and whether usage complies with organizational policies. Observability is the foundation of AI governance: you cannot govern what you cannot see.
Source: AuthenTech AI
AI Audit Logging
The systematic recording of all AI interactions, including prompts, responses, users, timestamps, and data classifications. Audit logs provide the evidence trail needed for compliance verification, incident investigation, and ongoing monitoring.
Source: AuthenTech AI
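A minimal sketch of what one audit record might capture, assuming a hypothetical in-memory trail; the field names are illustrative, not a standard log schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record; field names are illustrative, not a
# standard log schema.
@dataclass
class AIAuditRecord:
    user: str
    tool: str
    prompt: str
    response: str
    data_classification: str  # e.g. "public", "internal", "phi"
    timestamp: str

def log_interaction(trail: list, user: str, tool: str, prompt: str,
                    response: str, classification: str) -> AIAuditRecord:
    """Append one AI interaction to an in-memory audit trail."""
    record = AIAuditRecord(
        user=user, tool=tool, prompt=prompt, response=response,
        data_classification=classification,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    trail.append(record)
    return record

audit_trail: list = []
log_interaction(audit_trail, "jdoe", "chat-assistant",
                "Summarize the visitor policy",
                "Visitors must sign in at the front desk.", "internal")
```

In production the trail would go to durable, tamper-evident storage rather than a Python list, but the record shape (who, what tool, what content, what classification, when) is the core of the evidence trail.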
Core AI Concepts
Artificial Intelligence (AI)
A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems use machine and human-based inputs to perceive environments, abstract those perceptions into models, and use model inference to formulate options for information or action.
Source: 15 USC 9401
Generative AI
The class of AI that emulates the structure and characteristics of input data to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
Why this matters: When your employees use ChatGPT, Claude, or Gemini to draft emails or summarize reports, they are using generative AI. This is the category most likely to become Shadow AI in your organization.
Source: ISO/IEC TS 6254:2025
Large Language Model (LLM)
A class of machine learning models trained on large amounts of data through self-supervised learning to recognize patterns and respond to user requests in natural language.
Why this matters: GPT, Claude, Gemini, and Llama are all large language models. When evaluating AI platforms, understanding that these are different LLMs with different strengths helps your team make informed choices rather than defaulting to whatever is most familiar.
Source: FSOC 2024 Annual Report
Machine Learning
An AI learning method that enables computational systems to learn patterns, make predictions, and optimize decisions from large amounts of data without being explicitly programmed for each task. Encompasses supervised, unsupervised, and reinforcement learning paradigms.
Source: NIST AI 100-1
Agentic AI
A category of AI systems capable of independently making decisions, interacting with their environment, and optimizing processes without direct human intervention.
Why this matters: Agentic AI is the next wave of Shadow AI risk. Unlike chatbots that respond to prompts, agentic systems take autonomous actions. Governance frameworks need to account for AI that acts, not just AI that answers.
Source: DOI:10.1016/j.array.2025.100399
Foundation Model
A large machine learning model trained on vast amounts of raw, unlabeled data through unsupervised learning that can be adapted and applied to a wide range of downstream tasks. Large language models are a common type of foundation model.
Source: NIST CSRC Glossary
Retrieval Augmented Generation (RAG)
A type of generative AI system in which a model is paired with a separate information retrieval system (or “knowledge base”). The RAG system identifies relevant information and provides it to the generative AI model in context, allowing the model’s knowledge to be modified without retraining.
Why this matters: RAG is how organizations make AI useful with their own data without exposing that data to external training. A RAG system can answer questions about your internal policies, procedures, or patient protocols using your documents as the knowledge base.
Source: NIST CSRC Glossary
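The mechanics can be sketched with a toy keyword retriever standing in for a real vector search; the documents and prompt template below are illustrative assumptions:

```python
# Toy knowledge base; a production RAG system would use embeddings and a
# vector index rather than word overlap.
KNOWLEDGE_BASE = [
    "Visitors must sign in at the front desk and wear a badge.",
    "Incident reports are due within 24 hours of discovery.",
    "Remote access requires multi-factor authentication.",
]

def retrieve(question: str, docs: list, k: int = 1) -> list:
    """Rank documents by count of words shared with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Place retrieved passages in the model's context window."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return (f"Context:\n{context}\n\n"
            f"Question: {question}\nAnswer using only the context above.")
```

Note that the model itself never changes: updating the knowledge base updates what the system can answer, which is the "without retraining" property described above.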
Predictive Analytics
A discipline within AI that leverages historical data, statistical algorithms, and machine learning techniques to identify patterns and forecast future outcomes, behaviors, or events. Distinguished by emphasis on forward-looking insights rather than descriptive analysis.
Source: NIST SP 1270
Natural Language Processing (NLP)
The ability of a machine to process, analyze, and mimic human language, either spoken or written.
Source: NSCAI Technical Glossary
AI as a Service (AIaaS)
Cloud-based systems providing on-demand services to organizations and individuals to deploy, develop, train, and manage AI models.
Source: DOI:10.1007/s12599-021-00708-w
Prompt
Natural language text describing the task that an AI should perform. The quality and specificity of a prompt directly impacts the quality of AI output.
Source: Managing AI-Specific Cybersecurity Risks in Financial Services
AI Risks and Threats
Hallucination
A phenomenon in which an AI system produces output that is erroneous or flawed but presented in the form of a convincing narrative. Generative AI can produce flawed information even when the underlying data is free of defects.
Why this matters: In healthcare, a hallucinated response about drug interactions, treatment protocols, or patient eligibility could have serious consequences. This is why output validation and human-in-the-loop controls are essential, not optional.
Source: FSOC Annual Reports 2023/2024
Bias
A systematic distortion, as opposed to random error, that reduces the representativeness or accuracy of an AI system’s outputs or performance for its intended purposes. Bias may be introduced inadvertently or purposely. Common sources include statistical/computational, systemic, and human bias.
Why this matters: AI systems trained on biased data will produce biased outputs. In healthcare, this can mean diagnostic tools that perform differently across patient populations, or predictive models that systematically disadvantage certain groups.
Source: NIST SP 1270
Black Box
The nature of some AI techniques whereby the inferential operations are complex, hidden, or otherwise opaque to their developers and end users. Understanding how classifications, recommendations, or actions are generated is limited or impossible.
Source: NSCAI Technical Glossary
Adversarial AI
Techniques and attacks used to manipulate AI systems, causing them to make incorrect or unintended predictions or decisions. These techniques exploit vulnerabilities in AI models, often by subtly altering input data, training data, or model interactions.
Source: NIST AI 100-2e2025
Prompt Injection
An attack on an AI system that exploits how an application combines untrusted input with a prompt written by a higher-trust party, such as the application designer, so the system follows the untrusted instructions.
Why this matters: If your organization deploys AI-powered chatbots or patient-facing tools, prompt injection is a real attack vector. Guardrails must be in place to prevent users from manipulating AI systems into bypassing their intended behavior.
Source: NIST AI 100-2e2025
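The vulnerable pattern is a high-trust prompt concatenated with untrusted text. The sketch below shows that pattern plus a deliberately naive marker scan; real defenses go far beyond keyword matching, and every name here is illustrative:

```python
# Illustrative only: a trusted system prompt merged with untrusted user
# content, guarded by a naive keyword scan. Real injection defenses are
# far more involved than this.
SYSTEM_PROMPT = ("You are a scheduling assistant. "
                 "Answer scheduling questions only.")

INJECTION_MARKERS = ["ignore previous instructions", "ignore the above",
                     "you are now"]

def looks_like_injection(untrusted_text: str) -> bool:
    """Flag text containing common instruction-override phrases."""
    lowered = untrusted_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def assemble(untrusted_text: str) -> str:
    """Combine the designer's prompt with lower-trust user content."""
    if looks_like_injection(untrusted_text):
        raise ValueError("possible prompt injection detected")
    return f"{SYSTEM_PROMPT}\n\nUser message: {untrusted_text}"
```

The structural problem is visible in `assemble`: once trusted and untrusted text share one prompt, the model cannot reliably tell whose instructions to follow.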
Data Poisoning
An attack that corrupts and contaminates training data to compromise an AI system’s performance.
Source: BIS Financial Stability Institute
AI Drift / Decay
The tendency for an AI model’s performance to degrade over time when deployed in a real-world setting with differing conditions from those present in training and testing.
Why this matters: An AI model that performs well at deployment can silently degrade over months. Ongoing monitoring is not a nice-to-have. It is a governance requirement.
Source: ISO/IEC 12792:2025; NIST AI 100-1
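One simple monitoring pattern is to compare recent accuracy on labeled outcomes against the accuracy measured at deployment. This is a sketch under stated assumptions; the tolerance value is illustrative, not a recommended threshold:

```python
# Hypothetical drift check: flag a model whose recent accuracy on labeled
# outcomes has slipped below its deployment baseline.
def accuracy(predictions: list, actuals: list) -> float:
    """Fraction of predictions matching observed outcomes."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

def drift_alert(baseline_acc: float, recent_preds: list,
                recent_actuals: list, tolerance: float = 0.05) -> bool:
    """True when recent accuracy falls more than `tolerance` below baseline."""
    return accuracy(recent_preds, recent_actuals) < baseline_acc - tolerance
```

Run over a rolling window, a check like this turns silent degradation into an explicit alert a governance team can act on.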
Deepfake
AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, or other entities and would falsely appear to a person to be authentic or truthful.
Source: EU AI Act, Article 3(60)
Model Risk
The potential for adverse consequences from decisions based on incorrect or misused model outputs and reports. Model risk can arise from individual models or in the aggregate across multiple models.
Source: Comptroller’s Handbook, Model Risk Management
Third-Party AI Risk
Risk that arises when an organization relies on another entity to develop, provide, host, operate, or support AI systems or key AI components such as models, data, and related infrastructure.
Why this matters: Every AI vendor your organization uses introduces third-party risk. Free consumer AI tools introduce the most risk because they rarely come with enterprise-grade data protection agreements, audit capabilities, or compliance guarantees.
Source: NIST AI 100-1; Interagency Guidance on Third-Party Relationships
Synthetic Identity
The use of a combination of real and fake personally identifiable information (PII) to fabricate a person or entity.
Source: FinCEN Financial Trend Analysis
Data and Technical Concepts
Algorithm
A clearly specified mathematical process for computation; a set of rules that, if followed, will give a prescribed result.
Source: NIST SP 800-107r1
Deep Learning
A machine learning implementation technique that uses large quantities of data, or feedback from interactions with a simulation or an environment, as training sets for a network with multiple hidden layers, called a deep neural network.
Source: NIST Big Data Interoperability Framework
Training Data
A subset of input data samples used to train a machine learning model. The quality, representativeness, and size of training data directly impact model performance and bias.
Source: ISO/IEC DIS 22989
Structured Data
Data that is divided into standardized pieces that are identifiable and accessible by both humans and computers.
Source: SEC.gov
Unstructured Data
Data that does not have a predefined data model or is not organized in a predefined way. May include multimedia files, images, sound files, or unstructured text.
Source: NIST SP 1500-1r2
Synthetic Data
Artificial data generated by a purpose-built mathematical model or algorithm that is statistically realistic. It can be used for activities such as model development and training without exposing real data.
Source: FCA Report on Synthetic Data in Financial Services
Data Lineage
The history of processing of a data element, which may include point-to-point data flows and the data actions performed upon the data element.
Source: NIST CSWP
Explainability
Property of an AI system that enables a given human audience to comprehend the reasons for the system’s behavior; the ability to understand an AI system’s output and decision given certain inputs.
Why this matters: In regulated industries, “the AI said so” is not an acceptable explanation. Stakeholders, auditors, and patients need to understand why an AI system made a specific recommendation.
Source: ISO/IEC TS 6254:2025
Deterministic (Algorithm / Model)
An algorithm or model that, given the same inputs, always produces the same outputs. Most generative AI models are non-deterministic, meaning they may produce different outputs from the same input.
Source: NIST CSRC Glossary
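A toy next-token choice illustrates the distinction. The scores and tokens are made up, and the temperature mechanic is a simplified stand-in for how real generative models sample:

```python
import math
import random

# Made-up token scores; a simplified stand-in for model logits.
SCORES = {"yes": 2.0, "maybe": 1.0, "no": 0.5}

def pick(scores: dict, temperature: float, rng: random.Random) -> str:
    if temperature == 0:
        # Deterministic: the highest-scoring token wins every time.
        return max(scores, key=scores.get)
    # Non-deterministic: sample in proportion to softmax-style weights,
    # so repeated calls on the same input can differ.
    tokens = list(scores)
    weights = [math.exp(scores[t] / temperature) for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]
```

This is why the same prompt to a generative model can yield different answers on different days, a property governance processes should expect rather than treat as a malfunction.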
Source Attribution
Definitions in this glossary are adapted from the following authoritative sources:
- NIST AI 100-1 – Artificial Intelligence Risk Management Framework, National Institute of Standards and Technology.
- NIST AI 100-2e2025 – Adversarial Machine Learning Taxonomy and Terminology, National Institute of Standards and Technology.
- NIST SP 1270 – Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.
- NIST CSRC Glossary – Computer Security Resource Center Glossary, National Institute of Standards and Technology.
- ISO/IEC TS 6254:2025 – Information Technology, AI Explainability and Interpretability.
- ISO/IEC 12792:2025 – Information Technology, AI Model Lifecycle.
- ISO/IEC DIS 22989 – Information Technology, Artificial Intelligence Concepts and Terminology.
- EU AI Act – Regulation (EU) 2024/1689.
- 15 USC 9401 – National Artificial Intelligence Initiative Act, Definitions.
- FSOC Annual Reports (2023/2024) – Financial Stability Oversight Council.
- Comptroller’s Handbook – Model Risk Management, Office of the Comptroller of the Currency.
- NSCAI Technical Glossary – National Security Commission on Artificial Intelligence.
- Interagency Guidance on Third-Party Relationships – OCC, Federal Reserve, FDIC.
Where noted, additional context has been provided by AuthenTech AI based on practical experience in AI governance for healthcare and regulated industries.
