Adapting Risk Controls to Novel Threats
Adapting Risk Controls to Novel Threats refers to the proactive adjustment of risk management frameworks in response to emerging and unforeseen risks associated with AI technologie...
Domain Index
Terms and concepts for classifying AI risk, assessing impact, applying controls, and building accountability, fairness, and assurance into governance programs.
AI Risk Appetite and Tolerance Statements are formal declarations by an organization that outline the level of risk it is willing to accept in the deployment and use of AI technolo...
AI Risk refers to the unique challenges and uncertainties associated with artificial intelligence systems, which differ significantly from traditional IT risks. While traditional I...
Assessing Materiality of Bias Risks involves evaluating the significance of potential biases in AI systems and their impact on decision-making processes. This concept is crucial in...
Assumptions and constraints in AI use cases refer to the predefined beliefs and limitations that guide the development and deployment of AI systems. These elements are crucial in A...
Automated Decision-Making (ADM) refers to the use of algorithms and AI systems to make decisions without human intervention. In the context of AI governance, it is crucial to ensur...
The concept of Business Objective vs AI Capability refers to the alignment between an organization's strategic goals and the technical capabilities of AI systems. In AI governance,...
Consent and data collection in AI contexts refer to the ethical and legal requirement that individuals must provide explicit permission before their personal data is collected, pro...
Core components of an AI Impact Assessment (AIA) include identifying potential risks, evaluating ethical implications, assessing societal impacts, and ensuring compliance with lega...
Data Governance in AI Systems refers to the management of data availability, usability, integrity, and security within AI frameworks. It is crucial in AI governance as it ensures t...
Data lineage and provenance refer to the tracking and visualization of the flow of data through its lifecycle, from its origin to its final destination. In AI governance, understan...
Defining the intended purpose of an AI system involves clearly articulating the specific goals and applications for which the AI is designed. This is crucial in AI governance as it...
Designing AI use cases for multi-jurisdiction deployment involves creating AI applications that comply with the diverse legal, ethical, and cultural standards across different regi...
Designing frameworks for risk tolerance and escalation involves establishing structured approaches to identify, assess, and respond to risks associated with AI systems. This is cru...
Designing use cases to avoid prohibited or high-risk classification involves creating AI applications that do not fall into categories deemed unsafe or unethical by regulatory fram...
Documentation across the AI lifecycle refers to the systematic recording of all processes, decisions, and changes made during the development, deployment, and maintenance of AI sys...
Documenting Intended Purpose and Context involves clearly articulating the objectives and operational environment for which an AI system is designed. This practice is crucial in AI...
Dynamic Risk Reassessment Over Time refers to the continuous evaluation and adjustment of risk management strategies in response to changing conditions, technologies, and outcomes...
Early Cross-Border Risk Indicators refer to metrics and signals that help identify potential risks associated with AI systems operating across different jurisdictions. In AI govern...
Early Risk Signals During Use Case Design refer to the proactive identification of potential risks associated with an AI application during its initial design phase. This concept i...
The Ethical Evaluation of Fairness Trade-Offs involves assessing the balance between competing fairness criteria in AI systems, such as equality of opportunity versus overall accur...
Evaluating Risk Management Effectiveness Across Portfolios involves assessing how well risk management strategies perform across different AI projects or initiatives within an orga...
Explainability Expectations for Data Subject Requests refer to the obligation of organizations to provide clear, understandable explanations to individuals (data subjects) about ho...
Fairness as a Governance Objective refers to the principle that AI systems should operate without bias, ensuring equitable outcomes across different demographic groups. This concep...
Fairness trade-offs in high-stakes decisions refer to the inherent conflicts that arise when attempting to achieve fairness in AI systems, particularly in critical areas like healt...
Handling Data Subject Requests in AI Systems refers to the processes and protocols established to manage requests from individuals regarding their personal data, such as access, co...
In-scope vs out-of-scope decisions refer to the classification of decisions made during AI project development based on their relevance to the project's defined objectives and ethi...
Likelihood vs Impact in AI governance refers to a risk assessment framework that evaluates potential risks based on two dimensions: the probability of an adverse event occurring (l...
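The likelihood-versus-impact framing is often operationalized as a simple scoring matrix. A minimal sketch in Python, assuming illustrative 1–5 ordinal scales and hypothetical band thresholds (real programs calibrate these to their own risk appetite):

```python
# Minimal likelihood x impact scoring sketch. The 1-5 scales and band
# cut-offs below are illustrative assumptions, not a standard.
def risk_score(likelihood: int, impact: int) -> int:
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

def risk_band(score: int) -> str:
    # Hypothetical cut-offs; organizations set these per their risk appetite.
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

score = risk_score(likelihood=4, impact=5)
print(score, risk_band(score))  # 20 high
```

The product form is only one convention; some frameworks use lookup tables so that, for example, a low-likelihood catastrophic impact still lands in the highest band.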
Maintaining Risk Consistency Across Decisions refers to the practice of ensuring that risk assessments and management strategies are uniformly applied across all AI-related decisio...
Managing Risk Dependencies Across Domains involves identifying and addressing interdependencies between various risk factors that can affect AI systems across different sectors or...
Model Risk Beyond Bias refers to the potential for AI models to produce harmful outcomes not just due to biased data but also from inherent model design flaws, misalignment with ob...
Planning for Risk Evolution and Accumulation involves anticipating and managing the dynamic nature of risks associated with AI systems over time. This concept is crucial in AI gove...
Portfolio-Level AI Risk Management refers to the systematic assessment and management of risks associated with multiple AI projects within an organization. This approach is crucial...
Prioritising Risks Under Resource Constraints refers to the strategic approach of identifying, assessing, and managing risks associated with AI systems when limited resources (fina...
Protected attributes refer to characteristics such as race, gender, age, or disability that should not unfairly influence AI decision-making processes. Sensitive inference involves...
AI Impact Assessments (AIAs) are systematic evaluations that analyze the potential effects of AI systems on individuals, society, and the environment. They are crucial in AI govern...
Record-Keeping vs Knowledge Sharing in AI governance refers to the balance between maintaining detailed documentation of AI systems (record-keeping) and promoting the dissemination...
Residual Risk Acceptance for High-Risk AI refers to the process of acknowledging and accepting the remaining risks associated with deploying AI systems after all feasible mitigatio...
Residual risk refers to the remaining risk after all mitigation measures have been implemented in an AI system. Risk acceptance is the decision to accept this residual risk rather...
Residual Risk Documentation and Sign-Off refers to the formal process of identifying, assessing, and documenting the remaining risks associated with an AI system after all mitigati...
Risk aggregation across AI systems refers to the process of identifying, assessing, and managing cumulative risks that arise when multiple AI systems operate in conjunction. This c...
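Aggregation can be sketched as rolling per-system scores up into a portfolio view. The method below (worst-case score plus a list of systems above a review threshold) and all names are illustrative assumptions, not a prescribed technique:

```python
# Hypothetical portfolio roll-up: keep the worst-case score and flag any
# system whose score meets an assumed review threshold.
def aggregate(system_scores: dict[str, int], threshold: int = 12) -> dict:
    return {
        "worst_case": max(system_scores.values()),
        "needs_review": [name for name, score in system_scores.items()
                         if score >= threshold],
    }

portfolio = aggregate({"chatbot": 6, "credit-scoring": 16, "routing": 12})
print(portfolio)  # {'worst_case': 16, 'needs_review': ['credit-scoring', 'routing']}
```

A simple maximum understates interaction effects between systems; richer aggregation would also model shared dependencies (common data sources, shared models), which is the point the card makes about conjoined operation.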
The Risk-Based Governance Lifecycle (Identify, Assess, Treat, Monitor) is a systematic approach in AI governance that focuses on identifying potential risks associated with AI syst...
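The Identify → Assess → Treat → Monitor cycle can be sketched as data flowing through four stages. Everything below (field names, the treatment rule, the score threshold) is an illustrative assumption:

```python
# Sketch of the Identify -> Assess -> Treat -> Monitor lifecycle.
# All names and thresholds are illustrative, not a standard implementation.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int = 0   # 1-5, set during assessment
    impact: int = 0       # 1-5, set during assessment
    treatment: str = ""   # e.g. "mitigate" or "accept"
    status: str = "identified"

def assess(risk: Risk, likelihood: int, impact: int) -> Risk:
    risk.likelihood, risk.impact = likelihood, impact
    risk.status = "assessed"
    return risk

def treat(risk: Risk) -> Risk:
    # Hypothetical rule: mitigate anything scoring above 8, accept the rest.
    risk.treatment = "mitigate" if risk.likelihood * risk.impact > 8 else "accept"
    risk.status = "treated"
    return risk

def monitor(risk: Risk) -> str:
    # Monitoring feeds back into identification when conditions change.
    return f"{risk.name}: {risk.treatment} (re-assess on material change)"

r = treat(assess(Risk("model drift"), likelihood=4, impact=3))
print(monitor(r))
```

The feedback loop matters: monitoring output that signals a material change should re-enter the cycle at the identification stage rather than terminate it.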
Risk-Based Prioritisation in Compliance Programs refers to the strategic approach of identifying, assessing, and prioritizing risks associated with AI technologies to ensure that c...
Risk-Based Selection of Governance Models refers to the process of choosing appropriate governance frameworks based on the specific risks associated with AI systems. This approach...
Risk Classification as a Governance Decision involves categorizing AI systems based on their potential risks to individuals and society. This classification is critical in AI gover...
Risk identification within impact assessments refers to the systematic process of recognizing potential risks associated with AI systems before they are deployed. This concept is c...
Risk Management Expectations for High-Risk AI refer to the structured processes and criteria that organizations must follow to identify, assess, and mitigate risks associated with...
Risk owners are individuals or teams responsible for identifying, assessing, and mitigating risks associated with AI systems. Accountability in risk management ensures that these o...
Risk Taxonomy for AI refers to a structured framework that categorizes potential risks associated with AI systems into distinct areas: Privacy, Bias, Safety, Security, Performance,...
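A taxonomy like this is typically encoded as a fixed set of categories so that every identified risk is tagged consistently. A minimal sketch using the categories named in the card (the source list is truncated, so only the visible categories appear here):

```python
# The taxonomy categories visible in the card, expressed as an enum so
# risk registers tag entries consistently. The card's list is truncated;
# a real taxonomy would likely include further categories.
from enum import Enum

class RiskCategory(Enum):
    PRIVACY = "privacy"
    BIAS = "bias"
    SAFETY = "safety"
    SECURITY = "security"
    PERFORMANCE = "performance"

def tag(description: str, category: RiskCategory) -> dict:
    return {"risk": description, "category": category.value}

entry = tag("re-identification from model outputs", RiskCategory.PRIVACY)
print(entry)
```

Using an enum (rather than free-text labels) is a small design choice that pays off when aggregating risks across systems, since category names cannot drift between registers.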
Risk trade-offs between business units refer to the strategic decision-making process where organizations evaluate the potential risks and benefits associated with deploying AI tec...
Impact assessments in high-risk AI governance are systematic evaluations that analyze the potential effects of AI systems on individuals and society before their deployment. These...
Sources of Bias Across the AI Lifecycle refer to the various stages where biases can be introduced in AI systems, including data collection, model training, validation, and deploym...
The trade-offs between fairness, accuracy, and utility in AI governance refer to the challenges of optimizing these three competing objectives when designing AI systems. Fairness a...
Training data refers to the dataset used to train an AI model, while operational data is the real-time data the model encounters during its deployment. In AI governance, distinguis...
Types of AI Governance Documentation refer to the various forms of records and guidelines that organizations create to manage AI systems effectively. This includes policies, proced...
Types of Impact Assessments, including Data Protection Impact Assessments (DPIA), Algorithmic Impact Assessments (AIA), and Hybrid assessments, are frameworks used to evaluate the...
Users, subjects, and affected stakeholders refer to the individuals and groups that interact with, are impacted by, or have a vested interest in an AI system. In AI governance, ide...
Using Impact Assessments as Assurance Evidence involves systematically evaluating the potential effects of AI systems on individuals and society before deployment. This process is...
Using Impact Assessments to Inform Go / No-Go Decisions involves systematically evaluating the potential effects of an AI system before its deployment. This process is crucial in A...
Using risk appetite to shape compliance decisions involves defining the level of risk an organization is willing to accept while pursuing its AI initiatives. This concept is crucia...
Bias in AI systems refers to the systematic favoritism or discrimination that occurs when algorithms produce results that are prejudiced due to flawed training data, model design,...
An AI use case refers to a specific application of artificial intelligence technology to solve a defined problem or achieve a particular goal within an organization. In the context...
An AI Impact Assessment (AIA) is a systematic evaluation process that determines the potential effects of an AI system on individuals, society, and the environment before its depl...
The concept of when a use case should be stopped or redesigned refers to the critical evaluation of AI applications to determine if they pose unacceptable risks or ethical concerns...
The concept of 'When Risk Becomes Unacceptable' in AI governance refers to the threshold at which the potential harms or negative consequences of an AI system outweigh its benefits...
Documentation as a governance control refers to the systematic recording of processes, decisions, and data related to AI systems. It is crucial in AI governance because it ensures...
Categories within Risk, Impact & Assurance:
Advanced Risk Management & Tolerance
Bias, Fairness & Model Risk
Data Governance & Management
Documentation & Record-Keeping
Impact Assessments
Risk Identification & Assessment
Use Case Definition & Scoping