Capability Institute
Framework Document

AI Literacy Competency Framework

Observable, trainable workplace competencies for AI fluency across any role or function.

Based on the 4D AI Fluency Framework (Dakan, Feller & Anthropic, 2026), released under CC BY-NC-SA 4.0.

4 Domains
24 Competencies
120 Proficiency Levels
360 Behavioural Indicators
capabilityinstitute.com

AI literacy as observable capability, not technical expertise.

This framework defines AI literacy as a set of observable, trainable workplace competencies. It is designed to help individuals and organisations assess, develop, and apply AI fluency across any role or function. Each competency uses the Dreyfus Model of Skill Acquisition, with five levels — Novice, Advanced Beginner, Competent, Proficient, and Expert — reflecting progressive development in autonomy, judgement, and the ability to handle complexity.

The framework is grounded in the 4D AI Fluency Framework developed by Prof. Rick Dakan (Ringling College of Art and Design) and Prof. Joseph Feller (University College Cork), produced in partnership with Anthropic and released under the CC BY-NC-SA 4.0 licence.

Behaviour-Based: Every indicator is observable and assessable, not theoretical.

Vendor-Agnostic: No references to specific AI tools or platforms.

Trainable: Every competency can be developed through deliberate practice.

Scalable: Applicable across all roles, functions, and industries.

Four Domains

Delegation: Setting goals and deciding whether, when, and how to engage AI.

Description: Communicating goals, context, and constraints to AI systems effectively.

Discernment: Accurately assessing the quality, relevance, and appropriateness of AI outputs.

Diligence: Taking active responsibility for how AI is used and what is done with outputs.
Domain 1: Delegation
Delegation covers the ability to set goals and make sound decisions about whether, when, and how to engage AI — across automation, augmentation, and agency modes of interaction.
1.1 AI Suitability Assessment
The ability to evaluate whether a given task or problem is appropriate for AI involvement. Effective performance means consistently matching tasks to the right mode of AI interaction based on task complexity, stakes, and available AI capability.
Novice: Relies on trial and error to decide whether to use AI, with limited awareness of what AI can and cannot do.
Behavioural Indicators
  • Attempts to use AI for a task without first considering whether it is well-suited to AI involvement.
  • Selects an AI tool based on familiarity rather than fit for the task at hand.
  • Accepts a poor AI output without recognising that the task may have been unsuitable for AI in the first place.
Advanced Beginner: Begins to recognise patterns in which tasks produce useful AI outputs and which do not.
Behavioural Indicators
  • Identifies straightforward, well-defined tasks as suitable for AI and attempts these first.
  • Avoids using AI for highly sensitive tasks after experiencing poor results, without being able to explain the underlying reason.
  • Asks colleagues or consults guidance to decide whether AI is appropriate for an unfamiliar task type.
Competent: Applies a consistent set of criteria to assess AI suitability before starting a task.
Behavioural Indicators
  • Evaluates a task against explicit criteria — such as data sensitivity, output stakes, and task definition — before deciding to involve AI.
  • Selects between automation, augmentation, and agency modes based on the nature and scope of the task.
  • Recognises when a task is unsuitable for AI and documents the rationale to support consistent practice across the team.
Proficient: Makes delegation decisions with speed and confidence across a wide range of task types.
Behavioural Indicators
  • Scans a complex project and maps sub-tasks to appropriate modes of AI interaction before work begins.
  • Adjusts delegation decisions in real time when task scope or requirements change mid-project.
  • Coaches peers through suitability decisions, articulating the reasoning behind both delegation and non-delegation choices.
Expert: Operates as a reference point for AI delegation decisions across the organisation.
Behavioural Indicators
  • Develops and refines organisational criteria for AI suitability that are applicable across multiple functions and task types.
  • Identifies delegation failure patterns at team or workflow level and initiates targeted interventions.
  • Contributes to policy or process design that embeds sound AI delegation decisions into standard operating procedures.
1.2 Task Decomposition for AI
The ability to break down complex goals into component sub-tasks and determine which parts benefit from AI involvement. Effective performance means structuring tasks in ways that allow AI to contribute meaningfully while keeping humans appropriately in control.
Novice: Treats tasks as single, undivided units and either delegates the whole task to AI or none of it.
Behavioural Indicators
  • Submits an entire, complex task to AI without considering which components are appropriate for AI involvement.
  • Does not revisit a poor AI output to identify which part of the task was the root cause of failure.
  • Describes tasks to AI at the same level of abstraction used when briefing a human colleague.
Advanced Beginner: Begins to identify discrete components within a task and experiments with delegating specific parts to AI.
Behavioural Indicators
  • Separates a task into broad stages and uses AI for one stage at a time.
  • Recognises that some sub-tasks produced better AI outputs than others but cannot consistently explain why.
  • Starts to sequence AI involvement across task stages rather than treating it as a one-step interaction.
Competent: Reliably breaks tasks into well-defined components and makes deliberate decisions about where to apply AI.
Behavioural Indicators
  • Maps a multi-step project into discrete sub-tasks and assigns each to either human effort, AI assistance, or a hybrid approach.
  • Identifies which sub-tasks require structured inputs to AI and prepares accordingly.
  • Adjusts task decomposition based on observed AI performance during a workflow, redirecting AI involvement where needed.
Proficient: Designs task workflows with AI integration as a deliberate structural element.
Behavioural Indicators
  • Constructs multi-stage workflows that sequence AI inputs and human review points to optimise output quality.
  • Identifies interdependencies between sub-tasks and manages the risk of AI errors propagating across a workflow.
  • Adapts decomposition strategies when working with different AI modes across the same project.
Expert: Establishes decomposition approaches that others can replicate across teams and functions.
Behavioural Indicators
  • Designs reusable workflow templates that embed task decomposition logic for AI involvement across common job functions.
  • Identifies task decomposition as a root cause of underperformance in AI adoption programmes and proposes structured remedies.
  • Leads cross-functional work to standardise how complex processes are broken down to enable consistent AI integration.
1.3 AI Capability Awareness
The ability to maintain an accurate and current understanding of what AI systems can and cannot do. Effective performance means applying a realistic, evidence-based model of AI strengths and limitations to every delegation decision.
Novice: Has a limited or inaccurate mental model of AI capability, often shaped by media portrayals or isolated experiences.
Behavioural Indicators
  • Delegates tasks to AI with the expectation that it will perform like a knowledgeable human expert.
  • Expresses surprise when AI produces an incorrect, incomplete, or fabricated output.
  • Does not update their understanding of AI capability after encountering unexpected results.
Advanced Beginner: Develops a working awareness of common AI strengths and failure modes through direct experience.
Behavioural Indicators
  • Identifies task types where AI has performed well or poorly in their own work and adjusts future use accordingly.
  • Recognises common AI failure modes such as confident-sounding errors or loss of detail in long tasks.
  • Seeks out informal guidance or examples when encountering an AI task type they have not used before.
Competent: Applies a consistent and reasonably accurate model of AI capability across the majority of work tasks.
Behavioural Indicators
  • Identifies when a task requires current, specialised, or verifiable knowledge that AI is unlikely to have, and plans accordingly.
  • Adjusts the scope and structure of AI tasks to work within known AI limitations.
  • Explains AI capability limitations to colleagues when they are about to make a delegation decision likely to produce poor results.
Proficient: Maintains an up-to-date, nuanced understanding of AI capability across multiple AI modes and tools.
Behavioural Indicators
  • Tracks changes in AI capability across model updates and adjusts team workflows and delegation norms accordingly.
  • Distinguishes between fundamental AI limitations and current-model-specific weaknesses when advising on delegation decisions.
  • Identifies tasks where AI capability is improving rapidly and flags these as priority areas for workflow redesign.
Expert: Acts as an authoritative internal resource on AI capability for the organisation.
Behavioural Indicators
  • Produces capability briefs or guidance documents that help teams across the organisation make informed delegation decisions.
  • Identifies gaps between how AI is being used and what current AI capability actually supports, and drives corrective action.
  • Engages with external sources to keep the organisation's understanding of AI capability current.
1.4 Mode Selection and Switching
The ability to identify the most appropriate mode of AI interaction — automation, augmentation, or agency — and to shift between modes as task requirements evolve. Effective performance means deliberately choosing and adjusting the mode of AI involvement based on task nature, stakes, and oversight required.
Novice: Uses AI in a single, default way regardless of task type, unaware that different modes of AI interaction exist.
Behavioural Indicators
  • Applies the same interaction style to every AI task without considering whether a different approach might be more effective.
  • Does not recognise when a task requires ongoing human-AI collaboration rather than a single delegated output.
  • Attempts to fully automate tasks that require human judgement, or over-involves themselves in tasks AI could handle independently.
Advanced Beginner: Becomes aware of different ways of working with AI and begins to experiment with different interaction styles.
Behavioural Indicators
  • Tries working alongside AI as a thinking partner on a task rather than simply asking it to produce an output.
  • Recognises after the fact that a different mode of engagement would have produced a better result.
  • Asks questions about how others are using AI for different task types to inform their own mode choices.
Competent: Selects AI interaction modes deliberately based on task characteristics and applies each mode with reasonable consistency.
Behavioural Indicators
  • Chooses automation for well-defined, repeatable tasks and augmentation for tasks requiring judgement, applying these distinctions consistently.
  • Recognises mid-task when a mode switch is needed and makes the adjustment.
  • Articulates to colleagues the rationale for choosing a specific mode for a given task type.
Proficient: Manages mode selection across complex, multi-stage tasks with confidence.
Behavioural Indicators
  • Plans the mode of AI interaction for each stage of a complex project before beginning, including where human-AI handoffs will occur.
  • Identifies when an agency-mode setup is more efficient than repeated manual augmentation, and makes the transition deliberately.
  • Reviews AI interaction modes used across a project and identifies where different mode choices would have improved outcomes.
Expert: Defines and models best-practice mode selection across the organisation.
Behavioural Indicators
  • Develops guidance frameworks that help teams consistently select appropriate AI interaction modes for common task categories.
  • Audits team or workflow-level AI use to identify patterns of mode misalignment and designs targeted interventions.
  • Contributes to organisational AI strategy by articulating how the balance between automation, augmentation, and agency should evolve as AI capability matures.
1.5 Goal Clarity Before Delegation
The ability to define and articulate a clear, well-scoped objective before engaging AI on a task. Effective performance means arriving at an AI interaction with a defined outcome in mind, realistic constraints, and clarity about what a successful result looks like.
Novice: Engages AI with loosely defined or implicit goals, resulting in outputs that frequently miss the mark.
Behavioural Indicators
  • Submits open-ended or ambiguous requests to AI without first clarifying what a successful output would look like.
  • Accepts a misaligned AI output without recognising that the root cause was an unclear initial goal.
  • Does not distinguish between what they want AI to produce and what they actually need to achieve their work objective.
Advanced Beginner: Begins to recognise the link between goal clarity and output quality.
Behavioural Indicators
  • Adds basic context to AI requests after experiencing poor results from vague inputs.
  • Identifies the primary outcome they want but does not consistently specify format, scope, or quality criteria.
  • Reviews an AI output against their intended goal and identifies discrepancies, even if unable to immediately correct them.
Competent: Defines clear, scoped goals before engaging AI as a consistent practice.
Behavioural Indicators
  • Writes out the intended outcome, key constraints, and success criteria before initiating an AI task on any work of moderate complexity.
  • Refines an initial goal definition when early AI outputs reveal that the original framing was incomplete or ambiguous.
  • Uses a defined goal to assess whether an AI output meets requirements or needs further iteration.
Proficient: Applies rigorous goal definition to complex, multi-part tasks and ensures AI interactions remain aligned throughout.
Behavioural Indicators
  • Structures complex goals into primary outcomes and subsidiary requirements before beginning a multi-stage AI workflow.
  • Monitors AI outputs across a task sequence to detect when responses are drifting from the original goal and intervenes with clarifying input.
  • Advises colleagues on how to sharpen goal definitions that are producing poor AI outputs.
Expert: Shapes how goal definition is embedded into AI workflow design across the organisation.
Behavioural Indicators
  • Develops goal-setting templates or pre-task frameworks that teams use to structure AI interactions across common work scenarios.
  • Identifies patterns of goal ambiguity contributing to poor AI outcomes across teams and designs targeted capability interventions.
  • Incorporates goal clarity standards into AI use guidelines, onboarding, or quality assurance processes used across the organisation.
1.6 Human Oversight Calibration
The ability to determine the appropriate level of human review, control, and intervention required when AI is involved in a task. Effective performance means continuously calibrating the degree of human involvement based on task stakes, output complexity, and confidence in AI performance.
Novice: Applies a fixed, undifferentiated level of oversight to all AI tasks.
Behavioural Indicators
  • Submits an AI output to a downstream process without reviewing it for accuracy, relevance, or completeness.
  • Applies the same level of review to a low-stakes AI-generated summary as to a high-stakes AI-drafted recommendation.
  • Does not recognise that some AI tasks require a human decision point before outputs are acted upon.
Advanced Beginner: Begins to differentiate oversight needs based on task type or prior experience with AI performance.
Behavioural Indicators
  • Reviews AI outputs more carefully on tasks where AI has previously produced errors, without applying a systematic approach.
  • Recognises that some outputs warrant closer human review than others.
  • Starts to build in a basic review step for AI outputs before sharing or using them.
Competent: Applies a consistent and deliberate approach to oversight calibration, adjusting human review intensity based on task stakes.
Behavioural Indicators
  • Assesses task risk and output stakes before deciding the depth and type of human review required.
  • Establishes explicit review points within AI-assisted workflows, specifying what will be checked and by whom.
  • Adjusts oversight intensity when task parameters change, such as when an AI task involves sensitive data or public-facing outputs.
Proficient: Designs AI workflows with oversight structures built in from the outset.
Behavioural Indicators
  • Builds human checkpoint structures into multi-stage AI workflows, defining the purpose and scope of each review point.
  • Identifies when AI performance on a task has deteriorated and escalates the level of oversight accordingly.
  • Advises colleagues on oversight calibration decisions, distinguishing between tasks where light-touch review is appropriate and those requiring substantive human judgement.
Expert: Establishes oversight frameworks that operate at an organisational or process level.
Behavioural Indicators
  • Develops oversight frameworks that specify human review requirements across different categories of AI-assisted tasks, scaled to risk and stakes.
  • Identifies oversight failures and redesigns workflows to prevent recurrence.
  • Contributes to organisational AI governance by defining standards for human-in-the-loop requirements across high-stakes functions.
Domain 2: Description
Description covers the ability to effectively communicate goals, context, and constraints to AI systems in order to elicit useful, accurate, and relevant outputs. It is the craft of human-to-AI instruction — spanning clarity of language, richness of context, and precision of structure.
2.1 Prompt Construction
The ability to craft clear, well-structured instructions that direct AI towards a specific, useful output. Effective performance means writing prompts that are specific, unambiguous, and structured in ways that consistently produce outputs aligned with the intended goal.
Novice: Constructs prompts as informal, conversational requests with little structure or specificity.
Behavioural Indicators
  • Submits a single-sentence, vague request to AI and accepts the output without attempting to refine the instruction.
  • Uses the same prompt structure regardless of task type, complexity, or desired output format.
  • Does not recognise that a poor AI output may be the result of an inadequately constructed prompt.
Advanced Beginner: Begins to experiment with prompt structure and wording after experiencing variable AI output quality.
Behavioural Indicators
  • Adds basic detail to prompts after receiving outputs that missed the mark.
  • Identifies that more specific prompts tend to produce more useful outputs, but applies this observation inconsistently.
  • Rewrites a prompt after receiving a poor output, making changes based on intuition rather than a deliberate diagnostic.
Competent: Applies a consistent and deliberate approach to prompt construction, incorporating task context, output requirements, and constraints.
Behavioural Indicators
  • Constructs prompts that include the task objective, relevant context, output format, and any key constraints.
  • Diagnoses a poor AI output by identifying which element of the prompt was insufficient, and revises accordingly.
  • Adapts prompt structure based on task type, applying more detailed construction for complex or high-stakes tasks.
Proficient: Constructs sophisticated prompts across a wide range of task types with speed and precision.
Behavioural Indicators
  • Designs multi-part prompts that guide AI through complex tasks in stages, managing the risk of drift or misinterpretation.
  • Identifies in advance where an AI system is likely to default to generic outputs and pre-empts this through specific instructional framing.
  • Reviews prompt construction used by colleagues and provides targeted feedback on how structure or specificity could be improved.
Expert: Operates as an organisational authority on prompt construction.
Behavioural Indicators
  • Develops prompt templates and construction guides that are adopted across teams for common task categories.
  • Identifies patterns of prompt failure across a team or function and designs targeted interventions.
  • Contributes to organisational AI use standards by defining prompt quality criteria embedded into workflow documentation.
2.2 Context Provision
The ability to identify and supply the background information, constraints, and situational detail that AI needs to produce relevant and accurate outputs. Effective performance means consistently enriching AI interactions with the right level of contextual detail to anchor outputs to real-world requirements.
Novice: Provides little to no context when engaging AI, treating it as a general-purpose tool with pre-existing knowledge of their situation.
Behavioural Indicators
  • Asks AI a question or assigns a task without providing any background on who they are, what they are trying to achieve, or what constraints apply.
  • Accepts a generic AI output without recognising that richer context would have produced a more relevant result.
  • Assumes AI has access to organisational or role-specific information that has not been provided in the prompt.
Advanced Beginner: Begins to recognise the value of providing context after experiencing outputs that lack relevance or specificity.
Behavioural Indicators
  • Adds role or organisational context to a follow-up prompt after the initial AI output is too generic to be useful.
  • Identifies that AI performed better on a task when background information was provided and attempts to replicate this.
  • Provides some context in prompts but omits key details such as audience, purpose, or constraints.
Competent: Provides context consistently and deliberately as part of standard prompt preparation.
Behavioural Indicators
  • Opens an AI interaction by establishing relevant context — including role, task purpose, audience, and key constraints — before issuing the primary instruction.
  • Identifies the specific types of context that different task categories require and applies these consistently.
  • Adjusts the level of context provided based on task complexity.
Proficient: Manages context provision strategically across complex, multi-turn AI interactions.
Behavioural Indicators
  • Structures the opening of a complex AI interaction to establish a rich contextual foundation that orients all subsequent exchanges.
  • Monitors AI outputs across a multi-turn interaction for signs of contextual drift and reintroduces key context when needed.
  • Coaches colleagues on the types of context that have the greatest impact on output quality for different categories of work task.
Expert: Defines context provision standards for AI use across the organisation.
Behavioural Indicators
  • Develops context-setting frameworks or templates that teams use to prepare AI interactions for common task types.
  • Identifies patterns of context omission contributing to poor AI output quality at team or function level and leads targeted remediation.
  • Embeds context provision standards into AI workflow design, onboarding processes, and organisational AI use guidelines.
2.3 Output Specification
The ability to clearly define the format, structure, length, tone, and quality criteria of the AI output required. Effective performance means routinely translating task requirements into precise output parameters that AI can act on directly.
Novice: Does not specify output requirements when engaging AI, accepting whatever format the AI defaults to.
Behavioural Indicators
  • Submits a task request to AI without specifying the desired format, length, or tone of the output.
  • Manually reformats an AI output without recognising that better specification upfront would have reduced this effort.
  • Treats the AI's default output format as the only available option rather than as a starting point that can be directed.
Advanced Beginner: Begins to specify basic output requirements after experiencing outputs that required significant rework.
Behavioural Indicators
  • Adds a format or length instruction to a prompt after receiving an output that was poorly structured.
  • Specifies the most visible output parameters but does not define tone, audience register, or quality standards.
  • Recognises after receiving an output that additional specification would have improved it.
Competent: Specifies output requirements comprehensively and consistently as part of standard prompt construction.
Behavioural Indicators
  • Includes explicit output specification covering format, length, tone, and audience register in prompts for all tasks of moderate complexity or above.
  • Adjusts output specification based on the downstream use of the AI output.
  • Uses the defined output specification as a benchmark against which to evaluate the AI output.
Proficient: Applies sophisticated output specification to complex, multi-part tasks.
Behavioural Indicators
  • Defines output specifications for each stage of a multi-part AI workflow, ensuring consistency of format, tone, and quality across all outputs.
  • Identifies mid-interaction when AI output is deviating from specification and reissues or refines the specification.
  • Advises colleagues on how to translate ambiguous quality requirements into precise, actionable output specifications.
Expert: Establishes output specification standards at an organisational level.
Behavioural Indicators
  • Develops output specification frameworks or templates that standardise format, tone, and quality requirements for AI-assisted tasks across the organisation.
  • Identifies patterns of specification failure contributing to poor or inconsistent AI outputs at team level.
  • Embeds output specification standards into AI use guidelines, workflow documentation, and quality assurance processes.
2.4 Iterative Refinement
The ability to treat the first AI output as a starting point and to systematically improve it through follow-up instruction, feedback, and progressive clarification. Effective performance means approaching AI interactions as an iterative dialogue.
Novice: Accepts the first AI output as the final result, regardless of its quality or alignment with the intended goal.
Behavioural Indicators
  • Uses the first AI output without any follow-up interaction, even when it only partially meets the task requirement.
  • Abandons an AI interaction after receiving a poor initial output rather than attempting to refine or redirect.
  • Does not recognise that follow-up prompts can substantially improve the quality of an initial AI output.
Advanced Beginner: Begins to experiment with follow-up prompts to improve initial AI outputs.
Behavioural Indicators
  • Sends a follow-up prompt to adjust an AI output after recognising it does not fully meet requirements.
  • Makes incremental, reactive changes rather than diagnosing the underlying cause of the shortfall.
  • Engages in two or three follow-up exchanges before accepting an output, without a deliberate plan for directing the refinement process.
Competent: Approaches AI interactions as a deliberate iterative process.
Behavioural Indicators
  • Reviews an initial AI output against the defined goal and identifies specific gaps before issuing a follow-up instruction.
  • Issues follow-up prompts that address one clearly identified deficiency at a time.
  • Maintains a consistent goal orientation across a multi-turn refinement process.
Proficient: Manages iterative refinement across complex, extended AI interactions with strategic intent.
Behavioural Indicators
  • Plans a refinement strategy at the outset of a complex AI task, anticipating where iteration will be needed.
  • Recognises when iterative refinement has reached diminishing returns and makes a deliberate decision to reframe the task or accept the output.
  • Models effective iterative refinement for colleagues, demonstrating how to diagnose output gaps and translate them into targeted follow-up instructions.
Expert: Establishes iterative refinement as an organisational practice and capability.
Behavioural Indicators
  • Develops guidance and worked examples that demonstrate effective iterative refinement across common task types.
  • Identifies patterns of single-exchange AI use at team level that are limiting output quality and designs interventions to shift practice.
  • Contributes to organisational AI use standards by defining iterative refinement as an expected practice.
2.5 Example and Constraint Setting
The ability to use worked examples, reference materials, and explicit boundaries to shape and limit AI outputs more precisely. Effective performance means routinely using examples and constraints as deliberate instructional tools.
Novice: Does not use examples or constraints when prompting AI, relying solely on verbal description.
Behavioural Indicators
  • Describes a desired output in abstract terms without providing a reference example or specifying what the output should not include.
  • Receives an AI output that deviates from requirements in a predictable way without recognising that an explicit constraint would have prevented it.
  • Does not consider using existing documents, prior outputs, or worked examples as reference material when instructing AI.
Advanced Beginner: Begins to use examples or constraints reactively after receiving an output that missed the mark.
Behavioural Indicators
  • Adds a reference example to a follow-up prompt after receiving an AI output that was structured or toned incorrectly.
  • Introduces a basic constraint after an initial output exceeded scope or included unwanted content.
  • Recognises that providing an example improved a particular AI output and attempts to apply this approach to similar tasks.
Competent: Uses examples and constraints deliberately and consistently as standard elements of prompt construction.
Behavioural Indicators
  • Includes at least one reference example or worked model in prompts for tasks where output format, tone, or structure is critical.
  • Defines explicit constraints covering scope, excluded content, format limits, and audience boundaries in prompts for tasks of moderate complexity or above.
  • Reviews AI outputs against the examples and constraints provided and identifies where additional precision would have produced a better result.
Proficient: Uses examples and constraints as a sophisticated instructional strategy across complex, multi-part tasks.
Behavioural Indicators
  • Sequences multiple examples across a multi-turn interaction to progressively narrow AI output towards a specific standard.
  • Designs constraint sets that manage competing requirements within a single AI task.
  • Advises colleagues on the selection and use of examples and constraints as precision tools for improving AI output quality.
Expert: Develops example and constraint libraries that teams can deploy at scale.
Behavioural Indicators
  • Builds and maintains example libraries and constraint templates for common AI task types.
  • Identifies patterns of output failure attributable to missing or poorly chosen examples and constraints, and develops targeted remediation resources.
  • Embeds example and constraint setting into organisational prompt standards, workflow documentation, and AI capability development programmes.
2.6
Interaction Mode Adaptation
The ability to adjust communication style, instructional approach, and level of direction based on the mode of AI interaction being used. Effective performance means fluidly adapting how instructions are framed and delivered to match the demands of the interaction mode in use.
Novice: Uses a single, undifferentiated communication style across all AI interactions regardless of mode, task type, or complexity.
Behavioural Indicators
  • Frames every AI interaction as a simple instruction-and-response exchange, regardless of whether the task calls for collaborative dialogue or structured configuration.
  • Does not adjust the level of detail, structure, or tone of instructions when switching between task types.
  • Treats all AI interactions as equivalent in terms of how instructions should be framed and delivered.
Advanced Beginner: Begins to recognise that different AI tasks work better with different instructional approaches.
Behavioural Indicators
  • Notices that a more conversational approach produces better results on exploratory tasks than a single detailed prompt.
  • Attempts to write more structured instructions when setting up a recurring AI task.
  • Asks colleagues how they structure instructions for different types of AI tasks after noticing their approach varies across situations.
Competent: Adapts communication style and instructional approach deliberately based on the mode of AI interaction in use.
Behavioural Indicators
  • Writes structured, comprehensive instructions for automation tasks, recognising that the AI will execute without further human input.
  • Adopts a more iterative, exploratory communication style for augmentation tasks.
  • Drafts clear, persistent configuration instructions for agency-mode setups, specifying triggers, boundaries, and expected output standards.
Proficient: Manages interaction mode adaptation fluidly across complex tasks, switching communication styles in real time.
Behavioural Indicators
  • Identifies mid-task when the mode of AI interaction has shifted and adjusts instructional style accordingly.
  • Designs communication strategies for complex, multi-mode AI interactions that specify a different instructional approach for each phase.
  • Reviews AI interaction transcripts with colleagues and identifies where a shift in communication style would have improved output quality or efficiency.
Expert: Defines interaction mode adaptation as an organisational capability.
Behavioural Indicators
  • Develops instructional style guides for each AI interaction mode, providing teams with concrete communication patterns.
  • Identifies where mismatched communication styles are reducing AI output quality or efficiency at team level.
  • Contributes to organisational AI use standards by defining interaction mode adaptation as a core communication skill.
3
Domain 3
Discernment
Discernment covers the ability to accurately assess the quality, relevance, accuracy, and appropriateness of AI outputs and behaviours. It is the critical evaluation layer of AI fluency — the capacity to interrogate what AI produces rather than accept it at face value.
3.1
Output Accuracy Evaluation
The ability to assess whether an AI output is factually correct, internally consistent, and free from fabricated or misleading content. Effective performance means applying a systematic and sceptical review process to AI outputs before acting on or sharing them.
Novice: Accepts AI outputs as accurate by default, treating confident presentation as a reliable indicator of correctness.
Behavioural Indicators
  • Uses an AI-generated output in work deliverables without checking any of the factual claims it contains.
  • Treats a well-written, fluent AI output as reliable, equating writing quality with factual accuracy.
  • Does not recognise when an AI output contains internally inconsistent information or fabricated references.
Advanced Beginner: Develops an awareness that AI outputs can contain errors and begins to apply basic checks.
Behavioural Indicators
  • Cross-checks specific factual claims in an AI output against known information or a readily available source when they seem uncertain.
  • Identifies an obvious factual error and corrects it, without applying a broader review of the rest of the content.
  • Becomes more cautious about AI accuracy after encountering a specific instance of an AI error, but does not generalise this into a consistent review practice.
Competent: Applies a deliberate and consistent accuracy review process to AI outputs before use.
Behavioural Indicators
  • Reviews all factual claims, figures, and references in an AI output against credible sources before incorporating it into a work deliverable.
  • Identifies the specific accuracy risks most relevant to a task type and focuses review effort accordingly.
  • Flags AI outputs that contain unverifiable claims or suspicious references rather than using or sharing them without resolution.
Proficient: Conducts rigorous accuracy evaluation across complex, multi-part AI outputs with speed and consistency.
Behavioural Indicators
  • Designs task-specific accuracy review checklists that identify the highest-risk content areas in AI outputs for a given task type.
  • Identifies patterns of AI inaccuracy across multiple outputs on the same task type and adjusts prompting or workflow design to reduce recurrence.
  • Coaches colleagues through accuracy evaluation on task types where AI failure modes are well-established.
Expert: Establishes accuracy evaluation standards and processes at an organisational level.
Behavioural Indicators
  • Develops accuracy evaluation frameworks and review checklists that are adopted across teams for common AI-assisted task categories.
  • Identifies patterns of accuracy failure at team or workflow level and leads redesign of review processes.
  • Contributes to organisational AI governance by defining accuracy review requirements for high-stakes AI-assisted outputs.
3.2
Relevance and Fit Assessment
The ability to evaluate whether an AI output is genuinely useful and appropriately suited to the specific context, audience, and purpose for which it was requested. Effective performance means consistently assessing AI outputs not just for correctness but for genuine fitness for purpose.
Novice: Evaluates AI outputs primarily for surface-level completeness rather than contextual appropriateness.
Behavioural Indicators
  • Uses an AI output without assessing whether its tone, depth, or framing is appropriate for the intended audience.
  • Accepts an AI output that addresses the general topic without checking whether it addresses the specific question or need.
  • Does not recognise when an AI output is technically responsive but substantively misaligned with the actual purpose of the task.
Advanced Beginner: Begins to assess AI outputs for basic contextual fit after experiencing situations where technically adequate outputs were poorly received.
Behavioural Indicators
  • Reviews an AI output for obvious tone or register mismatches before using it in a specific context.
  • Identifies when an AI output addresses the wrong scope relative to what was needed.
  • Asks a colleague to review an AI output for contextual appropriateness when uncertain.
Competent: Applies a consistent relevance and fit assessment to AI outputs as a standard step before use.
Behavioural Indicators
  • Assesses each AI output against defined criteria — including audience, purpose, tone, depth, and scope — before deciding whether it is fit for use.
  • Identifies relevance gaps between an AI output and the actual task need, even when the output is technically accurate.
  • Adjusts prompts or follow-up instructions based on a relevance assessment, targeting the specific dimension of fit that requires improvement.
Proficient: Conducts nuanced relevance and fit assessment across complex outputs and varied audience contexts.
Behavioural Indicators
  • Evaluates AI outputs for alignment with the unstated assumptions, priorities, and expectations of the intended audience.
  • Identifies when an AI output reflects a generic or default framing that does not account for the specific organisational or stakeholder context.
  • Provides structured feedback to colleagues on relevance and fit issues in AI outputs.
Expert: Defines relevance and fit standards for AI-assisted outputs at an organisational level.
Behavioural Indicators
  • Develops relevance and fit evaluation criteria for common AI-assisted output types.
  • Identifies patterns of contextual misalignment in AI outputs across a team or function and leads design of workflow or prompting improvements.
  • Embeds relevance and fit assessment into AI output quality standards, peer review processes, and organisational AI use guidelines.
3.3
Reasoning and Logic Scrutiny
The ability to evaluate the quality of the reasoning, argument structure, and logical coherence within an AI output. Effective performance means actively interrogating the reasoning within AI outputs and identifying where arguments are weak, incomplete, or unsupported.
Novice: Does not evaluate the reasoning structure of AI outputs, focusing attention only on surface content.
Behavioural Indicators
  • Uses an AI-generated analysis or recommendation without assessing whether the reasoning that supports it is sound.
  • Accepts an AI conclusion as valid because it aligns with prior expectations, without checking whether the argument is logically supported.
  • Does not distinguish between an AI output that states a conclusion and one that provides a well-reasoned justification for it.
Advanced Beginner: Begins to question AI reasoning when conclusions seem counterintuitive or when outputs are being used in higher-stakes contexts.
Behavioural Indicators
  • Questions an AI output's conclusion when it seems surprising or contradicts existing knowledge, and asks AI to explain its reasoning.
  • Identifies an obvious logical gap or unsupported claim when it is pointed out by a colleague, but does not proactively seek these out.
  • Seeks a second opinion on AI-generated analysis before acting on it in situations where the stakes of being wrong are high.
Competent: Applies a consistent and structured approach to evaluating the reasoning quality of AI outputs.
Behavioural Indicators
  • Reviews the reasoning structure of AI-generated analyses by tracing the argument from evidence to conclusion and identifying any unsupported steps.
  • Asks AI to explain its reasoning or justify a specific conclusion when the basis for an output is not transparent.
  • Identifies when an AI output presents a confident conclusion without adequate supporting reasoning and flags this before the output is used.
Proficient: Conducts rigorous reasoning scrutiny across complex, multi-part AI outputs.
Behavioural Indicators
  • Deconstructs the argument structure of a complex AI output, identifying each logical step and assessing whether the transitions between them are valid.
  • Identifies unacknowledged assumptions embedded in AI reasoning and assesses whether these assumptions hold in the specific context.
  • Coaches colleagues in reasoning scrutiny, demonstrating how to distinguish between AI outputs that support conclusions well and those that do not.
Expert: Establishes reasoning quality standards for AI-assisted analytical and advisory outputs at an organisational level.
Behavioural Indicators
  • Develops reasoning evaluation frameworks that teams apply when reviewing AI-generated analyses, recommendations, or decision-support outputs.
  • Identifies patterns of poor reasoning quality in AI outputs being used across a team or function and leads design of review processes.
  • Contributes to organisational AI governance by defining reasoning quality standards for high-stakes AI-assisted outputs.
3.4
Bias and Limitation Recognition
The ability to identify when AI outputs reflect underlying biases, gaps in training data, or structural limitations that may distort, omit, or misrepresent information. Effective performance means routinely interrogating AI outputs for bias signals and adjusting use or prompting accordingly.
Novice: Does not consider bias or structural limitations when evaluating AI outputs.
Behavioural Indicators
  • Uses AI-generated content that reflects a narrow or culturally specific perspective without recognising or accounting for this.
  • Does not consider whether an AI output may have omitted relevant viewpoints, populations, or data.
  • Assumes that because an AI output sounds balanced or comprehensive, it is free from bias or significant omission.
Advanced Beginner: Begins to recognise that AI outputs can reflect bias or limited perspectives, particularly in contexts where bias concerns have been explicitly raised.
Behavioural Indicators
  • Identifies an obvious bias or imbalance in an AI output when the topic is one where bias concerns are well-known.
  • Asks AI to consider alternative perspectives or reframe an output after recognising a limited or skewed viewpoint.
  • Seeks guidance from a colleague or policy resource when uncertain whether an AI output contains problematic bias.
Competent: Applies a consistent and proactive approach to bias and limitation recognition across AI outputs.
Behavioural Indicators
  • Reviews AI outputs for signs of representational bias, framing asymmetry, or systematic omission before using them in communications or decision-making.
  • Identifies the specific types of bias most relevant to a task type or subject matter and applies targeted scrutiny to those risk areas.
  • Adjusts prompts or requests additional perspectives from AI when an initial output reflects a limited or potentially biased framing.
Proficient: Conducts nuanced bias and limitation assessment across complex AI outputs and sensitive subject domains.
Behavioural Indicators
  • Identifies implicit bias in AI outputs that is not immediately obvious — such as subtle framing effects or assumptions embedded in the language used.
  • Evaluates AI outputs for knowledge limitation risks and supplements AI-generated content with additional sources where gaps are identified.
  • Advises colleagues on bias recognition for specific task types or subject areas.
Expert: Defines bias and limitation evaluation standards for AI-assisted outputs at an organisational level.
Behavioural Indicators
  • Develops bias evaluation criteria and review processes for common AI-assisted output types.
  • Identifies patterns of bias-related risk in AI outputs across a team or function and leads the design of targeted remediation.
  • Contributes to organisational AI governance by defining bias and limitation review requirements for sensitive or high-stakes outputs.
3.5
Missing Context Identification
The ability to recognise when an AI output is incomplete, insufficiently nuanced, or based on an inadequate understanding of the specific context in which it will be used. Effective performance means actively interrogating AI outputs for what is absent, not just evaluating what is present.
Novice: Evaluates AI outputs based on what is present rather than what may be missing.
Behavioural Indicators
  • Uses an AI output without considering whether it has omitted information that would be important in the specific context.
  • Does not ask follow-up questions to check whether an AI output is complete or whether relevant considerations have been left out.
  • Assumes that a comprehensive-looking AI output covers the topic fully, without verifying whether gaps exist.
Advanced Beginner: Begins to question the completeness of AI outputs in situations where a gap becomes apparent or causes a problem.
Behavioural Indicators
  • Identifies a significant omission in an AI output after the gap causes a problem downstream.
  • Asks AI a follow-up question to fill an obvious gap, without systematically checking for other omissions.
  • Recognises that an AI output addressed the topic generally but missed specific constraints or conditions relevant to their context.
Competent: Applies a consistent and proactive approach to identifying missing context in AI outputs.
Behavioural Indicators
  • Reviews AI outputs against the full set of task requirements and contextual constraints to identify gaps.
  • Asks targeted follow-up questions to probe areas of an AI output that appear underdeveloped, incomplete, or insufficiently contextualised.
  • Identifies when an AI output reflects a generic treatment of a topic and requests a more contextually specific response before using it.
Proficient: Conducts systematic completeness assessment across complex AI outputs, identifying missing context at multiple levels.
Behavioural Indicators
  • Develops task-specific completeness checklists that identify the contextual dimensions most likely to be missing from AI outputs.
  • Identifies when an AI output has addressed the explicit question but missed the implicit need, and addresses this through targeted follow-up.
  • Coaches colleagues in missing context identification, demonstrating how to distinguish between an output that looks complete and one that is genuinely fit for purpose.
Expert: Establishes completeness evaluation standards for AI-assisted outputs at an organisational level.
Behavioural Indicators
  • Develops completeness evaluation frameworks for common AI-assisted output types.
  • Identifies patterns of contextual omission in AI outputs used across a team or function and leads redesign of prompting, review, or workflow practices.
  • Contributes to organisational AI governance by defining completeness standards for high-stakes AI-assisted outputs.
3.6
Appropriate Use Judgement
The ability to make sound decisions about when, how, and in what form AI outputs can be used — weighing quality, risk, audience, and context before acting on or sharing what AI has produced. Effective performance means consistently exercising informed judgement at the point of use.
Novice: Makes use decisions about AI outputs based on surface quality alone.
Behavioural Indicators
  • Shares or acts on an AI output without assessing whether it has been adequately reviewed relative to the stakes of the task or sensitivity of the audience.
  • Discards an AI output that contains useful content because surface quality is low, without recognising that it could be refined or partially used.
  • Does not consider the downstream consequences of using an AI output without adequate review.
Advanced Beginner: Begins to factor basic risk and audience considerations into decisions about using AI outputs.
Behavioural Indicators
  • Applies greater scrutiny to AI outputs before sharing them with senior stakeholders or external audiences.
  • Hesitates to use an AI output in a sensitive context and seeks a second opinion before proceeding.
  • Recognises after the fact that an AI output used in a particular context required more careful review than it received.
Competent: Makes deliberate and consistent use judgements about AI outputs, weighing quality, risk, audience, and context explicitly.
Behavioural Indicators
  • Assesses the risk profile of a task — including stakes, audience sensitivity, and potential for harm — before deciding the level of review an AI output requires.
  • Makes an explicit decision to use, revise, supplement, or discard an AI output based on a review assessment.
  • Documents the basis for use decisions on high-stakes AI-assisted outputs.
Proficient: Exercises nuanced appropriate use judgement across complex, high-stakes, and ambiguous situations.
Behavioural Indicators
  • Navigates competing pressures — such as urgency and quality risk — when making use decisions about AI outputs, applying a principled and transparent rationale.
  • Identifies when an AI output is partially fit for use and makes a deliberate decision about which elements can be used, which require revision, and which should be replaced.
  • Advises colleagues on appropriate use judgements in ambiguous or high-stakes situations.
Expert: Defines appropriate use standards for AI-assisted outputs at an organisational level.
Behavioural Indicators
  • Develops appropriate use decision frameworks for common AI-assisted output types.
  • Identifies patterns of inappropriate use — including over-reliance on unreviewed AI outputs in high-stakes contexts — and leads design of governance improvements.
  • Contributes to organisational AI governance by defining appropriate use standards and ensuring they are applied consistently across high-risk functions.
4
Domain 4
Diligence
Diligence covers the ability to take active responsibility for how AI is used and what is done with AI outputs — encompassing ethical conduct, transparency, privacy, security, and accountability. Where Discernment asks ‘is this output good enough?’, Diligence asks ‘am I using AI in a way that is responsible, honest, and safe?’
4.1
Transparency and Disclosure
The ability to make sound, consistent judgements about when and how to disclose AI involvement in work outputs. Effective performance means applying clear, principled disclosure decisions that are appropriate to the context, relationship, and stakes involved.
Novice: Does not consider disclosure as a relevant dimension of AI use.
Behavioural Indicators
  • Submits AI-generated or AI-assisted work without indicating AI involvement, in contexts where disclosure would be expected or required.
  • Does not consider whether a colleague, manager, or stakeholder would want to know that AI was involved in producing a piece of work.
  • Is unaware that non-disclosure of AI involvement can carry professional, reputational, or legal consequences in certain contexts.
Advanced Beginner: Begins to recognise that disclosure of AI involvement is relevant in some contexts.
Behavioural Indicators
  • Discloses AI involvement when explicitly required by a policy, manager, or platform, but does not consider disclosure in contexts where it has not been directly mandated.
  • Becomes more attentive to disclosure expectations after observing or experiencing a situation where undisclosed AI use caused a problem.
  • Asks a colleague or manager whether AI disclosure is expected for a specific piece of work when uncertain.
Competent: Applies a consistent and principled approach to disclosure across work contexts.
Behavioural Indicators
  • Assesses each AI-assisted output against relevant disclosure considerations — including organisational policy, professional standards, and stakeholder expectations — before deciding how to disclose.
  • Discloses AI involvement proactively in contexts where it is material to the recipient's understanding of the work.
  • Applies a consistent disclosure standard across similar work contexts rather than making ad hoc decisions without principled criteria.
Proficient: Manages disclosure decisions with nuance across complex, varied, and high-stakes work contexts.
Behavioural Indicators
  • Navigates disclosure decisions in ambiguous contexts by applying a principled framework rather than defaulting to the path of least resistance.
  • Advises colleagues on disclosure decisions in complex or sensitive situations, articulating the factors that should be weighed.
  • Identifies when organisational disclosure guidance is insufficient for the situations teams are encountering and raises this to prompt policy development.
Expert: Defines disclosure standards for AI use at an organisational level.
Behavioural Indicators
  • Develops AI disclosure guidelines that provide teams with clear, context-sensitive standards for when and how to disclose AI involvement.
  • Identifies patterns of non-disclosure or inconsistent disclosure practice across a team or function and leads design of targeted interventions.
  • Contributes to organisational AI governance by defining disclosure requirements and embedding these into relevant policies, approval processes, and professional standards.
4.2
Data Privacy and Security in AI Use
The ability to identify and manage the privacy and security risks that arise when sharing information with AI systems. Effective performance means consistently applying sound data hygiene practices to AI interactions and treating privacy and security as non-negotiable constraints on how AI is used.
Novice: Does not consider data privacy or security implications when interacting with AI.
Behavioural Indicators
  • Inputs sensitive, confidential, or personally identifiable information into an AI system without considering whether this is appropriate or permitted.
  • Is unaware that information shared with AI systems may be stored, processed, or used in ways that carry privacy or security implications.
  • Does not consult organisational guidance or policy when uncertain about whether a particular type of data can be shared with an AI tool.
Advanced Beginner: Develops an awareness of data privacy risks in AI use after exposure to organisational guidance or a specific incident.
Behavioural Indicators
  • Avoids inputting obviously sensitive information into AI systems after receiving guidance that this is inappropriate.
  • Checks whether a specific AI tool is approved for organisational use before sharing work-related information with it.
  • Asks a colleague or IT contact whether a particular type of data can be shared with an AI tool when uncertain.
Competent: Applies consistent and deliberate data privacy and security practices to AI interactions.
Behavioural Indicators
  • Identifies the privacy and security classification of information before including it in an AI interaction and applies the appropriate handling standard.
  • Anonymises, aggregates, or omits sensitive data from AI inputs when the task can be completed without sharing the sensitive detail directly.
  • Raises a concern with a manager or data governance contact when an AI task appears to require sharing information that may be subject to restrictions.
Proficient: Manages data privacy and security across complex AI workflows involving multiple data types, systems, and stakeholders.
Behavioural Indicators
  • Reviews a proposed AI workflow for data privacy and security risks before implementation.
  • Designs AI-assisted workflows that achieve task objectives without requiring input of data subject to privacy or security restrictions.
  • Advises colleagues on data privacy and security considerations for AI interactions involving unfamiliar data types, systems, or jurisdictional requirements.
Expert: Defines data privacy and security standards for AI use at an organisational level.
Behavioural Indicators
  • Develops data handling guidelines for AI use that specify what categories of information can and cannot be shared with AI systems, under what conditions, and with what controls.
  • Identifies patterns of data privacy or security risk in AI use across a team or function and leads design of targeted controls, workflow changes, or policy updates.
  • Contributes to organisational AI governance by aligning AI data handling standards with relevant legal, regulatory, and professional obligations.
4.3
Intellectual Property and Attribution Awareness
The ability to recognise and appropriately manage the intellectual property and attribution considerations that arise from using AI to generate, adapt, or incorporate content. Effective performance means applying sound IP and attribution judgements to AI-assisted work.
Novice: Does not consider intellectual property or attribution implications when using AI-generated content.
Behavioural Indicators
  • Incorporates AI-generated content into work outputs without considering whether the content may reproduce third-party material or raise ownership questions.
  • Does not consider whether AI-generated work is subject to any organisational policies on IP ownership.
  • Presents AI-generated content as entirely original work without considering whether attribution of AI involvement is appropriate or required.
Advanced Beginner: Develops an awareness that AI use can raise IP and attribution questions.
Behavioural Indicators
  • Checks whether organisational policy addresses the use of AI-generated content in work products before publishing or sharing externally.
  • Identifies an obvious IP concern and raises it before the output is used or shared.
  • Asks a manager or legal contact for guidance on attribution when uncertain whether AI involvement needs to be acknowledged.
Competent: Applies a consistent and principled approach to IP and attribution considerations in AI-assisted work.
Behavioural Indicators
  • Reviews AI-generated content for potential IP concerns — including reproduction of third-party material and ownership implications — before incorporating it into work outputs.
  • Applies organisational IP policy to AI-assisted work products, including decisions about ownership, licensing, and external publication.
  • Makes deliberate attribution decisions for AI-assisted work, disclosing AI involvement where required by policy, professional standards, or the reasonable expectations of the recipient.
Proficient: Manages IP and attribution considerations across complex, multi-stakeholder, and high-stakes AI-assisted work.
Behavioural Indicators
  • Identifies IP and attribution risks in complex AI-assisted projects and manages these proactively.
  • Navigates ambiguous IP situations by applying a principled framework and seeking appropriate expert input.
  • Advises colleagues on IP and attribution decisions arising from AI use in complex or unfamiliar work contexts.
Expert: Defines IP and attribution standards for AI-assisted work at an organisational level.
Behavioural Indicators
  • Develops IP and attribution guidelines for AI-assisted work that address ownership, licensing, external publication, and attribution disclosure.
  • Identifies patterns of IP risk in AI use across a team or function and leads design of governance improvements.
  • Contributes to organisational AI governance by aligning IP standards for AI use with relevant legal obligations and professional requirements.
4.4
Ethical Consequence Awareness
The ability to anticipate, identify, and respond to the ethical implications of using AI in a given context — including potential harms to individuals, groups, or communities. Effective performance means bringing active ethical consideration to AI use decisions, not treating ethics as an afterthought.
Novice: Does not consider the ethical implications of AI use as a relevant dimension of their own practice.
Behavioural Indicators
  • Uses AI to complete a task that affects people without considering whether the use of AI in that context is appropriate or may cause harm.
  • Does not consider whether the outputs AI produces could disadvantage, misrepresent, or cause harm to individuals or groups if acted upon.
  • Assumes that ethical responsibility for AI use rests with the organisation or technology provider rather than with the individual using the tool.
Advanced Beginner: Begins to recognise that AI use can carry ethical implications, particularly in contexts involving people or significant decisions.
Behavioural Indicators
  • Pauses to consider the ethical implications of using AI in a task when a colleague raises a concern or when the sensitivity of the task is obvious.
  • Identifies an ethical concern about a proposed AI use and raises it with a manager or colleague before proceeding.
  • Seeks guidance on the ethical appropriateness of a specific AI application when uncertain.
Competent: Applies consistent ethical consideration to AI use decisions across a range of work contexts.
Behavioural Indicators
  • Considers the potential ethical implications of an AI use decision — including who may be affected and how — before proceeding with tasks that involve people, sensitive information, or significant consequences.
  • Identifies specific ethical risks relevant to a task type and addresses these before acting.
  • Raises ethical concerns about proposed AI use with relevant stakeholders when personal assessment suggests the use may cause harm.
Proficient: Conducts nuanced ethical analysis of AI use across complex, multi-stakeholder, and high-stakes contexts.
Behavioural Indicators
  • Identifies ethical risks in AI use that are not immediately obvious — such as aggregate harms that only emerge at scale, or downstream consequences for parties not directly involved.
  • Evaluates proposed AI applications against multiple ethical dimensions simultaneously and makes a considered judgement about whether and how to proceed.
  • Advises colleagues on ethical considerations arising from complex or novel AI use contexts.
Expert: Defines ethical standards for AI use at an organisational level.
Behavioural Indicators
  • Develops ethical use frameworks for AI that provide teams with structured approaches to identifying, assessing, and responding to ethical risks.
  • Identifies patterns of ethical risk in AI use across a team or function and leads design of governance and capability improvements.
  • Contributes to organisational AI governance by defining ethical standards for AI use and ensuring these are reflected in policy, decision-making frameworks, training, and leadership expectations.
4.5
Accountability and Ownership
The ability to maintain clear personal and professional responsibility for AI-assisted work — including the decisions made, the outputs produced, and the consequences that follow — regardless of the degree to which AI contributed to the result. Effective performance means consistently owning the work that AI assists with.
Novice: Implicitly or explicitly attributes responsibility for AI-assisted work to the AI system rather than to themselves.
Behavioural Indicators
  • Attributes errors or problems in an AI-assisted output to the AI system rather than taking responsibility for the quality of the work submitted.
  • Uses AI involvement as a justification for distancing themselves from the content or quality of a work output when questioned about it.
  • Does not consider themselves fully accountable for decisions or outputs that were significantly shaped by AI-generated content.
Advanced Beginner: Begins to recognise that accountability for AI-assisted work rests with the human.
Behavioural Indicators
  • Accepts responsibility for an AI-assisted output when directly asked, even if uncertain about all aspects of the content AI contributed.
  • Reviews AI-assisted work more carefully before submission after recognising that they will be held accountable for its quality.
  • Acknowledges AI involvement in a piece of work proactively when questioned, without using this as a way to avoid personal accountability.
Competent: Takes clear and consistent ownership of AI-assisted work across all professional contexts.
Behavioural Indicators
  • Reviews and takes personal ownership of all AI-assisted outputs before submission, applying the same quality and accountability standards as for entirely human-authored work.
  • Is prepared to explain and justify the content, reasoning, and conclusions of AI-assisted work when questioned.
  • Does not present AI involvement as a mitigating factor when an AI-assisted output falls short of the required standard.
Proficient: Models clear accountability for AI-assisted work across complex, high-stakes, and multi-party contexts.
Behavioural Indicators
  • Establishes clear ownership and accountability structures in team-based AI workflows, specifying who is responsible for reviewing and approving each AI-assisted output before use.
  • Maintains personal accountability for AI-assisted work in high-stakes contexts where the consequences of errors are significant.
  • Coaches colleagues on accountability expectations for AI-assisted work, reinforcing that AI involvement does not reduce professional responsibility.
Expert: Defines accountability standards for AI-assisted work at an organisational level.
Behavioural Indicators
  • Develops accountability frameworks for AI-assisted work that specify ownership, review responsibilities, and escalation pathways across different types of AI use and output.
  • Identifies patterns of diffuse or absent accountability in AI use across a team or function and leads design of governance structures that establish clear human ownership.
  • Contributes to organisational AI governance by embedding accountability standards into AI policy, leadership expectations, and performance frameworks.
4.6
Continuous and Critical Reflection
The ability to regularly evaluate one's own AI use practices — identifying what is working, what is not, and how one's approach, judgement, and habits should evolve in response to experience, new information, and changing AI capability. Effective performance means treating every AI interaction as an opportunity to learn.
Novice: Does not reflect on AI use practices after interactions are complete.
Behavioural Indicators
  • Completes AI-assisted tasks without pausing to consider what worked well, what did not, and what might be done differently next time.
  • Repeats prompting approaches, delegation decisions, or use habits that have previously produced poor results.
  • Does not seek out information, guidance, or feedback to develop AI use capability beyond their current level of practice.
Advanced Beginner: Begins to reflect on AI use practices after notable successes or failures.
Behavioural Indicators
  • Reflects on why a specific AI interaction produced a poor outcome after the fact, identifying one or two things that could be done differently in future.
  • Seeks feedback on AI-assisted work from a colleague after receiving unexpected criticism, connecting the feedback to their AI use approach.
  • Reads or engages with guidance on AI use best practice after encountering a specific challenge or gap in their current approach.
Competent: Applies regular and deliberate reflection to AI use practices as a consistent habit.
Behavioural Indicators
  • Regularly reviews AI interactions to identify patterns — in prompting, delegation, evaluation, or disclosure practice — that are consistently producing good or poor results.
  • Actively seeks feedback on AI-assisted work from colleagues and uses this feedback to refine specific aspects of AI use practice.
  • Engages proactively with developments in AI capability, organisational AI guidance, and emerging best practice.
Proficient: Conducts deep, structured reflection on AI use practice across multiple dimensions of fluency.
Behavioural Indicators
  • Periodically evaluates AI use practice across all four fluency dimensions, identifying specific strengths and development priorities.
  • Uses structured self-assessment or peer review to surface blind spots in AI use practice that are not visible through personal reflection alone.
  • Adapts AI use approach in response to changes in AI capability, organisational context, or professional standards.
Expert: Creates the conditions for continuous reflection on AI use practice across teams and the organisation.
Behavioural Indicators
  • Designs team or organisational mechanisms — such as structured retrospectives, peer review processes, or AI use learning communities — that enable ongoing collective reflection.
  • Identifies where habitual AI use patterns across a team or function are limiting performance or creating unrecognised risk, and leads targeted interventions.
  • Contributes to organisational AI learning culture by modelling continuous reflection and creating psychological safety for others to acknowledge and learn from AI use mistakes.