AI literacy as observable capability, not technical expertise.
This framework defines AI literacy as a set of observable, trainable workplace competencies. It is designed to help individuals and organisations assess, develop, and apply AI fluency across any role or function. Each competency is mapped to the Dreyfus Model of Skill Acquisition, with five levels — Novice, Advanced Beginner, Competent, Proficient, and Expert — reflecting progressive development in autonomy, judgement, and the ability to handle complexity.
The framework is grounded in the 4D AI Fluency Framework developed by Prof. Rick Dakan (Ringling College of Art and Design) and Prof. Joseph Feller (University College Cork), produced in partnership with Anthropic and released under the CC BY-NC-SA 4.0 licence.
Behaviour-Based
Every indicator is observable and assessable, not theoretical.
Vendor-Agnostic
No references to specific AI tools or platforms.
Trainable
Every competency can be developed through deliberate practice.
Scalable
Applicable across all roles, functions, and industries.
Four Domains
The competencies are organised into the framework's four domains: Delegation, Description, Discernment, and Diligence. Each domain contains six competencies, and each competency defines three behavioural indicators at each of the five Dreyfus levels.
Delegation
Deciding what work to involve AI in, and how to divide that work between human and machine.

Task Suitability Assessment

Novice
- Attempts to use AI for a task without first considering whether it is well-suited to AI involvement.
- Selects an AI tool based on familiarity rather than fit for the task at hand.
- Accepts a poor AI output without recognising that the task may have been unsuitable for AI in the first place.

Advanced Beginner
- Identifies straightforward, well-defined tasks as suitable for AI and attempts these first.
- Avoids using AI for highly sensitive tasks after experiencing poor results, without being able to explain the underlying reason.
- Asks colleagues or consults guidance to decide whether AI is appropriate for an unfamiliar task type.

Competent
- Evaluates a task against explicit criteria — such as data sensitivity, output stakes, and task definition — before deciding to involve AI (see the sketch below).
- Selects between automation, augmentation, and agency modes based on the nature and scope of the task.
- Recognises when a task is unsuitable for AI and documents the rationale to support consistent practice across the team.

Proficient
- Scans a complex project and maps sub-tasks to appropriate modes of AI interaction before work begins.
- Adjusts delegation decisions in real time when task scope or requirements change mid-project.
- Coaches peers through suitability decisions, articulating the reasoning behind both delegation and non-delegation choices.

Expert
- Develops and refines organisational criteria for AI suitability that are applicable across multiple functions and task types.
- Identifies delegation failure patterns at team or workflow level and initiates targeted interventions.
- Contributes to policy or process design that embeds sound AI delegation decisions into standard operating procedures.
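The Competent-level indicator above refers to evaluating a task against explicit criteria before involving AI. A minimal sketch of what such a checklist might look like in code, assuming invented criteria names and decision rules that are not part of the framework:

```python
# Illustrative only: the criteria, labels, and rules are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    data_sensitivity: str  # "public", "internal", or "restricted"
    output_stakes: str     # "low", "medium", or "high"
    well_defined: bool     # is the task clearly specified?

def ai_suitability(task: Task) -> str:
    """Return a rough delegation recommendation with a documentable rationale."""
    if task.data_sensitivity == "restricted":
        return "do not delegate: restricted data may not be shared with AI"
    if not task.well_defined:
        return "clarify first: define the goal and success criteria before delegating"
    if task.output_stakes == "high":
        return "augment with review: AI may draft, but human judgement decides"
    return "suitable: well-defined, lower-stakes task"

print(ai_suitability(Task("Summarise public meeting notes", "public", "low", True)))
```

Writing the rationale into the return value mirrors the Competent-level habit of documenting why a task was or was not delegated.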
Task Decomposition

Novice
- Submits an entire, complex task to AI without considering which components are appropriate for AI involvement.
- Does not revisit a poor AI output to identify which part of the task was the root cause of failure.
- Describes tasks to AI at the same level of abstraction used when briefing a human colleague.

Advanced Beginner
- Separates a task into broad stages and uses AI for one stage at a time.
- Recognises that some sub-tasks produced better AI outputs than others but cannot consistently explain why.
- Starts to sequence AI involvement across task stages rather than treating it as a one-step interaction.

Competent
- Maps a multi-step project into discrete sub-tasks and assigns each to human effort, AI assistance, or a hybrid approach (see the sketch below).
- Identifies which sub-tasks require structured inputs to AI and prepares accordingly.
- Adjusts task decomposition based on observed AI performance during a workflow, redirecting AI involvement where needed.

Proficient
- Constructs multi-stage workflows that sequence AI inputs and human review points to optimise output quality.
- Identifies interdependencies between sub-tasks and manages the risk of AI errors propagating across a workflow.
- Adapts decomposition strategies when working with different AI modes across the same project.

Expert
- Designs reusable workflow templates that embed task decomposition logic for AI involvement across common job functions.
- Identifies task decomposition as a root cause of underperformance in AI adoption programmes and proposes structured remedies.
- Leads cross-functional work to standardise how complex processes are broken down to enable consistent AI integration.
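To make the Competent-level mapping concrete, here is a hypothetical sketch of a decomposed workflow. The stages, mode labels, and review notes are invented for illustration, not prescribed by the framework:

```python
# A hypothetical decomposition of a report-writing project into sub-tasks,
# each assigned to human effort, AI assistance, or a hybrid of the two.
workflow = [
    {"stage": "gather source material", "mode": "human",  "review": None},
    {"stage": "summarise each source",  "mode": "ai",     "review": "spot-check facts"},
    {"stage": "draft report outline",   "mode": "hybrid", "review": "human approves outline"},
    {"stage": "write first draft",      "mode": "ai",     "review": "full human edit"},
    {"stage": "final sign-off",         "mode": "human",  "review": None},
]

for step in workflow:
    review = step["review"] or "none"
    print(f"{step['stage']:<25} mode={step['mode']:<7} review={review}")
```

Making review points explicit per stage is one way to manage the Proficient-level risk of AI errors propagating across a workflow.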
AI Capability Awareness

Novice
- Delegates tasks to AI with the expectation that it will perform like a knowledgeable human expert.
- Expresses surprise when AI produces an incorrect, incomplete, or fabricated output.
- Does not update their understanding of AI capability after encountering unexpected results.

Advanced Beginner
- Identifies task types where AI has performed well or poorly in their own work and adjusts future use accordingly.
- Recognises common AI failure modes such as confident-sounding errors or loss of detail in long tasks.
- Seeks out informal guidance or examples when encountering an AI task type they have not used before.

Competent
- Identifies when a task requires current, specialised, or verifiable knowledge that AI is unlikely to have, and plans accordingly.
- Adjusts the scope and structure of AI tasks to work within known AI limitations.
- Explains AI capability limitations to colleagues when they are about to make a delegation decision likely to produce poor results.

Proficient
- Tracks changes in AI capability across model updates and adjusts team workflows and delegation norms accordingly.
- Distinguishes between fundamental AI limitations and current-model-specific weaknesses when advising on delegation decisions.
- Identifies tasks where AI capability is improving rapidly and flags these as priority areas for workflow redesign.

Expert
- Produces capability briefs or guidance documents that help teams across the organisation make informed delegation decisions.
- Identifies gaps between how AI is being used and what current AI capability actually supports, and drives corrective action.
- Engages with external sources to keep the organisation's understanding of AI capability current.
Interaction Mode Selection

Novice
- Applies the same interaction style to every AI task without considering whether a different approach might be more effective.
- Does not recognise when a task requires ongoing human-AI collaboration rather than a single delegated output.
- Attempts to fully automate tasks that require human judgement, or over-involves themselves in tasks AI could handle independently.

Advanced Beginner
- Tries working alongside AI as a thinking partner on a task rather than simply asking it to produce an output.
- Recognises after the fact that a different mode of engagement would have produced a better result.
- Asks questions about how others are using AI for different task types to inform their own mode choices.

Competent
- Chooses automation for well-defined, repeatable tasks and augmentation for tasks requiring judgement, applying these distinctions consistently (see the sketch below).
- Recognises mid-task when a mode switch is needed and makes the adjustment.
- Articulates to colleagues the rationale for choosing a specific mode for a given task type.

Proficient
- Plans the mode of AI interaction for each stage of a complex project before beginning, including where human-AI handoffs will occur.
- Identifies when an agency-mode setup is more efficient than repeated manual augmentation, and makes the transition deliberately.
- Reviews AI interaction modes used across a project and identifies where different mode choices would have improved outcomes.

Expert
- Develops guidance frameworks that help teams consistently select appropriate AI interaction modes for common task categories.
- Audits team or workflow-level AI use to identify patterns of mode misalignment and designs targeted interventions.
- Contributes to organisational AI strategy by articulating how the balance between automation, augmentation, and agency should evolve as AI capability matures.
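One possible way to express the automation, augmentation, and agency distinction as explicit decision rules. The inputs and thresholds below are assumptions made for this sketch, not a definitive rubric:

```python
# Illustrative decision rules for choosing an interaction mode; the inputs
# and the order of the checks are assumptions for this sketch only.
def choose_mode(repeatable: bool, needs_judgement: bool, runs_unattended: bool) -> str:
    if runs_unattended and repeatable and not needs_judgement:
        return "agency: configure once and let AI act within defined boundaries"
    if repeatable and not needs_judgement:
        return "automation: delegate the whole task, then review the output"
    return "augmentation: work alongside AI, keeping judgement with the human"

print(choose_mode(repeatable=True, needs_judgement=True, runs_unattended=False))
```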
Goal Definition

Novice
- Submits open-ended or ambiguous requests to AI without first clarifying what a successful output would look like.
- Accepts a misaligned AI output without recognising that the root cause was an unclear initial goal.
- Does not distinguish between what they want AI to produce and what they actually need to achieve their work objective.

Advanced Beginner
- Adds basic context to AI requests after experiencing poor results from vague inputs.
- Identifies the primary outcome they want but does not consistently specify format, scope, or quality criteria.
- Reviews an AI output against their intended goal and identifies discrepancies, even if unable to immediately correct them.

Competent
- Writes out the intended outcome, key constraints, and success criteria before initiating an AI task on any work of moderate complexity (see the sketch below).
- Refines an initial goal definition when early AI outputs reveal that the original framing was incomplete or ambiguous.
- Uses a defined goal to assess whether an AI output meets requirements or needs further iteration.

Proficient
- Structures complex goals into primary outcomes and subsidiary requirements before beginning a multi-stage AI workflow.
- Monitors AI outputs across a task sequence to detect when responses are drifting from the original goal and intervenes with clarifying input.
- Advises colleagues on how to sharpen goal definitions that are producing poor AI outputs.

Expert
- Develops goal-setting templates or pre-task frameworks that teams use to structure AI interactions across common work scenarios.
- Identifies patterns of goal ambiguity contributing to poor AI outcomes across teams and designs targeted capability interventions.
- Incorporates goal clarity standards into AI use guidelines, onboarding, or quality assurance processes used across the organisation.
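A sketch of writing down the intended outcome, constraints, and success criteria before any prompt is sent, as the Competent level describes. The structure and the example content are illustrative assumptions:

```python
# A hypothetical pre-task goal definition, written down before any prompt is
# sent and reused later as the benchmark for evaluating the output.
from dataclasses import dataclass, field

@dataclass
class GoalDefinition:
    intended_outcome: str
    constraints: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)

goal = GoalDefinition(
    intended_outcome="A one-page briefing for the board on Q3 supplier risk",
    constraints=["no confidential supplier names", "maximum 400 words"],
    success_criteria=["covers the top three risks", "each risk has a mitigation"],
)

# The same definition later serves as the checklist for judging the output.
for criterion in goal.success_criteria:
    print(f"[ ] {criterion}")
```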
Oversight Calibration

Novice
- Submits an AI output to a downstream process without reviewing it for accuracy, relevance, or completeness.
- Applies the same level of review to a low-stakes AI-generated summary as to a high-stakes AI-drafted recommendation.
- Does not recognise that some AI tasks require a human decision point before outputs are acted upon.

Advanced Beginner
- Reviews AI outputs more carefully on tasks where AI has previously produced errors, without applying a systematic approach.
- Recognises that some outputs warrant closer human review than others.
- Starts to build in a basic review step for AI outputs before sharing or using them.

Competent
- Assesses task risk and output stakes before deciding the depth and type of human review required (see the sketch below).
- Establishes explicit review points within AI-assisted workflows, specifying what will be checked and by whom.
- Adjusts oversight intensity when task parameters change, such as when an AI task involves sensitive data or public-facing outputs.

Proficient
- Builds human checkpoint structures into multi-stage AI workflows, defining the purpose and scope of each review point.
- Identifies when AI performance on a task has deteriorated and escalates the level of oversight accordingly.
- Advises colleagues on oversight calibration decisions, distinguishing between tasks where light-touch review is appropriate and those requiring substantive human judgement.

Expert
- Develops oversight frameworks that specify human review requirements across different categories of AI-assisted tasks, scaled to risk and stakes.
- Identifies oversight failures and redesigns workflows to prevent recurrence.
- Contributes to organisational AI governance by defining standards for human-in-the-loop requirements across high-stakes functions.
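A sketch of risk-scaled review tiers of the kind the Competent level describes. The tier names, descriptions, and escalation rules are assumptions for the sketch, not framework requirements:

```python
# Illustrative review tiers scaled to risk; the categories and rules below
# are examples, not prescribed by the framework.
OVERSIGHT_TIERS = {
    "low": "light-touch: author skims the output before use",
    "medium": "structured: author checks facts, figures, and tone against the brief",
    "high": "substantive: a second reviewer applies domain judgement and signs off",
}

def required_review(stakes: str, sensitive_data: bool, public_facing: bool) -> str:
    """Pick a review tier from task risk; sensitive or public-facing tasks
    are escalated regardless of their nominal stakes."""
    if sensitive_data or public_facing or stakes == "high":
        return OVERSIGHT_TIERS["high"]
    return OVERSIGHT_TIERS["medium"] if stakes == "medium" else OVERSIGHT_TIERS["low"]

print(required_review(stakes="low", sensitive_data=False, public_facing=True))
```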
Description
Communicating with AI clearly and precisely so that outputs match the intended goal.

Prompt Construction

Novice
- Submits a single-sentence, vague request to AI and accepts the output without attempting to refine the instruction.
- Uses the same prompt structure regardless of task type, complexity, or desired output format.
- Does not recognise that a poor AI output may be the result of an inadequately constructed prompt.

Advanced Beginner
- Adds basic detail to prompts after receiving outputs that missed the mark.
- Identifies that more specific prompts tend to produce more useful outputs, but applies this observation inconsistently.
- Rewrites a prompt after receiving a poor output, making changes based on intuition rather than a deliberate diagnostic.

Competent
- Constructs prompts that include the task objective, relevant context, output format, and any key constraints (see the sketch below).
- Diagnoses a poor AI output by identifying which element of the prompt was insufficient, and revises accordingly.
- Adapts prompt structure based on task type, applying more detailed construction for complex or high-stakes tasks.

Proficient
- Designs multi-part prompts that guide AI through complex tasks in stages, managing the risk of drift or misinterpretation.
- Identifies in advance where an AI system is likely to default to generic outputs and pre-empts this through specific instructional framing.
- Reviews prompt construction used by colleagues and provides targeted feedback on how structure or specificity could be improved.

Expert
- Develops prompt templates and construction guides that are adopted across teams for common task categories.
- Identifies patterns of prompt failure across a team or function and designs targeted interventions.
- Contributes to organisational AI use standards by defining prompt quality criteria embedded into workflow documentation.
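The Competent-level indicator above names four prompt elements: task objective, relevant context, output format, and key constraints. A minimal sketch of a template built from those elements; the field layout and the worked example are illustrative, not a prescribed schema:

```python
# A minimal prompt builder covering the four elements named above; the field
# names and the example content are assumptions for this sketch.
def build_prompt(objective: str, context: str, output_format: str,
                 constraints: list[str]) -> str:
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Objective: {objective}\n\n"
        f"Context: {context}\n\n"
        f"Output format: {output_format}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

print(build_prompt(
    objective="Draft an internal FAQ about the new expense policy",
    context="Audience is all staff; the changes take effect next quarter",
    output_format="Five questions and answers, plain language, under 300 words",
    constraints=["do not speculate beyond the policy text", "neutral tone"],
))
```

A fixed template like this also supports the Competent-level diagnostic habit: when an output misses, each labelled element can be checked in turn for the gap.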
Context Provision

Novice
- Asks AI a question or assigns a task without providing any background on who they are, what they are trying to achieve, or what constraints apply.
- Accepts a generic AI output without recognising that richer context would have produced a more relevant result.
- Assumes AI has access to organisational or role-specific information that has not been provided in the prompt.

Advanced Beginner
- Adds role or organisational context to a follow-up prompt after the initial AI output is too generic to be useful.
- Identifies that AI performed better on a task when background information was provided and attempts to replicate this.
- Provides some context in prompts but omits key details such as audience, purpose, or constraints.

Competent
- Opens an AI interaction by establishing relevant context — including role, task purpose, audience, and key constraints — before issuing the primary instruction (see the sketch below).
- Identifies the specific types of context that different task categories require and applies these consistently.
- Adjusts the level of context provided based on task complexity.

Proficient
- Structures the opening of a complex AI interaction to establish a rich contextual foundation that orients all subsequent exchanges.
- Monitors AI outputs across a multi-turn interaction for signs of contextual drift and reintroduces key context when needed.
- Coaches colleagues on the types of context that have the greatest impact on output quality for different categories of work task.

Expert
- Develops context-setting frameworks or templates that teams use to prepare AI interactions for common task types.
- Identifies patterns of context omission contributing to poor AI output quality at team or function level and leads targeted remediation.
- Embeds context provision standards into AI workflow design, onboarding processes, and organisational AI use guidelines.
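A sketch of what a context-setting opening might look like, covering role, purpose, audience, and constraints before the primary instruction. Every specific detail below is invented for illustration:

```python
# A hypothetical context-setting opening for a multi-turn interaction;
# all specifics are invented for this sketch.
CONTEXT_BLOCK = """\
Role: I am a procurement analyst at a mid-sized manufacturer.
Purpose: I am preparing a supplier comparison for an internal decision meeting.
Audience: Operations managers with no procurement background.
Constraints: Use only the figures I provide; flag gaps rather than estimating.
"""

# The context is established once, then the primary instruction follows.
first_message = CONTEXT_BLOCK + "\nFirst task: summarise the attached supplier notes."
print(first_message)
```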
Output Specification

Novice
- Submits a task request to AI without specifying the desired format, length, or tone of the output.
- Manually reformats an AI output without recognising that better specification upfront would have reduced this effort.
- Treats the AI's default output format as the only available option rather than as a starting point that can be directed.

Advanced Beginner
- Adds a format or length instruction to a prompt after receiving an output that was poorly structured.
- Specifies the most visible output parameters but does not define tone, audience register, or quality standards.
- Recognises after receiving an output that additional specification would have improved it.

Competent
- Includes explicit output specification covering format, length, tone, and audience register in prompts for all tasks of moderate complexity or above (see the sketch below).
- Adjusts output specification based on the downstream use of the AI output.
- Uses the defined output specification as a benchmark against which to evaluate the AI output.

Proficient
- Defines output specifications for each stage of a multi-part AI workflow, ensuring consistency of format, tone, and quality across all outputs.
- Identifies mid-interaction when AI output is deviating from specification and reissues or refines the specification.
- Advises colleagues on how to translate ambiguous quality requirements into precise, actionable output specifications.

Expert
- Develops output specification frameworks or templates that standardise format, tone, and quality requirements for AI-assisted tasks across the organisation.
- Identifies patterns of specification failure contributing to poor or inconsistent AI outputs at team level.
- Embeds output specification standards into AI use guidelines, workflow documentation, and quality assurance processes.
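A sketch of an explicit output specification covering format, length, tone, and audience register, defined before the task and reused afterwards as the evaluation benchmark. The values shown are illustrative assumptions:

```python
# An illustrative output specification; the fields and values are examples,
# not a prescribed schema.
SPEC = {
    "format": "numbered list",
    "length": "maximum 200 words",
    "tone": "formal, suitable for external clients",
    "audience": "non-technical decision-makers",
}

def spec_text(spec: dict[str, str]) -> str:
    lines = "\n".join(f"- {name}: {value}" for name, value in spec.items())
    return "Produce the output as follows:\n" + lines

print(spec_text(SPEC))
```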
Iterative Refinement

Novice
- Uses the first AI output without any follow-up interaction, even when it only partially meets the task requirement.
- Abandons an AI interaction after receiving a poor initial output rather than attempting to refine or redirect.
- Does not recognise that follow-up prompts can substantially improve the quality of an initial AI output.

Advanced Beginner
- Sends a follow-up prompt to adjust an AI output after recognising it does not fully meet requirements.
- Makes incremental, reactive changes rather than diagnosing the underlying cause of the shortfall.
- Engages in two or three follow-up exchanges before accepting an output, without a deliberate plan for directing the refinement process.

Competent
- Reviews an initial AI output against the defined goal and identifies specific gaps before issuing a follow-up instruction.
- Issues follow-up prompts that address one clearly identified deficiency at a time (see the sketch below).
- Maintains a consistent goal orientation across a multi-turn refinement process.

Proficient
- Plans a refinement strategy at the outset of a complex AI task, anticipating where iteration will be needed.
- Recognises when iterative refinement has reached diminishing returns and makes a deliberate decision to reframe the task or accept the output.
- Models effective iterative refinement for colleagues, demonstrating how to diagnose output gaps and translate them into targeted follow-up instructions.

Expert
- Develops guidance and worked examples that demonstrate effective iterative refinement across common task types.
- Identifies patterns of single-exchange AI use at team level that are limiting output quality and designs interventions to shift practice.
- Contributes to organisational AI use standards by defining iterative refinement as an expected practice.
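A schematic illustration of goal-anchored, one-gap-at-a-time refinement. The goal and the identified gaps are invented for the example; in practice they come from human review of the actual output:

```python
# Gaps are identified against the original goal, then addressed one per
# follow-up prompt rather than all at once.
goal = "A one-page risk briefing with a mitigation for every risk"
identified_gaps = [
    "risk 2 has no mitigation",
    "the draft runs to two pages",
]

for gap in identified_gaps:
    # One clearly identified deficiency per follow-up, with the goal restated
    # to keep the multi-turn exchange anchored.
    print(f"Goal reminder: {goal}. One revision only: {gap}.")
```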
Examples and Constraints

Novice
- Describes a desired output in abstract terms without providing a reference example or specifying what the output should not include.
- Receives an AI output that deviates from requirements in a predictable way without recognising that an explicit constraint would have prevented it.
- Does not consider using existing documents, prior outputs, or worked examples as reference material when instructing AI.

Advanced Beginner
- Adds a reference example to a follow-up prompt after receiving an AI output that was structured or toned incorrectly.
- Introduces a basic constraint after an initial output exceeded scope or included unwanted content.
- Recognises that providing an example improved a particular AI output and attempts to apply this approach to similar tasks.

Competent
- Includes at least one reference example or worked model in prompts for tasks where output format, tone, or structure is critical (see the sketch below).
- Defines explicit constraints covering scope, excluded content, format limits, and audience boundaries in prompts for tasks of moderate complexity or above.
- Reviews AI outputs against the examples and constraints provided and identifies where additional precision would have produced a better result.

Proficient
- Sequences multiple examples across a multi-turn interaction to progressively narrow AI output towards a specific standard.
- Designs constraint sets that manage competing requirements within a single AI task.
- Advises colleagues on the selection and use of examples and constraints as precision tools for improving AI output quality.

Expert
- Builds and maintains example libraries and constraint templates for common AI task types.
- Identifies patterns of output failure attributable to missing or poorly chosen examples and constraints, and develops targeted remediation resources.
- Embeds example and constraint setting into organisational prompt standards, workflow documentation, and AI capability development programmes.
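A sketch pairing a reference example with explicit constraints, as the Competent level describes. The notice scenario, example text, and constraint wording are invented for illustration:

```python
# A hypothetical prompt fragment combining a worked example with explicit
# constraints; all content is invented for this sketch.
reference_example = (
    "Subject: Maintenance window Sat 02:00-04:00 UTC. Impact: none expected."
)
constraints = [
    "match the structure and brevity of the example exactly",
    "do not mention internal system names",
    "exclude apologies and marketing language",
]

prompt = (
    "Write a customer notice for Sunday's database upgrade.\n"
    f"Follow this example:\n{reference_example}\n"
    "Constraints:\n" + "\n".join(f"- {c}" for c in constraints)
)
print(prompt)
```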
Interaction Mode Adaptation

Novice
- Frames every AI interaction as a simple instruction-and-response exchange, regardless of whether the task calls for collaborative dialogue or structured configuration.
- Does not adjust the level of detail, structure, or tone of instructions when switching between task types.
- Treats all AI interactions as equivalent in terms of how instructions should be framed and delivered.

Advanced Beginner
- Notices that a more conversational approach produces better results on exploratory tasks than a single detailed prompt.
- Attempts to write more structured instructions when setting up a recurring AI task.
- Asks colleagues how they structure instructions for different types of AI tasks after noticing their approach varies across situations.

Competent
- Writes structured, comprehensive instructions for automation tasks, recognising that the AI will execute without further human input.
- Adopts a more iterative, exploratory communication style for augmentation tasks.
- Drafts clear, persistent configuration instructions for agency-mode setups, specifying triggers, boundaries, and expected output standards.

Proficient
- Identifies mid-task when the mode of AI interaction has shifted and adjusts instructional style accordingly.
- Designs communication strategies for complex, multi-mode AI interactions that specify a different instructional approach for each phase.
- Reviews AI interaction transcripts with colleagues and identifies where a shift in communication style would have improved output quality or efficiency.

Expert
- Develops instructional style guides for each AI interaction mode, providing teams with concrete communication patterns.
- Identifies where mismatched communication styles are reducing AI output quality or efficiency at team level.
- Contributes to organisational AI use standards by defining interaction mode adaptation as a core communication skill.
Discernment
Evaluating the accuracy, relevance, reasoning, and completeness of what AI produces before acting on it.

Accuracy Evaluation

Novice
- Uses an AI-generated output in work deliverables without checking any of the factual claims it contains.
- Treats a well-written, fluent AI output as reliable, equating writing quality with factual accuracy.
- Does not recognise when an AI output contains internally inconsistent information or fabricated references.

Advanced Beginner
- Cross-checks specific factual claims in an AI output against known information or a readily available source when they seem uncertain.
- Identifies an obvious factual error and corrects it, without applying a broader review of the rest of the content.
- Becomes more cautious about AI accuracy after encountering a specific instance of an AI error, but does not generalise this into a consistent review practice.

Competent
- Reviews all factual claims, figures, and references in an AI output against credible sources before incorporating it into a work deliverable.
- Identifies the specific accuracy risks most relevant to a task type and focuses review effort accordingly.
- Flags AI outputs that contain unverifiable claims or suspicious references rather than using or sharing them without resolution.

Proficient
- Designs task-specific accuracy review checklists that identify the highest-risk content areas in AI outputs for a given task type (see the sketch below).
- Identifies patterns of AI inaccuracy across multiple outputs on the same task type and adjusts prompting or workflow design to reduce recurrence.
- Coaches colleagues through accuracy evaluation on task types where AI failure modes are well-established.

Expert
- Develops accuracy evaluation frameworks and review checklists that are adopted across teams for common AI-assisted task categories.
- Identifies patterns of accuracy failure at team or workflow level and leads redesign of review processes.
- Contributes to organisational AI governance by defining accuracy review requirements for high-stakes AI-assisted outputs.
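A sketch of a task-specific accuracy review checklist of the kind described at the Proficient level. The checklist items are examples of what such a list might contain, not framework content:

```python
# An illustrative accuracy checklist for AI-drafted summaries; the risk
# areas listed are assumptions for this sketch.
ACCURACY_CHECKLIST = [
    "every named figure traced to a credible source",
    "all cited references confirmed to exist",
    "dates and time periods consistent throughout",
    "no claims left that cannot be verified before publication",
]

def unresolved_items(checked: set[str]) -> list[str]:
    """Items not yet verified; the output is not fit for use until empty."""
    return [item for item in ACCURACY_CHECKLIST if item not in checked]

print(unresolved_items({"dates and time periods consistent throughout"}))
```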
Relevance and Fit Assessment

Novice
- Uses an AI output without assessing whether its tone, depth, or framing is appropriate for the intended audience.
- Accepts an AI output that addresses the general topic without checking whether it addresses the specific question or need.
- Does not recognise when an AI output is technically responsive but substantively misaligned with the actual purpose of the task.

Advanced Beginner
- Reviews an AI output for obvious tone or register mismatches before using it in a specific context.
- Identifies when an AI output addresses the wrong scope relative to what was needed.
- Asks a colleague to review an AI output for contextual appropriateness when uncertain.

Competent
- Assesses each AI output against defined criteria — including audience, purpose, tone, depth, and scope — before deciding whether it is fit for use.
- Identifies relevance gaps between an AI output and the actual task need, even when the output is technically accurate.
- Adjusts prompts or follow-up instructions based on a relevance assessment, targeting the specific dimension of fit that requires improvement.

Proficient
- Evaluates AI outputs for alignment with the unstated assumptions, priorities, and expectations of the intended audience.
- Identifies when an AI output reflects a generic or default framing that does not account for the specific organisational or stakeholder context.
- Provides structured feedback to colleagues on relevance and fit issues in AI outputs.

Expert
- Develops relevance and fit evaluation criteria for common AI-assisted output types.
- Identifies patterns of contextual misalignment in AI outputs across a team or function and leads design of workflow or prompting improvements.
- Embeds relevance and fit assessment into AI output quality standards, peer review processes, and organisational AI use guidelines.
Reasoning Scrutiny

Novice
- Uses an AI-generated analysis or recommendation without assessing whether the reasoning that supports it is sound.
- Accepts an AI conclusion as valid because it aligns with prior expectations, without checking whether the argument is logically supported.
- Does not distinguish between an AI output that states a conclusion and one that provides a well-reasoned justification for it.

Advanced Beginner
- Questions an AI output's conclusion when it seems surprising or contradicts existing knowledge, and asks AI to explain its reasoning.
- Identifies an obvious logical gap or unsupported claim when it is pointed out by a colleague, but does not proactively seek these out.
- Seeks a second opinion on AI-generated analysis before acting on it in situations where the stakes of being wrong are high.

Competent
- Reviews the reasoning structure of AI-generated analyses by tracing the argument from evidence to conclusion and identifying any unsupported steps.
- Asks AI to explain its reasoning or justify a specific conclusion when the basis for an output is not transparent.
- Identifies when an AI output presents a confident conclusion without adequate supporting reasoning and flags this before the output is used.

Proficient
- Deconstructs the argument structure of a complex AI output, identifying each logical step and assessing whether the transitions between them are valid.
- Identifies unacknowledged assumptions embedded in AI reasoning and assesses whether these assumptions hold in the specific context.
- Coaches colleagues in reasoning scrutiny, demonstrating how to distinguish between AI outputs that support conclusions well and those that do not.

Expert
- Develops reasoning evaluation frameworks that teams apply when reviewing AI-generated analyses, recommendations, or decision-support outputs.
- Identifies patterns of poor reasoning quality in AI outputs being used across a team or function and leads design of review processes.
- Contributes to organisational AI governance by defining reasoning quality standards for high-stakes AI-assisted outputs.
Bias and Limitation Recognition

Novice
- Uses AI-generated content that reflects a narrow or culturally specific perspective without recognising or accounting for this.
- Does not consider whether an AI output may have omitted relevant viewpoints, populations, or data.
- Assumes that because an AI output sounds balanced or comprehensive, it is free from bias or significant omission.

Advanced Beginner
- Identifies an obvious bias or imbalance in an AI output when the topic is one where bias concerns are well-known.
- Asks AI to consider alternative perspectives or reframe an output after recognising a limited or skewed viewpoint.
- Seeks guidance from a colleague or policy resource when uncertain whether an AI output contains problematic bias.

Competent
- Reviews AI outputs for signs of representational bias, framing asymmetry, or systematic omission before using them in communications or decision-making.
- Identifies the specific types of bias most relevant to a task type or subject matter and applies targeted scrutiny to those risk areas.
- Adjusts prompts or requests additional perspectives from AI when an initial output reflects a limited or potentially biased framing.

Proficient
- Identifies implicit bias in AI outputs that is not immediately obvious — such as subtle framing effects or assumptions embedded in the language used.
- Evaluates AI outputs for knowledge limitation risks and supplements AI-generated content with additional sources where gaps are identified.
- Advises colleagues on bias recognition for specific task types or subject areas.

Expert
- Develops bias evaluation criteria and review processes for common AI-assisted output types.
- Identifies patterns of bias-related risk in AI outputs across a team or function and leads the design of targeted remediation.
- Contributes to organisational AI governance by defining bias and limitation review requirements for sensitive or high-stakes outputs.
Missing Context Identification

Novice
- Uses an AI output without considering whether it has omitted information that would be important in the specific context.
- Does not ask follow-up questions to check whether an AI output is complete or whether relevant considerations have been left out.
- Assumes that a comprehensive-looking AI output covers the topic fully, without verifying whether gaps exist.

Advanced Beginner
- Identifies a significant omission in an AI output after the gap causes a problem downstream.
- Asks AI a follow-up question to fill an obvious gap, without systematically checking for other omissions.
- Recognises that an AI output addressed the topic generally but missed specific constraints or conditions relevant to their context.

Competent
- Reviews AI outputs against the full set of task requirements and contextual constraints to identify gaps.
- Asks targeted follow-up questions to probe areas of an AI output that appear underdeveloped, incomplete, or insufficiently contextualised.
- Identifies when an AI output reflects a generic treatment of a topic and requests a more contextually specific response before using it.

Proficient
- Develops task-specific completeness checklists that identify the contextual dimensions most likely to be missing from AI outputs.
- Identifies when an AI output has addressed the explicit question but missed the implicit need, and addresses this through targeted follow-up.
- Coaches colleagues in missing context identification, demonstrating how to distinguish between an output that looks complete and one that is genuinely fit for purpose.

Expert
- Develops completeness evaluation frameworks for common AI-assisted output types.
- Identifies patterns of contextual omission in AI outputs used across a team or function and leads redesign of prompting, review, or workflow practices.
- Contributes to organisational AI governance by defining completeness standards for high-stakes AI-assisted outputs.
Appropriate Use Decisions

Novice
- Shares or acts on an AI output without assessing whether it has been adequately reviewed relative to the stakes of the task or sensitivity of the audience.
- Discards an AI output that contains useful content because surface quality is low, without recognising that it could be refined or partially used.
- Does not consider the downstream consequences of using an AI output without adequate review.

Advanced Beginner
- Applies greater scrutiny to AI outputs before sharing them with senior stakeholders or external audiences.
- Hesitates to use an AI output in a sensitive context and seeks a second opinion before proceeding.
- Recognises after the fact that an AI output used in a particular context required more careful review than it received.

Competent
- Assesses the risk profile of a task — including stakes, audience sensitivity, and potential for harm — before deciding the level of review an AI output requires.
- Makes an explicit decision to use, revise, supplement, or discard an AI output based on a review assessment.
- Documents the basis for use decisions on high-stakes AI-assisted outputs.

Proficient
- Navigates competing pressures — such as urgency and quality risk — when making use decisions about AI outputs, applying a principled and transparent rationale.
- Identifies when an AI output is partially fit for use and makes a deliberate decision about which elements can be used, which require revision, and which should be replaced.
- Advises colleagues on appropriate use judgements in ambiguous or high-stakes situations.

Expert
- Develops appropriate use decision frameworks for common AI-assisted output types.
- Identifies patterns of inappropriate use — including over-reliance on unreviewed AI outputs in high-stakes contexts — and leads design of governance improvements.
- Contributes to organisational AI governance by defining appropriate use standards and ensuring they are applied consistently across high-risk functions.
Diligence
Using AI responsibly, with transparency, accountability, and sound handling of data, rights, and ethics.

AI Disclosure

Novice
- Submits AI-generated or AI-assisted work without indicating AI involvement, in contexts where disclosure would be expected or required.
- Does not consider whether a colleague, manager, or stakeholder would want to know that AI was involved in producing a piece of work.
- Is unaware that non-disclosure of AI involvement can carry professional, reputational, or legal consequences in certain contexts.

Advanced Beginner
- Discloses AI involvement when explicitly required by a policy, manager, or platform, but does not consider disclosure in contexts where it has not been directly mandated.
- Becomes more attentive to disclosure expectations after observing or experiencing a situation where undisclosed AI use caused a problem.
- Asks a colleague or manager whether AI disclosure is expected for a specific piece of work when uncertain.

Competent
- Assesses each AI-assisted output against relevant disclosure considerations — including organisational policy, professional standards, and stakeholder expectations — before deciding how to disclose.
- Discloses AI involvement proactively in contexts where it is material to the recipient's understanding of the work.
- Applies a consistent disclosure standard across similar work contexts rather than making ad hoc decisions without principled criteria.

Proficient
- Navigates disclosure decisions in ambiguous contexts by applying a principled framework rather than defaulting to the path of least resistance.
- Advises colleagues on disclosure decisions in complex or sensitive situations, articulating the factors that should be weighed.
- Identifies when organisational disclosure guidance is insufficient for the situations teams are encountering and raises this to prompt policy development.

Expert
- Develops AI disclosure guidelines that provide teams with clear, context-sensitive standards for when and how to disclose AI involvement.
- Identifies patterns of non-disclosure or inconsistent disclosure practice across a team or function and leads design of targeted interventions.
- Contributes to organisational AI governance by defining disclosure requirements and embedding these into relevant policies, approval processes, and professional standards.
Data Privacy and Security

Novice
- Inputs sensitive, confidential, or personally identifiable information into an AI system without considering whether this is appropriate or permitted.
- Is unaware that information shared with AI systems may be stored, processed, or used in ways that carry privacy or security implications.
- Does not consult organisational guidance or policy when uncertain about whether a particular type of data can be shared with an AI tool.

Advanced Beginner
- Avoids inputting obviously sensitive information into AI systems after receiving guidance that this is inappropriate.
- Checks whether a specific AI tool is approved for organisational use before sharing work-related information with it.
- Asks a colleague or IT contact whether a particular type of data can be shared with an AI tool when uncertain.

Competent
- Identifies the privacy and security classification of information before including it in an AI interaction and applies the appropriate handling standard.
- Anonymises, aggregates, or omits sensitive data from AI inputs when the task can be completed without sharing the sensitive detail directly.
- Raises a concern with a manager or data governance contact when an AI task appears to require sharing information that may be subject to restrictions.

Proficient
- Reviews a proposed AI workflow for data privacy and security risks before implementation.
- Designs AI-assisted workflows that achieve task objectives without requiring input of data subject to privacy or security restrictions.
- Advises colleagues on data privacy and security considerations for AI interactions involving unfamiliar data types, systems, or jurisdictional requirements.

Expert
- Develops data handling guidelines for AI use that specify what categories of information can and cannot be shared with AI systems, under what conditions, and with what controls.
- Identifies patterns of data privacy or security risk in AI use across a team or function and leads design of targeted controls, workflow changes, or policy updates.
- Contributes to organisational AI governance by aligning AI data handling standards with relevant legal, regulatory, and professional obligations.
Intellectual Property and Attribution

Novice
- Incorporates AI-generated content into work outputs without considering whether the content may reproduce third-party material or raise ownership questions.
- Does not consider whether AI-generated work is subject to any organisational policies on IP ownership.
- Presents AI-generated content as entirely original work without considering whether attribution of AI involvement is appropriate or required.

Advanced Beginner
- Checks whether organisational policy addresses the use of AI-generated content in work products before publishing or sharing externally.
- Identifies an obvious IP concern and raises it before the output is used or shared.
- Asks a manager or legal contact for guidance on attribution when uncertain whether AI involvement needs to be acknowledged.

Competent
- Reviews AI-generated content for potential IP concerns — including reproduction of third-party material and ownership implications — before incorporating it into work outputs.
- Applies organisational IP policy to AI-assisted work products, including decisions about ownership, licensing, and external publication.
- Makes deliberate attribution decisions for AI-assisted work, disclosing AI involvement where required by policy, professional standards, or the reasonable expectations of the recipient.

Proficient
- Identifies IP and attribution risks in complex AI-assisted projects and manages these proactively.
- Navigates ambiguous IP situations by applying a principled framework and seeking appropriate expert input.
- Advises colleagues on IP and attribution decisions arising from AI use in complex or unfamiliar work contexts.

Expert
- Develops IP and attribution guidelines for AI-assisted work that address ownership, licensing, external publication, and attribution disclosure.
- Identifies patterns of IP risk in AI use across a team or function and leads design of governance improvements.
- Contributes to organisational AI governance by aligning IP standards for AI use with relevant legal obligations and professional requirements.
Ethical Use

Novice
- Uses AI to complete a task that affects people without considering whether the use of AI in that context is appropriate or may cause harm.
- Does not consider whether the outputs AI produces could disadvantage, misrepresent, or cause harm to individuals or groups if acted upon.
- Assumes that ethical responsibility for AI use rests with the organisation or technology provider rather than with the individual using the tool.

Advanced Beginner
- Pauses to consider the ethical implications of using AI in a task when a colleague raises a concern or when the sensitivity of the task is obvious.
- Identifies an ethical concern about a proposed AI use and raises it with a manager or colleague before proceeding.
- Seeks guidance on the ethical appropriateness of a specific AI application when uncertain.

Competent
- Considers the potential ethical implications of an AI use decision — including who may be affected and how — before proceeding with tasks that involve people, sensitive information, or significant consequences.
- Identifies specific ethical risks relevant to a task type and addresses these before acting.
- Raises ethical concerns about proposed AI use with relevant stakeholders when personal assessment suggests the use may cause harm.

Proficient
- Identifies ethical risks in AI use that are not immediately obvious — such as aggregate harms that only emerge at scale, or downstream consequences for parties not directly involved.
- Evaluates proposed AI applications against multiple ethical dimensions simultaneously and makes a considered judgement about whether and how to proceed.
- Advises colleagues on ethical considerations arising from complex or novel AI use contexts.

Expert
- Develops ethical use frameworks for AI that provide teams with structured approaches to identifying, assessing, and responding to ethical risks.
- Identifies patterns of ethical risk in AI use across a team or function and leads design of governance and capability improvements.
- Contributes to organisational AI governance by defining ethical standards for AI use and ensuring these are reflected in policy, decision-making frameworks, training, and leadership expectations.
Accountability and Ownership

Novice
- Attributes errors or problems in an AI-assisted output to the AI system rather than taking responsibility for the quality of the work submitted.
- Uses AI involvement as a justification for distancing themselves from the content or quality of a work output when questioned about it.
- Does not consider themselves fully accountable for decisions or outputs that were significantly shaped by AI-generated content.

Advanced Beginner
- Accepts responsibility for an AI-assisted output when directly asked, even if uncertain about all aspects of the content AI contributed.
- Reviews AI-assisted work more carefully before submission after recognising that they will be held accountable for its quality.
- Acknowledges AI involvement in a piece of work proactively when questioned, without using this as a way to avoid personal accountability.

Competent
- Reviews and takes personal ownership of all AI-assisted outputs before submission, applying the same quality and accountability standards as for entirely human-authored work.
- Is prepared to explain and justify the content, reasoning, and conclusions of AI-assisted work when questioned.
- Does not present AI involvement as a mitigating factor when an AI-assisted output falls short of the required standard.

Proficient
- Establishes clear ownership and accountability structures in team-based AI workflows, specifying who is responsible for reviewing and approving each AI-assisted output before use.
- Maintains personal accountability for AI-assisted work in high-stakes contexts where the consequences of errors are significant.
- Coaches colleagues on accountability expectations for AI-assisted work, reinforcing that AI involvement does not reduce professional responsibility.

Expert
- Develops accountability frameworks for AI-assisted work that specify ownership, review responsibilities, and escalation pathways across different types of AI use and output.
- Identifies patterns of diffuse or absent accountability in AI use across a team or function and leads design of governance structures that establish clear human ownership.
- Contributes to organisational AI governance by embedding accountability standards into AI policy, leadership expectations, and performance frameworks.
Continuous Learning and Reflection

Novice
- Completes AI-assisted tasks without pausing to consider what worked well, what did not, and what might be done differently next time.
- Repeats prompting approaches, delegation decisions, or use habits that have previously produced poor results.
- Does not seek out information, guidance, or feedback to develop AI use capability beyond their current level of practice.

Advanced Beginner
- Reflects on why a specific AI interaction produced a poor outcome after the fact, identifying one or two things that could be done differently in future.
- Seeks feedback on AI-assisted work from a colleague after receiving unexpected criticism, connecting the feedback to their AI use approach.
- Reads or engages with guidance on AI use best practice after encountering a specific challenge or gap in their current approach.

Competent
- Regularly reviews AI interactions to identify patterns — in prompting, delegation, evaluation, or disclosure practice — that are consistently producing good or poor results.
- Actively seeks feedback on AI-assisted work from colleagues and uses this feedback to refine specific aspects of AI use practice.
- Engages proactively with developments in AI capability, organisational AI guidance, and emerging best practice.

Proficient
- Periodically evaluates AI use practice across all four fluency domains, identifying specific strengths and development priorities.
- Uses structured self-assessment or peer review to surface blind spots in AI use practice that are not visible through personal reflection alone.
- Adapts AI use approach in response to changes in AI capability, organisational context, or professional standards.

Expert
- Designs team or organisational mechanisms — such as structured retrospectives, peer review processes, or AI use learning communities — that enable ongoing collective reflection.
- Identifies where habitual AI use patterns across a team or function are limiting performance or creating unrecognised risk, and leads targeted interventions.
- Contributes to organisational AI learning culture by modelling continuous reflection and creating psychological safety for others to acknowledge and learn from AI use mistakes.