
AI Governance Is Simpler Than You Think

  • Mar 6
  • 23 min read

Updated: Mar 7

Your AI footprint is growing complex. Give it structure before someone gets hurt.


Your employees are already using AI, and your IT or Security team has no idea. The biggest threat to your enterprise isn't the complexity of AI technology; it's the fact that no one in your organization actually owns the risk. But fixing it is simpler than you think.


The organizations that manage this effectively aren't necessarily the largest or most well-funded. They are the ones that began by asking a few simple questions, documented their findings, and assigned responsibility for each answer. That's the essence of it.


This guide provides everything you need to establish a functional governance program. The five steps apply whether your organization has 50 employees or 50,000. The difference with size is the depth you explore, not the actions you take.


Reading time: 25 minutes. Time to build the foundation: 90 minutes. Ongoing effort: 2 to 4 hours per month.



WHAT THIS GUIDE COVERS

Step 1  |  Find out what AI your organization is actually running

Step 2  |  Score each system by the risk it carries, not by how it works

Step 3  |  Build an AI Risk Register from scratch, with a working template

Step 4  |  Set up your governance structure, including the oversight committee

Step 5  |  Deal with the AI tools your staff are already using without approval

  +  A 90-day plan to put it all in place


Before You Start: The Four Essential Questions for Every AI Program

Every framework, standard, and audit checklist in AI governance is built on top of four basic questions. If you can answer all four for each AI system you run, you are governing it. If you cannot, you are not.


Question 1: Are we doing the right things?

What You Are Really Asking

Is this AI system aligned with our values and strategy? Does it treat the people it affects with respect?


Question 2: Are we doing them the right way?

What You Are Really Asking

Are risk, fairness, and security being actively managed, not just assumed?

Question 3: Are we getting the results we expected?

What You Are Really Asking

Is the system performing as intended? Do we actually know when it is not?

Question 4: Can we be accountable when something goes wrong?

What You Are Really Asking

Do we have the documentation and the ownership structure to show we acted responsibly?


Try this now. Pick the most important AI system your organization uses. Try to answer all four questions out loud. If you hesitate on any of them, that is your first governance gap. This guide shows you how to close it.


STEP 01   Find Out What AI You Are Running

You cannot govern what you have not counted. This is consistently the most underestimated step in the whole process.


When organizations run a proper inventory for the first time, they almost always find 40 to 60 percent more AI systems than IT initially reported. The gap is rarely in the systems that went through formal procurement. It is in the tools that teams adopted on their own. The writing assistant. The meeting summarizer. The AI-powered contract reviewer that legal licensed without telling IT. Those informal adoptions are called Shadow AI, and we cover them in Step 5.


How you run the inventory depends on how large your organization is. Sending a message to all staff works well for a 50-person company. In an organization with 5,000 people it generates noise and weeks of inconsistent data to sort through. Larger organizations need structured sources, not a broadcast.


The Right Approach for Your Organization Size

All the paths listed below lead to the same destination: a comprehensive list of AI systems, prepared for evaluation in Step 2.


Under 200 people

Where to Look

Software spend review. Cloud environment scan. Known vendor contracts.

Who to Ask

A short message to all staff: 'What AI tools do you use in your day-to-day work?' This works at this scale.

What to Avoid

Multi-question surveys or formal questionnaires.

200 to 2,000 people

Where to Look

IT software inventory. Procurement and finance spend data. Cloud platform audit across AWS, Azure, and Google Cloud. Vendor contracts.

Who to Ask

A structured 30-minute interview with each function head: HR, Finance, Legal, Sales, Marketing, Operations, Customer Service, Product, Engineering.

What to Avoid

Sending the question to all staff. Too broad. Too much noise.

Over 2,000 people

Where to Look

Enterprise IT asset management system. Procurement system spend analysis. Cloud governance tools using tag-based workload discovery. Security and web proxy logs flagging AI-related domains. The legal team's vendor contract register.

Who to Ask

Targeted conversations with VPs and Directors by business unit. One question only: 'What AI systems or tools does your team rely on to make or inform decisions?' Do not ask general staff.

What to Avoid

All-staff surveys. All-hands requests. Any approach that requires self-reporting across thousands of people.


Six Sources to Identify 90 Percent of Your AI Footprint

These six sources are effective for organizations of any size. They collectively reveal most of the information you need, without requiring a single survey.

 

  1. Procurement and finance expenditure data. Collect all software subscriptions and contracts from the past two years. Highlight vendors with product descriptions mentioning artificial intelligence, machine learning, automation, prediction, or scoring to identify overlooked AI tools, like an HR platform with a screening tool, a CRM with lead scoring, or a financial tool with forecasting.

  2. Your IT asset management system. It lists every application registered with IT. Cross-reference that list against known AI vendors. If your IT team struggles to explain what a system does in plain terms, it likely has an AI component worth investigating.

  3. Cloud platform audit. AWS, Azure, and Google Cloud provide service-level reporting. Focus on services like Amazon SageMaker, Azure Machine Learning, Google Vertex AI, serverless functions for inference, and managed AI API connections to platforms like OpenAI, Anthropic, or Google AI. This helps identify developments by engineering that procurement missed.

  4. Security and web proxy logs. Your security tools and web proxy already show which AI-related websites and services your staff are using. Look for traffic to tools like ChatGPT, Claude, Gemini, Microsoft Copilot, Midjourney, Perplexity, Grammarly, and Jasper. The logs tell you what is in use. Your follow-up conversations tell you what it is being used for. Important caveat: this method only catches browser-based tools. AI embedded inside your existing enterprise software will not appear here. That is what source 5 is for.

  5. Platform administrator interviews. A 30-minute conversation with the administrator of each major enterprise platform can reveal AI features that proxy logs miss. Ask which features in the platform use scoring, ranking, prediction, or automated routing. This surfaces non-generative AI such as CRM lead scoring, HR screening algorithms, fraud detection models, demand forecasting, and document classification that operate quietly within your existing tools.

  6. Targeted interviews with function heads. Conduct a 30-minute conversation with each business function head. Ask one question: 'What AI systems or tools does your team use to make or inform decisions?' Function heads know what their teams rely on but may not realize those tools are AI-powered. Your role in the interview is to help them recognize this. Inquire about hiring, customer workflows, reporting, and content production.


FOR LARGER ORGANIZATIONS: USE YOUR LOGS, NOT A SURVEY
  • Your proxy logs and security tools already have the data on Shadow AI adoption. They are just not being read for this purpose.

  • Ask your security or IT team to pull a 90-day report of the top AI-related domains your staff are accessing. That report, combined with your procurement data and function-head interviews, gives you a more accurate picture than any survey, without the noise or the weeks of data cleaning.

  • Save direct employee outreach for targeted follow-up. Once you know a team is using a particular tool, a five-minute conversation with the team lead tells you everything you need. A blanket survey of 10,000 people asking what AI tools they use does not.


WHAT TO RECORD FOR EACH SYSTEM
  • For every AI tool or system you find, capture eight things: (1) Name and vendor. (2) What it does. (3) What decisions it informs or makes. (4) What data it processes, including whether it touches customer, employee, or financial data. (5) Which business function benefits from it. (6) One named owner, not a team. (7) Whether it was built internally or purchased. (8) Governance status: Assessed or Not Yet Assessed.


  • A shared spreadsheet is a perfectly good starting point. The discipline of completing the exercise matters far more than the format you use. In large organizations, assign one coordinator per business unit to complete their section rather than centralizing all data collection.
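The eight fields above fit naturally in a shared spreadsheet. As a minimal sketch (the field names and example system are illustrative, not a prescribed schema), the inventory can be captured as a CSV that any spreadsheet tool opens:

```python
import csv

# Illustrative column set matching the eight fields above; names are our own.
FIELDS = [
    "name_and_vendor", "what_it_does", "decisions_informed",
    "data_processed", "business_function", "owner",
    "built_or_bought", "governance_status",
]

def write_inventory(path, systems):
    """Write the AI inventory to a CSV any spreadsheet tool can open."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(systems)

# Hypothetical example entry for a bought meeting summarizer.
example = {
    "name_and_vendor": "MeetNotes (Acme Inc.)",
    "what_it_does": "Summarizes meetings",
    "decisions_informed": "None directly",
    "data_processed": "Internal meeting audio",
    "business_function": "All functions",
    "owner": "Jane Doe",
    "built_or_bought": "Bought",
    "governance_status": "Not Yet Assessed",
}
write_inventory("ai_inventory.csv", [example])
```

One coordinator per business unit can maintain their own file; merging them is then a trivial concatenation.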


STEP 02   Score Each System by the Risk It Carries

AI systems do not all pose the same level of risk. A meeting summarizer and a tool for making credit decisions present different challenges. If you treat them the same, you might over-regulate minor issues or under-regulate significant ones.


The key question for your risk score isn't how complex the technology is. It's this: what is the realistic worst-case outcome if this system produces wrong results at scale?



The Risk Scoring Matrix

Score each system on two dimensions, then multiply for your risk score.


Dimension 1: Impact if the system fails

SCORE

What it means

1

Trivial. Easily fixed. No individual harmed.

2

Minor inconvenience. Internal only.

3

Moderate. A customer or employee is affected, but the outcome is recoverable.

4

Significant. Legal, financial, or reputational exposure.

5

Severe. Individual rights, safety, or large-scale harm.


Dimension 2: Likelihood of failure

SCORE

What it means

1

Very unlikely. Extensively tested. Stable environment.

2

Unlikely. Well tested. Low complexity.

3

Possible. Some edge cases exist. Moderate complexity.

4

Likely. Limited testing. Inputs change frequently.

5

Near certain. Minimal testing. High complexity.

Risk Score equals Impact multiplied by Likelihood. 1 to 5: Minimal risk. 6 to 10: Standard. 11 to 18: High. 19 to 25: Critical.
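The scoring rule above is simple enough to capture in a few lines. This is a minimal sketch of the Impact times Likelihood tiering, with the band boundaries taken directly from the matrix (the function name is our own):

```python
def risk_tier(impact: int, likelihood: int) -> str:
    """Return the governance tier for an Impact x Likelihood score."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be 1 to 5")
    score = impact * likelihood
    if score <= 5:
        return "Minimal"      # registry entry only
    if score <= 10:
        return "Standard"     # annual review, IT owner
    if score <= 18:
        return "High"         # quarterly fairness check, named owner
    return "Critical"         # human review, monthly monitoring
```

A meeting summarizer might score 2 x 2 and land in Minimal; a credit decision tool scoring 5 x 4 lands in Critical.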


Critical - Score 19 to 25

What It Means

This system can cause serious, hard-to-reverse harm to specific individuals at scale. Credit decisions, hiring screens, healthcare alerts, safety-critical systems.

Minimum Governance Required

Full documentation. A human reviews outputs before any action is taken. Monthly monitoring. A named executive is accountable. An incident response plan must exist before the system goes live.

High - Score 11 to 18

What It Means

Significant business impact. Customer-facing. Sensitive data or decisions with meaningful consequences for the people affected.

Minimum Governance Required

A brief system summary (1 to 2 pages). Quarterly fairness check. Named business owner. Documented escalation path.

Standard - Score 6 to 10

What It Means

Operational in nature. Primarily internal use. Outcomes are recoverable. Limited sensitive data.

Minimum Governance Required

Registry entry. Annual review. IT owner. Basic output monitoring.


Minimal - Score 1 to 5

What It Means

Assistive only. A human always makes the final decision. No sensitive data. Errors are easy to catch and fix.

Minimum Governance Required

A registry entry only. No structured governance required beyond that.

A QUICK CLASSIFICATION TEST
  • Could this system appear in a news story if it produced the wrong output?  Then classify it as at least High.

  • Could it cause measurable harm to a specific person?  Then classify it as Critical.

  • Does a human always review the output before anything is decided?  Then it is likely Standard or Minimal.

  • When you are not sure, classify up. The cost of governing a low-risk system too carefully is small. The cost of governing a high-risk system too loosely is not.


STEP 03   Build Your AI Risk Register

The Risk Register is the central document of your governance program. It combines your inventory with risk scores, assigns ownership, and tracks the action taken on each risk. If something goes wrong, it is the evidence that your organization acted responsibly.


Specialist software is not required to create it. For most organizations, a well-maintained spreadsheet is an ideal starting point. Consistent maintenance is more important than the platform used.


The Ten Columns Your Register Needs

#

Field & What to Put There

1

System name. Plain-language name. Include the vendor if it is a bought tool.

2

Risk category. Data, Model, Operational, Ethical, Legal and Regulatory, Reputational, or Supply Chain.

3

Risk description. One sentence: "This system could cause [harm] because [reason]."

4

Likelihood score. 1 to 5, using the matrix in Step 2.

5

Impact score. 1 to 5, using the matrix in Step 2.

6

Risk score and tier. Likelihood multiplied by Impact. Tier: Minimal, Standard, High, or Critical.

7

Controls already in place. What is currently preventing or detecting this risk?

8

Risk response. Mitigate, Accept, Avoid, or Transfer. If accepting, write down why.

9

Owner. One named person. First and last name. Not a team. Not a department.

10

Next review date. Critical: monthly. High: quarterly. Standard: annually.
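The ten columns translate directly into a record type. This sketch (names are our own) derives the score, tier, and next review date from the other fields, so columns 6 and 10 can never drift out of sync with columns 4 and 5. The Minimal tier's annual review cadence is our assumption; the text specifies cadences only for Critical, High, and Standard:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Review cadence in days, per column 10 above (Minimal: annual, our assumption).
REVIEW_DAYS = {"Critical": 30, "High": 90, "Standard": 365, "Minimal": 365}

@dataclass
class RegisterEntry:
    system_name: str        # col 1: plain-language name, vendor if bought
    risk_category: str      # col 2: Data, Model, Operational, Ethical, ...
    risk_description: str   # col 3: "could cause [harm] because [reason]"
    likelihood: int         # col 4: 1 to 5
    impact: int             # col 5: 1 to 5
    controls_in_place: str  # col 7
    risk_response: str      # col 8: Mitigate, Accept, Avoid, or Transfer
    owner: str              # col 9: one named person, not a team

    @property
    def score(self) -> int:           # col 6, first half
        return self.likelihood * self.impact

    @property
    def tier(self) -> str:            # col 6, second half
        s = self.score
        if s <= 5:
            return "Minimal"
        if s <= 10:
            return "Standard"
        return "High" if s <= 18 else "Critical"

    def next_review(self, created: date) -> date:  # col 10
        return created + timedelta(days=REVIEW_DAYS[self.tier])
```

Deriving tier and review date rather than typing them in keeps Rule 3 below honest: the review date follows mechanically from the risk, not from optimism.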

The Seven Risk Categories Explained

Each system in your register should be evaluated across all seven categories. Typically, most systems will present significant risks in three or four of these categories.


1. Data Risk

Covers

Quality, accuracy, privacy, and availability of the data that feeds the AI system.

Examples

Training data with historical bias. Personal data used without consent. Outdated data. Data that has been tampered with.

2. Model Risk

Covers

How accurately, fairly, and reliably the AI model itself performs.

Examples

Accuracy that quietly degrades over time. Outputs that differ between demographic groups. Confident but wrong answers. Reasoning that cannot be explained.

3. Operational Risk

Covers

Day-to-day reliability, integration failures, and how the system is used in practice.

Examples

System downtime disrupting a business process. An integration that breaks after a software update. Users who trust outputs without questioning them.

4. Ethical Risk

Covers

Unintended consequences for individuals or groups beyond the immediate technical performance.

Examples

A hiring tool that disadvantages candidates from certain postcodes. A pricing tool that discriminates by geography. A system that reinforces existing inequalities.

5. Legal and Regulatory Risk

Covers

Non-compliance with laws on automated decisions, data use, or sector-specific regulation.

Examples

An automated decision made without the human oversight some regulations require. Personal data processed without a legal basis. Failing to disclose that a decision was made by AI.

6. Reputational Risk

Covers

Damage to trust and brand from how AI is used or perceived.

Examples

An AI-generated response published externally that turns out to be wrong. A customer who discovers AI made a consequential mistake about them.

7. Supply Chain Risk

Covers

Risks from vendors, pre-built models, and external services your AI depends on.

Examples

A vendor that updates their model without telling you. A pre-trained model containing problematic training data. Excessive dependency on a single external API.

THREE RULES THAT MAKE THE REGISTER USEFUL
  • Rule 1: One owner per risk. A team cannot own a risk. A department cannot own a risk. A named person can.

  • Rule 2: Every accepted risk needs a written reason. 'We are accepting this risk because [reason], and we will monitor it using [method] on [schedule].' Acceptance without documentation is just inattention with a polite name.

  • Rule 3: Review dates are actual commitments. A register that is never reviewed is worse than no register at all, because it creates a false sense of security. Set calendar reminders the day you create each entry.


STEP 04   Set Up Your Governance Structure

Governing AI responsibly does not require a dedicated AI ethics team, a new executive hire, or a specialized compliance platform. What it does require is clear authority: one person who owns the program end to end, a committee that makes the hard decisions, and a process that determines what gets approved and what doesn't.


The most frequent reason governance programs fail is not due to inadequate policy. It's the lack of a single accountable person. A policy without an owner becomes everyone's excuse and no one's priority.


The Three Layers of AI Governance

Governance happens at three levels. Each level has a specific job. When organizations blur these boundaries, decisions get stuck or made by the wrong people.


Strategic Layer : Board and CEO

Their Job

Approve the AI strategy and the organization's overall risk appetite. Receive a quarterly briefing on AI risk. Hold the governance committee accountable.

What They Should Not Do

Manage individual AI systems. Make day-to-day governance calls. Approve specific systems except in the most serious cases.

Governance Layer : The Committee

Their Job

Set AI policy. Approve high-risk and critical systems before they go live. Review the overall risk picture across all systems. Make decisions that are beyond any individual's authority to make alone.

What They Should Not Do

Build or operate AI systems. Manage individual projects. Monitor day-to-day system performance.

Operational : System & Data Owners

Their Job

Maintain the inventory and Risk Register. Monitor how systems are performing. Escalate issues to the committee. Put committee decisions into practice.

What They Should Not Do

Set policy. Approve their own high-risk systems. Accept risk on behalf of the organization.

The Governance Committee

The committee is where formal authority over AI sits. It approves systems that carry high or critical risk before they go into production. It sets the policies that govern all AI activity. It makes decisions that no single person should make alone.


The committee has eight seats split into two groups. Both groups need to be present for the committee to work properly. Technical judgment without legal and ethical perspective leads to reckless decisions. Legal and compliance expertise without technical grounding leads to unworkable ones.

 

GROUP 1: BUSINESS REPRESENTATIVES (FIRST LINE OF DEFENSE)

These members provide operational accountability and business context. They are closest to how AI systems are actually used and what the consequences of getting it wrong would be.

 

Business Unit Representative

Accountable For

Operational context for each system under review. The experience of the people who use it.

Contribute to Decision

Whether governance requirements are workable in practice. What it costs the business if a system is paused or rejected.

AI and Machine Learning Engineering Lead

Accountable For

Technical soundness of every reviewed system. Engineering compliance with approved policies. Model performance standards.

Contribute to Decision

How AI models actually work and how they fail. Whether proposed controls are technically sound. What monitoring is genuinely achievable.

Data Architecture Lead

Accountable For

Data governance standards. Training data quality and traceability. Data-related risk assessment across all AI systems.

Contribute to Decision

Whether training data meets quality and fairness standards. Whether data practices create privacy or security exposure.

Chief Information Security Officer (CISO)

Accountable For

Security controls across all AI systems. Adversarial threat assessment. Deployment security, vendor and API risk. Incident response authority.

Contribute to Decision

Whether a system is secure against active threats including data poisoning, model theft, and prompt injection. What happens when a security incident occurs.

 

GROUP 2: RISK AND COMPLIANCE PARTICIPANTS (SECOND LINE OF DEFENSE)

These members provide independent oversight. They set the standards, challenge the first group's assessments, and make sure risk is being managed rather than just described.

 

Chief Risk Officer (Committee Chair)

Accountable For

The organization's overall AI risk posture. Final accountability when a risk is formally accepted. The link between the committee and the board.

Contribute to Decision

Enterprise-wide risk perspective and the authority to break a tie. The deciding voice when the two groups conflict.

Legal Counsel

Accountable For

Legal risk assessment for every reviewed system. Vendor contract review. Regulatory interpretation across all operating jurisdictions.

Contribute to Decision

How current laws apply to AI-driven decisions. Litigation risk of specific AI uses. Whether a use case is lawful in every market it operates in.

Privacy Officer

Accountable For

Privacy assessments for all systems handling personal data. Individual rights obligations. Data retention and minimization standards.

Contribute to Decision

Whether data practices comply with privacy law. Whether individuals can practically exercise their rights given how the system was designed.

Compliance Manager

Accountable For

Regulatory standards and audit readiness across all AI systems. Documentation requirements by risk tier.

Contribute to Decision

What regulators and auditors look for in practice. Whether governance records would hold up to external scrutiny.


NON-VOTING OBSERVER: INTERNAL AUDIT (THIRD LINE OF DEFENSE)

Internal Audit attends all committee meetings but does not vote. Their job is to independently assess whether the governance program is working as it should, and to report their findings to the Board's Audit Committee, not to the governance committee itself. That independence is only meaningful if it is protected, which is why they observe rather than participate in the decisions they will later audit.

 

The Three Lines of Defense Model Applied to AI

This framework, defined by the Institute of Internal Auditors, clarifies who owns each layer of risk management. Applied to AI governance, it maps directly to your committee structure.

 

First Line Own the risk

Who

Business Unit Representative. AI and ML Engineering Lead. Data Architecture Lead. CISO.

Role

Build and operate AI systems within the boundaries the committee sets. Put controls in place. Monitor day-to-day performance. Bring new systems to the committee for review before go-live.

Second Line Oversee the risk

Who

Chief Risk Officer (Chair). Legal Counsel. Privacy Officer. Compliance Manager.

Role

Set the risk framework and policies. Challenge first-line assessments independently. Make decisions when first-line authority is not sufficient. Escalate serious issues to the board.

Third Line Provide independent assurance

Who

Internal Audit (non-voting).

Role

Assess independently whether the governance program is working as designed. Report directly to the Board, not to the committee. Identify gaps between what policies say and what actually happens.

 

Who Approves What

The most important thing the committee must agree on before its first meeting is where each level of decision-making authority sits. Unclear authority is the root cause of most governance failures.


Level 1: Internal operational tools

Risk Level : Minimal or Standard

Examples : Meeting summarizers. Internal search. Code assistants. Document routing.

Who Approves : System Owner plus the AI Oversight Lead.

Committee role : Notified via the monthly dashboard only. No review required.

 

Level 2: Customer-facing or employee-affecting systems

Risk Level : Standard or High

Examples : Customer service chatbots. Resume screening. CRM lead scoring. Churn prediction.

Who Approves : Majority committee vote. At least five of eight members must agree.

Committee role : Full review before go-live. System Owner presents. Decision recorded in writing.

Level 3: Consequential decisions

Risk Level : High or Critical risk

Examples : Credit and lending decisions. Fraud scoring. Performance management tools. Clinical decision support.

Who Approves : Committee vote with at least six of eight members in favor, plus written sign-off from the Chief Risk Officer.

Committee role : Comprehensive documentation required. Human oversight mandatory. Quarterly monitoring report.

Level 4: Rights-affecting or safety-critical systems

Risk Level : Critical

Examples : Hiring or dismissal decisions without human review. Medical diagnosis. Autonomous safety systems.

Who Approves : Full committee plus sign-off from both the Chief Risk Officer and the Chief Executive.

Committee role : External technical audit required before deployment. Monthly monitoring. Any member can suspend immediately without a vote.
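To keep the requirements quoted consistently in internal documentation or tooling, the four levels can be held as a small lookup. This sketch simply transcribes the table above; the structure and function name are our own:

```python
# Approval requirements by level, transcribed from the table above.
APPROVALS = {
    1: ("System Owner plus the AI Oversight Lead",
        "Notified via the monthly dashboard only"),
    2: ("Majority committee vote, at least 5 of 8 members",
        "Full review before go-live, decision recorded in writing"),
    3: ("Committee vote of at least 6 of 8 members plus CRO sign-off",
        "Comprehensive documentation, quarterly monitoring report"),
    4: ("Full committee plus sign-off from both CRO and CEO",
        "External technical audit before deployment, monthly monitoring"),
}

def approval_for(level: int) -> str:
    """Return a one-line summary of who approves at a given level."""
    approvers, committee_role = APPROVALS[level]
    return f"Level {level}: {approvers}. Committee role: {committee_role}."
```

Keeping this in one place means a policy change to, say, Level 3 sign-off is made once rather than hunted down across documents.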

 THE EMERGENCY SUSPENSION RULE
  • Any committee member can pause any AI system immediately if they believe it is causing or is about to cause serious harm. They do not need a committee vote to do it.

  • They must notify the Chief Risk Officer within one hour and the full committee within 24 hours. The committee then meets within 48 hours to review the decision.

  • Write this rule into your committee charter before your first meeting. It sounds dramatic. It is also the rule that has prevented a number of real-world AI failures from becoming much worse.

 

Right-Sizing the Committee for Your Organization

The eight-seat structure described above is the right model for organizations of around 500 people and above. Here is how to adapt it without losing what matters.

 

Under 200 people

Structure

Three people: CEO or COO as chair, the most senior technical person, and someone with legal or compliance responsibility. External legal counsel on demand for Level 2 or above reviews.

How Often

Quarterly standing review, plus a specific session before any Level 2 or above system goes live.

200 to 2,000 people

Structure

Full eight-seat structure, with some roles held part-time or shared. One permanent Business Unit Representative, others rotate by system. Named chair, ideally the Chief Risk Officer.

How Often

Full monthly meeting, plus on-demand sessions for Level 3 and Level 4 systems.

Over 2,000 people

Structure

Full eight-seat structure with a dedicated AI Oversight Lead managing the agenda, register, actions, and board reports. Consider an independent ethics adviser for Level 4 system reviews.

How Often

Monthly full committee. Weekly operational check-in for Critical-tier systems. Quarterly board reporting.

 

One rule that applies at every size: the person who builds or champions an AI system cannot be the only person who approves it. Even in the smallest organization, a second independent perspective must review any Level 2 or above system before it goes into production.

 

The Committee's First Three Deliverables

In the first 60 days, the committee should produce three working documents. Not a vision statement. Not a strategy framework. These three things.

 

  1. An approved AI tools list. Which tools are approved, for what purposes, and under what conditions. Includes a process for requesting approval for new tools, with a target turnaround of five days for low-risk tools.

  2. A data off-limits rule. A single page, in plain language, listing what categories of data must never be put into any AI tool without explicit authorization. At a minimum: customer personal data, employee personal data, non-public financial information, and anything covered by a non-disclosure agreement. Every employee should read and acknowledge it.

  3. An AI acceptable use policy. What AI can and cannot be used for in your organization. Three minimum requirements: AI-generated content must be reviewed by a person before it goes external; AI must not make final hiring, dismissal, or credit decisions without human review; any person who is affected by an AI-driven decision must have a clear path to request a human review.

 

HOW THE MONTHLY COMMITTEE MEETING RUNS

  • Item 1: Risk dashboard review (20 minutes). The AI Oversight Lead presents the current state: systems by tier, any new additions, monitoring status for Critical systems. Data first, discussion second.

  • Item 2: Incident and near-miss review (15 minutes). Any issues since the last meeting. Root cause if known. Zero incidents should be noted explicitly, not skipped.

  • Item 3: New system reviews (30 minutes). System Owners present any system seeking Level 2 or above approval. Standard format: what it does, who it affects, risk score, controls in place, monitoring plan. The committee votes. The decision is written down.

  • Item 4: Policy items (15 minutes). Any proposed changes to policy or governance standards.

  • Item 5: Open items (10 minutes). Shadow AI reports, vendor concerns, regulatory changes that need a response.


Every meeting produces a decision record, not traditional minutes. Decision records capture what was decided, by whom, on what basis, and what the next action is. Circulated to all members within 24 hours.

 

STEP 05   Deal With the AI Your Staff Are Already Using Without Approval


This risk is not theoretical. It is happening right now in your organization. Your employees are using AI tools that IT does not know about, that have no vendor contract, and that carry none of the controls you are putting in place.


When employees are asked, in private, what AI tools they use at work, the answer is consistently and significantly higher than what IT has recorded. This is not usually malicious. It is practical: people find tools that help them work better, and they use them. The problem is what they put into those tools.

 

Four Specific Risks This Creates

  • Data leaving the building. An employee pastes customer records into a public AI tool to analyze trends. That data is now being processed by a third-party service with unknown data retention practices. Depending on your legal obligations, this may be a reportable breach.

  • Decisions with no audit trail. A manager uses an AI tool to help assess job applications. The outputs shape hiring decisions. There is no record, no test for fairness, and no accountability if the outcomes turn out to be discriminatory.

  • Intellectual property exposure. A developer puts proprietary source code into an AI assistant to help debug it. That code may be retained by the vendor or used to train future models. Your trade secrets become someone else's training data.

  • Wrong information used as fact. AI-generated content in a customer response, a financial summary, or a legal briefing is treated as accurate without anyone checking it. When it turns out to be wrong, the organization is responsible for the consequences.

 

Why Banning It Does Not Work

The instinct when you discover Shadow AI is to prohibit it. That does not work. Prohibition drives the problem underground, removes whatever visibility you currently have, and creates an adversarial relationship with the employees who are often your most productive people.


The goal is structured enablement. Make approved tools easy to find and easy to use. Make the rules about data clear enough that people can follow them without a legal degree. Make the approval process for new tools fast enough that people do not need to go around it.

 

Five Actions to Take

  1. Use your system logs to find what is already in use. Ask your IT or security team for a 90-day report of AI-related domains accessed by your staff: ChatGPT, Claude, Gemini, Microsoft Copilot, Midjourney, Perplexity, Grammarly, Jasper, and any AI API endpoints. This tells you what is happening without asking anyone. For smaller organizations without this capability, a targeted message to department heads is a reasonable alternative. Avoid sending blanket surveys to all staff in organizations of more than 200 people.

  2. Publish your approved tools list immediately. Even a draft version. Staff who know an approved list exists will use it. Staff who see nothing will fill the gap themselves.

  3. Issue a one-page data off-limits rule. Three categories, in plain language: customer personal data, employee personal data, and anything confidential or under NDA. None of these should go into any AI tool without explicit authorization. Post it. Email it. Confirm it has been read.

  4. Create a fast-track approval process for new tools. A five-day lightweight review for low-risk tools: legal checks the terms of service, IT reviews the data handling, the AI Oversight Lead approves. If people can get a tool approved quickly, they have no reason to go around the process.

  5. Treat first incidents as learning opportunities. The employee who put customer data into a public AI tool is almost certainly not trying to cause a problem. They are trying to do their job well. Correct it, explain why it matters, update the guidance if it was unclear. Reserve formal action for repeated or deliberate violations.
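The log review in Action 1 can be sketched as a simple filter over a proxy log export. The log format (timestamp, user, domain) and the domain list below are illustrative assumptions; adapt both to whatever your proxy or firewall actually produces.

```python
# Sketch of Action 1: scan a proxy log export for known AI-related domains.
# The CSV layout and domain list are hypothetical examples, not a standard.
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "midjourney.com", "perplexity.ai",
    "grammarly.com", "jasper.ai", "api.openai.com", "api.anthropic.com",
}

def ai_domain_hits(log_lines):
    """Count visits per AI domain from 'timestamp,user,domain' lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.strip().split(",")
        if len(parts) == 3 and parts[2] in AI_DOMAINS:
            hits[parts[2]] += 1
    return hits

sample = [
    "2025-01-10T09:14,alice,claude.ai",
    "2025-01-10T09:20,bob,intranet.example.com",
    "2025-01-10T11:02,carol,claude.ai",
    "2025-01-11T08:45,alice,api.openai.com",
]
print(ai_domain_hits(sample))
# Counter({'claude.ai': 2, 'api.openai.com': 1})
```

The output is your starting list of unrecognized tools for the inventory; any domain that shows traffic but is not on your approved list gets a follow-up conversation, not an accusation.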

 

SHADOW AI DETECTION CHECKLIST

  • IT and security proxy log report pulled for AI-related domains, covering the last 90 days

  • Unrecognized tools from the logs added to the inventory list for follow-up

  • Targeted conversations with function heads completed to understand how tools are being used

  • Approved tools list published, even as a working draft

  • Data off-limits rules sent to all staff in plain language

  • Fast-track approval process for new tools created and announced

  • First-incident policy confirmed: educate, not punish

 

Your 90-Day Plan

Do these steps in order. Do not move to quarterly reviews before you have an inventory. Do not set up the committee before you have named an AI Oversight Lead.

 

Days 1 to 30: Lay the Foundation


☐    Week 1. Run the AI inventory. Use the five structured sources: procurement data, IT asset management, cloud platform audit, security and proxy logs, and targeted function-head interviews. Match your approach to your organization's size, following the table in Step 1.

☐    Week 1. Name your AI Oversight Lead. Announce it formally. Update their job description or terms of reference to include this responsibility.

☐    Week 2. Score every system in your inventory. Impact multiplied by Likelihood. Assign each system a tier: Critical, High, Standard, or Minimal.

☐    Week 2. Name a System Owner for every High and Critical system. One person. First and last name.

☐    Week 3. Start your AI Risk Register. Begin with Critical and High systems. Complete all ten columns for each.

☐    Week 3. Send the data off-limits rules. One email. One page. Ask for acknowledgement.

☐    Week 4. Confirm your committee roster. Name all eight seats. Confirm the chair. Put the first six monthly meeting dates in everyone's diary.

☐    Week 4. Publish your approved AI tools list. A draft is fine. State clearly that it will be updated monthly.
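The Week 2 scoring step reduces to a small calculation: Impact multiplied by Likelihood, each on a 1-to-5 scale, mapped to a tier. The thresholds in this sketch are illustrative assumptions, not prescribed values; use whatever cut-offs your Step 2 exercise settles on.

```python
# Sketch of the Week 2 scoring step: score = Impact x Likelihood.
# The 1-5 scales and tier cut-offs below are assumed for illustration.
def risk_tier(impact: int, likelihood: int) -> tuple:
    """Return (score, tier) for a system, given 1-5 impact and likelihood."""
    score = impact * likelihood
    if score >= 20:
        tier = "Critical"
    elif score >= 12:
        tier = "High"
    elif score >= 6:
        tier = "Standard"
    else:
        tier = "Minimal"
    return score, tier

print(risk_tier(5, 4))  # (20, 'Critical')
print(risk_tier(3, 2))  # (6, 'Standard')
```

Whatever cut-offs you choose, write them down once and apply them to every system; the consistency matters more than the exact numbers.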

 

Days 31 to 60: Start Governing


☐    Draft the committee charter. Purpose, membership, decision authority by system level, quorum rules, meeting schedule, record-keeping requirements, and the escalation path to the board. Ratify it at the first committee meeting.

☐    Hold the first committee meeting. Review the inventory. Ratify the charter. Identify the top three risks and assign owners. Produce and circulate a decision record within 24 hours.

☐    Complete the Risk Register for Standard systems. A simplified six-column version covering name, risk, score, controls, owner, and review date is sufficient at this tier.

☐    Write the three core policies. Approved tools list, data off-limits rules, and acceptable use policy. Each should be under 500 words. Have legal review them. Publish.

☐    Write your first incident response plan. Choose your highest-risk system. Document who gets called, in what order, under what conditions the system is paused, and who has the authority to restart it.
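The simplified six-column register entry for Standard-tier systems is small enough to keep in a spreadsheet, but its shape can be pinned down as a record. The field names mirror the columns listed above; the example values are hypothetical.

```python
# Sketch of the simplified six-column register entry for Standard-tier
# systems. All example values below are hypothetical.
from dataclasses import dataclass, asdict

@dataclass
class StandardRegisterEntry:
    name: str          # system name
    risk: str          # one-line risk description
    score: int         # impact x likelihood
    controls: str      # mitigations currently in place
    owner: str         # named individual, first and last name
    review_date: str   # next scheduled review, ISO format

entry = StandardRegisterEntry(
    name="Meeting transcription tool",
    risk="Sensitive discussion content retained by vendor",
    score=8,
    controls="Retention disabled; restricted to internal meetings",
    owner="Jane Doe",
    review_date="2025-09-30",
)
print(asdict(entry)["owner"])  # Jane Doe
```

The point of fixing the shape is that every row answers the same six questions, so the quarterly review becomes a scan down columns rather than a fresh investigation per system.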

 

Days 61 to 90: Embed the Program


☐    Run your first quarterly review. All System Owners present. Every High and Standard system reviewed. Risk scores updated. Confirm that ownership is still correct. Add any new systems found in the past 60 days.

☐    The committee reviews its first Level 2 or above system. Use the standard process. Produce a decision record. This is your practice run. Use it to learn the process before a genuinely high-stakes decision lands.

☐    Brief leadership and the board. One page: how many AI systems are running, breakdown by risk tier, top three risks and what is being done about each. This becomes your standard quarterly board briefing going forward.

☐    Close the Shadow AI gap. Confirm the proxy log review is complete, function-head interviews are done, the approved tools list has been updated, and anyone using an unapproved tool has been directed to the right alternative or has submitted a request.

☐    Set the annual review date. Full inventory refresh, policy review, and staff training. In the diary now.

 

WHAT A GOOD PROGRAM LOOKS LIKE AT 90 DAYS

  • You have a complete AI inventory, built from structured sources rather than surveys.

  • Every system has a risk score, a tier, and a named owner.

  • Every High and Critical system has a Risk Register entry with documented controls and a review date.

  • The governance committee has met twice and produced two written decision records.

  • The committee charter has been ratified and every member understands their role.

  • Your staff know what data they cannot put into AI tools.

  • An approved tools list exists and there is a process for adding to it.

  • Leadership has seen a summary of your AI risk position.

  • At least one Level 2 or above system has been through a formal committee review.

  • None of this required a new hire, a specialist platform, or an expensive consultant.

 

The Bottom Line

AI governance is not a technology project. It is a management decision. The organizations that do it well are not the most technically sophisticated. They are the ones that decided to take it seriously and then followed through.


The committee is not an overhead. It is the structure that gives everything else in this guide its authority. Without it, the Risk Register is a spreadsheet no one acts on. The policies are documents no one enforces. The inventory is a list no one updates.


Start with the inventory. Set up the committee. Everything else follows from those two decisions.


Sources: NIST AI RMF 1.0, ISO/IEC 42001:2023, ISO/IEC 23894:2023, IIA Three Lines of Defense, and CSA AI Organizational Responsibilities

 
 
 
