AI governance in 2026: what every org actually needs
The EU AI Act is in force. The US NIST AI Risk Management Framework is the de facto standard for federal contracts. SOC 2 Type II now requires a documented AI posture. This checklist is the minimum: 16 items across policy, risk, operational controls, and people that an org with 50+ employees shipping AI in production should have.
Four pillars
Policy & scope
A written acceptable-use policy. An approved-tool inventory. An exception path (because people will experiment regardless; give them a safe lane). A public-facing AI-use disclosure if your regulator requires one. These four items together catch 80% of the "someone leaked data to ChatGPT" failure modes.
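The inventory-plus-exception-path pattern can be sketched in a few lines. This is a minimal illustration, not a recommended implementation; the tool names, fields, and messages are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tool:
    name: str
    approved: bool
    exception_owner: Optional[str] = None  # who signed off, if used via an exception

# Hypothetical approved-tool inventory; entries are illustrative.
INVENTORY = {
    "enterprise-llm": Tool("enterprise-llm", approved=True),
    "shadow-gpt-wrapper": Tool("shadow-gpt-wrapper", approved=False),
}

def check_tool(name: str) -> str:
    """Route a tool request: allow, allow via exception, or block with a safe lane."""
    tool = INVENTORY.get(name)
    if tool is None:
        return "blocked: not in inventory; file an exception request"
    if tool.approved:
        return "allowed"
    if tool.exception_owner:
        return f"allowed via exception (owner: {tool.exception_owner})"
    return "blocked: file an exception request"
```

The point of the third branch is the "safe lane": unapproved tools get a named owner and a paper trail instead of silent shadow use.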
Risk management (NIST AI RMF)
A risk register with classification (prohibited / high / limited / minimal per EU AI Act), DPIAs for AI touching personal data, a model card per deployed system, and a quarterly red-team review of high-risk systems. The classification step is load-bearing: it tells you which items need formal DPIAs versus lighter review.
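The way classification drives review requirements can be made concrete. A rough sketch, assuming the four EU AI Act tiers above and the review obligations this checklist names (field names and the decision logic are illustrative, not a standard schema):

```python
TIERS = ("prohibited", "high", "limited", "minimal")

def review_requirements(tier: str, touches_personal_data: bool) -> list:
    """Map a risk-register entry to the reviews this checklist calls for."""
    if tier not in TIERS:
        raise ValueError("unknown tier: %s" % tier)
    if tier == "prohibited":
        return ["do not deploy"]
    reqs = ["model card"]                      # every deployed system
    if touches_personal_data:
        reqs.append("DPIA")                    # AI touching personal data
    if tier == "high":
        reqs.append("conformity assessment")   # high-risk, per EU AI Act
        reqs.append("quarterly red-team review")
    return reqs
```

A minimal-tier internal tool with no personal data gets only a model card; a high-risk system touching personal data gets the full stack. That asymmetry is what makes the classification step load-bearing.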
Operational controls
SSO and MFA on every AI tool (consumer accounts banned for work data). Audit logs retained 12+ months. Data-loss prevention on AI inputs. Vendor security reviews (SOC 2 Type II, DPA, sub-processor list, incident disclosure SLA). A named incident-response playbook for AI incidents.
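The DLP-on-AI-inputs control usually means screening prompts before they leave the network. A toy sketch of the idea; real deployments would use a dedicated DLP service, and these regex patterns are deliberately crude illustrations:

```python
import re

# Illustrative PII patterns only; production DLP needs far more coverage.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN shape
    re.compile(r"\b\d{13,16}\b"),            # likely payment card number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email address
]

def dlp_check(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to an AI tool."""
    return not any(p.search(prompt) for p in PII_PATTERNS)
```

The gate sits in front of the AI tool, and blocked prompts should feed the same audit logs the previous paragraph calls for.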
People & training
Annual AI-literacy training for all staff (30-60 min). Specialized training for engineers and data teams (prompt injection, evals, caching, routing). Code-of-conduct addendum on AI-generated content (disclosure in client work, copyright, academic integrity for edtech).
Frameworks to align with
- EU AI Act: Risk classification + conformity assessment for high-risk systems.
- NIST AI RMF: Govern / Map / Measure / Manage lifecycle. De facto US standard.
- ISO/IEC 42001: AI management system standard. Pairs with ISO 27001 if you already have it.
- SOC 2 Type II AI addendum: AICPA added AI criteria in 2025. Auditors now ask.
- OWASP LLM Top 10: Technical security baseline for LLM apps.
Start small, mature over 12 months
A small startup doesn't need ISO 42001 certification in month one. It needs a written AUP, an approved-tool list, DPAs with Anthropic/OpenAI/Google, and an incident playbook. Work up from there. Enterprise can target ISO 42001 over 12-18 months using this checklist as a gap analysis.
Governance is a revenue enabler
Enterprise buyers in 2026 ask AI-governance questions in the first security questionnaire. Not having answers costs deals. Having a documented AUP, DPAs, and a model registry turns a 6-week security review into a 1-week review. That's a growth lever, not a cost.
- AI Adoption Roadmap: Where to slot governance into the 90-day plan.
- Enterprise AI Security Checklist: The operational controls side of governance.
- AI Readiness Assessment: Tier your org before adopting this checklist.
- AI Product Launch Planner: Governance items in a product launch.