AI Governance Readiness Scorecard: 5-Minute Self-Assessment

Faiz Ahmad

CEO @ 1% AI Fund | Governance Researcher


What the scorecard measures

Fifteen questions across five dimensions that determine whether your AI governance would survive a real audit, procurement review, or customer compliance assessment.

1. REGULATORY AWARENESS — Do you know what the law actually requires?

Whether your team can identify high-risk AI systems under applicable regulations, whether you have mapped your AI portfolio to the EU AI Act, NIST AI RMF, or ISO 42001, and whether someone owns tracking regulatory changes.

2. RISK CLASSIFICATION AND ASSESSMENT — Can you categorize and measure the risks your AI systems create?

Whether you have a documented taxonomy for AI risks, whether you run quantitative risk assessments before deployment (not just qualitative checklists), and whether you can produce a model card for any production system.
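For teams unsure what "produce a model card" entails in practice, a minimal one is just structured documentation of intended use, data, evaluation, and limitations. The sketch below is illustrative only: the field names loosely follow the widely used model-card pattern, and every value (model name, metrics, dates) is hypothetical, not part of the scorecard.

```python
# Illustrative minimal model card as a plain dictionary. All values are
# hypothetical; adapt the fields to your own template.
model_card = {
    "model_name": "credit_risk_clf_v3",  # hypothetical system
    "version": "3.1.0",
    "intended_use": "Pre-screening of consumer credit applications",
    "out_of_scope_uses": ["Automated final lending decisions"],
    "training_data": "Internal applications, 2019-2023, de-identified",
    "evaluation": {"auc": 0.81, "demographic_parity_gap": 0.04},
    "known_limitations": ["Performance degrades for thin-file applicants"],
    "owner": "ml-platform-team",
    "last_reviewed": "2025-11-01",
}

# The kind of completeness check an audit-prep script might run:
required = {"intended_use", "training_data", "evaluation",
            "known_limitations", "owner"}
missing = required - model_card.keys()
assert not missing, f"model card incomplete: {missing}"
```

Even a check this simple makes "can you produce a model card for any production system" a yes/no question rather than a scramble.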

3. TECHNICAL GOVERNANCE INFRASTRUCTURE — Is governance built into your pipeline, or bolted on after the fact?

Whether automated monitoring for drift, bias, and performance degradation exists, whether you have version control and audit trails across training data and model versions, and whether an incident response process is documented.
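At its simplest, "automated monitoring for drift" can start with a population stability index (PSI) check between a training sample and live data. The sketch below is a generic illustration under assumed names and thresholds (the common rule of thumb that PSI above roughly 0.25 signals major shift), not a prescribed implementation.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Rough rule of thumb: ~0.1 = minor shift, ~0.25+ = major shift."""
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the reference (training) sample
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training = [0.1 * i for i in range(100)]        # stand-in training feature
live_ok = [0.1 * i + 0.05 for i in range(100)]  # similar distribution
live_bad = [0.1 * i + 5.0 for i in range(100)]  # shifted distribution
assert psi(training, live_ok) < psi(training, live_bad)
```

Production systems layer alerting, per-feature tracking, and bias metrics on top, but the scorecard question is whether any such automated signal exists at all.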

4. ORGANIZATIONAL READINESS — Do you have the people, roles, and authority to govern AI?

Whether AI governance is explicitly someone's job (not bolted onto an existing role), whether that function has authority to block risky deployments, and whether engineering and compliance have a structured channel for AI risk.

5. DOCUMENTATION AND AUDIT PREPAREDNESS — If someone audited your AI systems tomorrow, what would they find?

Whether your documentation would satisfy an external auditor, whether you can demonstrate the decision-making behind your AI design choices, and whether you have run any internal or external AI audit in the past 12 months.

Score each statement from 1 (not started) to 5 (fully operational), then add the scores. Your total places you in one of four readiness levels: Starting, Aware, Building, or Leading.
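The scoring arithmetic can be sketched in a few lines. Note that the band cutoffs below (30/45/60) are illustrative assumptions chosen for the example, not the scorecard's published thresholds:

```python
def readiness_level(scores):
    """Map fifteen 1-5 scores to a total out of 75 and a readiness band.
    Band cutoffs are assumed for illustration, not the official ones."""
    if len(scores) != 15 or any(s not in range(1, 6) for s in scores):
        raise ValueError("expected fifteen scores, each between 1 and 5")
    total = sum(scores)  # ranges from 15 to 75
    if total < 30:
        level = "Starting"
    elif total < 45:
        level = "Aware"
    elif total < 60:
        level = "Building"
    else:
        level = "Leading"
    return total, level

print(readiness_level([3] * 15))  # scoring 3 on every statement totals 45
```

The point of the banding is that a single honest pass produces both a headline number and, via the per-dimension subtotals, a ranked list of where to start.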

Who this scorecard is for

This scorecard is built for people who already have AI in production (or are about to) and who are being asked hard questions about governance by someone else.

YOU SHOULD SCORE YOUR ORGANIZATION IF:

- You are a CTO, VP of Engineering, or Head of AI and your board has started asking about AI governance

- You are a risk, compliance, or legal leader being asked to "assess" AI systems you did not build

- You are an engineer or data science manager trying to figure out what "AI governance" even means in practice

- You are a product or platform leader whose enterprise customers are starting to ask for AI documentation

YOU CAN SKIP IT IF:

- Your organization has already passed a formal AI governance audit in the last 12 months

- You have a full-time AI governance team with a published framework already in operation

- You do not deploy AI and are unlikely to in the next 12 months

One honest note: most people who take this scorecard score between 30 and 45 out of 75. That is not a failure. It is the current market. The difference between teams at 35 and teams at 55 is about six months of focused governance work, not a decade of maturity.

What you will know in 5 minutes

By the time you finish the 15 questions, you will have a defensible, specific answer to questions you are currently answering with hand-waving:

"HOW READY ARE WE FOR THE EU AI ACT?"

A score out of 75, with a breakdown showing exactly which dimensions are strongest and which are weakest.

"WHERE SHOULD WE START?"

The lowest-scoring dimension is your highest-leverage starting point. The scorecard tells you which one, not in abstract terms but in specific capability gaps.

"ARE WE AS BAD AS I THINK?"

The four readiness levels show where your organization sits relative to the broader market. Most teams score Starting or Aware. A small number score Building. Very few score Leading.

"WHAT WOULD AN AUDITOR ACTUALLY ASK?"

Every question in the scorecard is a question an auditor, procurement team, or enterprise customer is currently asking AI-deploying organizations. Seeing the questions is itself an education.

Who built this

Dr. Faiz Ahmad is an AI safety researcher and governance advisor. His current research on certified machine unlearning, fairness under erasure, and formal limits of AI remediation has been submitted to NeurIPS and IEEE Transactions on AI.

He has trained 300+ professionals at top research institutions including Penn State (Engineering Science & Mechanics department-wide faculty pilot on AI workflows), Air University, Quaid-i-Azam University, LUMS, and COMSATS. He advises AI startups and enterprises on production governance frameworks.

PhD from Pennsylvania State University. Postdoctoral research at University of Delaware and Penn State. This scorecard draws from five years of applied AI governance work, reviewed against NIST AI RMF, EU AI Act Article 13 requirements, and what actually gets flagged in enterprise procurement reviews. If governance is on your roadmap for 2026, the scorecard is the first five minutes of that work. Download it and score honestly.

Free

15 questions, 5 minutes. Find out if your AI governance would survive an audit, an EU AI Act review, or a customer compliance assessment.