Most boards are not yet asking the right questions about artificial intelligence. They either ask none at all - delegating the topic entirely to management - or ask surface-level questions about which tools the company is using. Neither approach constitutes governance. AI is a strategic capability with material risk implications. It requires the same fiduciary rigor boards apply to financial oversight, cybersecurity, and executive compensation.
Here are seven questions that move board AI oversight from performative to substantive.
1. Is our AI strategy aligned with our enterprise strategy - or running parallel to it?
AI initiatives that exist as standalone innovation projects almost always fail to scale. The board should ask whether each AI investment can be traced directly to a strategic objective. If it cannot, the initiative is a science experiment, not a business capability.
2. Do we own our data infrastructure, or are we building on rented foundations?
AI is only as reliable as the data it consumes. Boards should understand whether the organization controls its data pipeline - or whether critical data sits inside vendor platforms, partner systems, or legacy architectures that limit access and portability.
3. What is our actual exposure if an AI system produces a harmful or biased output?
Boards need a clear answer to the liability question. Not a theoretical answer. A specific one: which AI systems touch customer-facing decisions, what safeguards exist, and what is the financial and reputational exposure if a model fails?
4. What is our workforce displacement plan?
AI will automate tasks, reconfigure roles, and eliminate some positions. Boards should ask whether management has a credible workforce transition plan - including retraining, redeployment, and a communication strategy - or whether it is deploying AI first and managing the human consequences later.
5. How concentrated is our vendor dependency?
If a single AI vendor provides the models, the infrastructure, and the analytics layer, the organization has a concentration risk that the board should evaluate. The question is not whether to use external vendors. It is whether the organization retains enough internal capability and contractual flexibility to switch providers if needed.
6. How are we measuring AI ROI - and who controls the metrics?
If the team deploying AI is also the team reporting its performance, the board has an objectivity problem. Ask for an independent measurement framework with clear baselines, defined success criteria, and a kill switch for initiatives that do not produce measurable returns within a defined timeframe.
7. Are we compliant with current and emerging AI regulation in every jurisdiction where we operate?
The regulatory landscape for AI is moving faster than most organizations realize. The EU AI Act, emerging US state-level legislation, and sector-specific requirements create compliance obligations that boards must monitor proactively - not reactively after an enforcement action.
These seven questions will not make a board an AI expert. That is not the board's job. But they will ensure that AI governance receives the same structured oversight that boards apply to financial risk, human capital, and strategic planning. That is the board's job.