
AI · Diagnosis

AI Readiness: 10 questions to know if you're ready

This isn't a marketing quiz. These are the ten questions I ask in a first serious assessment. Answer no to four or more and you're not ready: bringing AI in now will be an expensive way to burn money and people.

Published on May 10, 2026 · 9 min read · By Adán Mejías

"AI Readiness" is the latest excuse for corporate frameworks. There are matrices with 70 dimensions, three-color scorecards, 90-question surveys. Almost all of it is noise. The reality is that in my assessments with companies in banking, pharma, energy and fintech, ten questions predict 90% of success or failure. If you answer honestly, you'll know in an hour whether your organization is ready or whether bringing AI in today means handing money to a vendor.

Here are the ten, in no particular order, because they all carry weight.

1. Who is the specific executive owner of the AI program?

Not "the executive committee supports it". A specific person, with a name, with dedicated time, with a signed budget and with the authority to say yes or no to initiatives. If you don't have one, you're not ready. AI cuts across departments; without an owner, it ends in political paralysis.

How to diagnose

Ask three executives who's responsible for AI in the company. If the three answers don't match, there's no owner. "The CIO" doesn't count if the CIO has seven other priorities at the same level.

2. Are there three identified, prioritized and signed-off use cases?

Concrete cases. Not "improving customer service" but "reducing first-response time on tier-1 tickets by 40%". Three, because one alone is fragile and ten is a loss of focus. And signed off means a functional owner has agreed to defend them.

3. Is data accessible, not perfect?

I'm not asking if you have a pristine data warehouse. I'm asking whether the people who'll work with AI can access the data they need to execute the use cases, in less than a week, without opening tickets that take months. Data perfection comes later; what kills you is inaccessibility.

Accessibility matters more than perfection. A decent data point within reach is worth more than a perfect data point locked behind policy.

4. Is a basic governance and security framework agreed?

Three minimums: what information can leave for external models, who approves a new AI use case, and how usage is monitored. You don't need an 80-page policy. You need three pages the leadership committee has read and signed. If this isn't in place, any pilot touching sensitive data will stall halfway.

5. Has the organization completed at least one significant digital project in the last two years?

This question predicts a lot. A company that hasn't been able to roll out Salesforce, Workday, Office 365 or a new ERP in the last two years isn't going to roll out AI. Change capacity is a muscle. If it's atrophied, the new thing falls on its own.

The flag case

In my time consulting for an industrial company, I discovered they had spent three years trying to renew their document management system. They had failed twice. When they asked me to "implement AI", the honest answer was: fix your change capacity first. AI on top of an inability to change is rain on parched ground.

6. Is there someone internal with AI technical judgment, not just curiosity?

One person, even half-time, who understands what a language model does, what an embedding is, how RAG works, what fine-tuning means, what each API call costs, and what the risks are. Doesn't need a PhD. Does need to be able to argue eye-to-eye with a vendor without swallowing the whole pitch. Without that figure, you'll be sold whatever the vendor wants to sell.

7. Can AI investment decisions be made in less than six weeks?

If your investment approval cycle is six months, you're not ready for AI. The pace of the field is such that six-month decisions land in a different world from the one that requested them. You need a parallel mechanism, agile, with a pre-authorized investment ceiling for AI initiatives. If this doesn't exist, everything else jams.

8. Is there an honest feedback channel from the front line?

The people using AI every day are the front line. If their feedback doesn't reach decision-makers, pilots are designed in the abstract and fail in the concrete. You need a real channel (not a half-yearly survey) where a support person can say "this doesn't work, here's why" and have it reach the sponsor in less than 48 hours.

9. Does the culture tolerate failures without hunting for blame?

AI involves trying and failing. Models hallucinate, pilots crash, costs slip. If your culture punishes failure with drama, people will hide problems until they can't be hidden, at which point it'll be much worse. The question isn't whether you fake tolerance for failure: the question is whether, the last time something genuinely failed, heads rolled or lessons were learned.

10. Is there a learning budget, not just an implementation budget?

The implementation budget covers the project. The learning budget covers what you'll discover doesn't work, the maintenance months, the continuous training, the experiments that never reach production. In AI projects, this budget is usually 30% to 50% of the implementation budget. If you hide it to look cheap, you'll pay for it midway, and in a worse mood.

How to read your answers

If you answer yes to all ten with honesty, you're in better shape than 80% of companies. If you answer yes to seven to nine, you're ready, but fix the weaknesses before scaling. If you answer yes to four to six, don't scale yet: do a small pilot whose main job is to build the missing pieces. If you answer yes to three or fewer, it's not AI time yet: invest first in governance, accessible data and change capacity.
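If it helps to keep the scoring honest, the tiers above fit in a few lines of code. A minimal sketch (the function name and wording of the labels are mine; the thresholds come straight from the text):

```python
def readiness_verdict(yes_count: int) -> str:
    """Map the number of honest 'yes' answers (0-10) to a readiness tier."""
    if not 0 <= yes_count <= 10:
        raise ValueError("yes_count must be between 0 and 10")
    if yes_count == 10:
        return "ready: better shape than 80% of companies"
    if yes_count >= 7:
        return "ready, but fix the weaknesses before scaling"
    if yes_count >= 4:
        return "don't scale yet: run a small pilot to build the missing pieces"
    return "not AI time yet: invest in governance, accessible data and change capacity"

# Example: seven honest yeses
print(readiness_verdict(7))  # → ready, but fix the weaknesses before scaling
```

The point of writing it down this way is the strict cutoffs: "it's in progress" doesn't increment the counter.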

The optimist's bias

The most common bias is answering yes to almost everything because "it's in progress". "It's in progress" isn't the same as "we have it". Apply a strict criterion: if you can't prove it with a document or a specific person, it's a no.

The mistake I see most often

The mistake I see most often is jumping to a pilot before answering these questions. The urgency of "not falling behind" pushes people to start without diagnosis, and nine months in they realize the pilot is stuck — not because of the technology, but because of one of the ten pieces missing from the start. The cost of doing the assessment is low; the cost of skipping it is between 50,000 and 300,000 euros of painful learning.

The rule I apply: before committing the first AI license or the first vendor contract, spend two weeks answering these ten questions with a small mixed group. The investment is small, the savings on errors are huge, and the conversation it produces is worth it on its own.

AI readiness isn't a matrix; it's an honest conversation with yourself and your organization. Ten well-asked questions give more information than ten frameworks applied half-heartedly. And the discipline of answering them truthfully, before starting, makes the difference between those who'll be five years ahead and those who'll still be talking about pilots in 2030.

Found this useful?

Book a free 15-min assessment. I'll send you a personalized guide afterwards.
