We launched Molecule One a few months ago. Small team, handful of customers, everything still being figured out. One thing we did not have to figure out was where the hard problem would be. It was not the technology. It was not picking the right AI model. It was getting a room full of procurement professionals, people who have been doing this work for years and doing it well, to actually change how they operate.
We are writing this while we are still early because we think there is value in sharing what we are learning in real time rather than packaging it up after the fact. We do not have hundreds of engagements behind us. What we do have is direct, hands-on experience running AI adoption and upskilling programmes with procurement teams, and a clear view of what is working and what is not.
The thing that surprised us most: procurement teams do not resist AI because they think it will not work. They resist it because the news cycle makes it feel threatening, leadership mandates make it feel punitive, and nobody has shown them what it actually looks like in their day-to-day work. The real barrier is not technical. It is emotional.
Here is what we are seeing on the ground.
Why most AI training programmes fail in procurement
Before we get into what works, it helps to understand why the default approach (a training session, a mandate, and a deadline) keeps failing.
Procurement professionals are not slow adopters. They are careful ones. Their careers depend on accuracy, compliance, and risk management. Every output they produce has a consequence: a contract that binds the organisation, a supplier relationship that took years to build, a spend decision that shows up in the next audit. When you ask them to use a tool that generates text they did not write, you are asking them to take on a new kind of professional risk. That is a reasonable thing to be cautious about.
The teams that recognise this build adoption programmes around trust first. The teams that skip it build training decks and wonder why nobody logs in after the first week. We wrote about this pattern in why most procurement AI projects fail. The change management gap shows up again and again.
Start with hands-on workshops, not training decks
The first thing we do with every procurement team is run a hands-on workshop. No delivery pressure, no KPIs attached, no performance reviews in the room. Just the team, the tools, and space to experiment.
Procurement professionals are used to every action having a consequence. A low-pressure workshop breaks that pattern and lets people see what AI actually does, rather than what the headlines say it does.
We structure these workshops around a mix of professional and personal use cases. Alongside drafting an RFP section or analysing a supplier contract, we ask people to analyse the financials of a company they personally invest in, or to write a short article on something they care about outside work. The personal use cases are deliberate. They lower the guard. When someone sees AI help them with something personal, the reaction is different from watching a demo on test data.
Three variables determine whether a workshop succeeds: who is in the room (mix seniority levels and functions), how the session is structured (guided exploration, not passive demonstration), and whether people leave with something they built themselves. If someone walks out of a 90-minute session having produced a draft RFP, a supplier comparison, or a contract risk summary, they are far more likely to open the tool again the following week.
For teams that want to see what AI can do across procurement workflows before running a workshop, our AI for Procurement Teams playbook walks through specific use cases with prompt templates and workspace configurations.
Soft mandates outperform hard deadlines
A blanket mandate with a compliance date is one of the fastest ways to increase resistance. We have seen it happen. Procurement teams are good at working around requirements they do not believe in. They have been doing it with ERP systems for decades.
What works instead is a soft mandate combined with visible recognition. Identify your early adopters (every team has them) and put them at the centre of the programme. Give them time to experiment. Celebrate how they are using AI publicly. Make their wins visible across the function.
People do not want to be told to use a new tool. They want to see someone they respect using it and getting results. Build that proof first. The rest tends to follow on its own.
Seed the early adopters
Identify and support 3-5 early adopters. Give them time, tools, and direct access to coaching. Let them find their own high-value use cases.
Amplify the results
Have early adopters share results with the wider team through show-and-tell sessions. Peer credibility does the heavy lifting.
Soft expectation
Set a gentle expectation: all team members try AI on at least one workflow within 30 days. By now, social proof makes this natural, not forced.
That sequence has worked better than any top-down mandate we have tried.
Run show-and-tell sessions regularly
Show-and-tell sessions do not get enough credit as a procurement AI upskilling tactic. Get practitioners to demonstrate what they are actually using AI for: drafting supplier communications, summarising RFP responses, tracking contract obligations, preparing for category reviews. Keep it concrete.
For teams that are slow to adopt, watching a colleague run a real task in three minutes instead of thirty changes something. It answers the question most people carry around but do not ask: what would I actually use this for?
We recommend running these every two weeks during the first 90 days. After that, monthly. The format is simple: 10 minutes, one person, one real workflow. No slides. Just a live demonstration on screen.
The sessions that work best follow a pattern. The presenter states the task, shows how long it used to take, runs it live with AI, and shares the output. The audience asks questions, and those questions are where the real value is. "Can it do that with our supplier data?" "What happens if the contract is in a different format?" "Would that work for our category review template?" Each question is a new use case the team discovers on its own.
Something we have noticed: the people who present become the strongest AI advocates on the team. Preparing a demo forces them to sharpen their workflow. The positive reaction from peers reinforces their own use. It compounds. Teams that keep this going outperform the ones that treat upskilling as a one-time event.
Build shared infrastructure: prompt libraries and context documents
AI is only as useful as the context it has access to. Most procurement AI adoption programmes skip this part entirely.
We work with procurement teams to build three things:
Prompt library
Vetted, tested approaches for supplier evaluation, contract review, spend analysis, RFP drafting, and negotiation preparation. Procurement-specific prompts refined through actual use on real tasks.
Skills library
Reusable configurations that handle specific procurement workflows end to end. The building blocks that turn a general-purpose AI tool into a procurement-trained assistant.
Context documents
Supplier relationship histories, internal policies, category strategies, evaluation criteria, approved clause libraries. This is where most of the value comes from.
Without context, AI produces generic outputs that require heavy editing. With it, the outputs are specific enough to use right away. These are not one-time documents. They are living assets that the team updates as workflows evolve, suppliers change, and policies get revised. The teams that treat them as shared infrastructure get 3-5x more value from AI than the teams that leave everyone to figure it out alone.
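A prompt library does not need special tooling to start; a version-controlled structured file that the whole team can edit is enough. Here is a minimal sketch in Python. The categories, field names, and prompt wording are illustrative assumptions, not a prescribed schema:

```python
# A minimal shared prompt library: one dict, kept in version control.
# Categories, fields, and wording are illustrative, not a prescribed schema.
PROMPT_LIBRARY = {
    "supplier_evaluation": {
        "prompt": (
            "You are assisting a procurement analyst. Using the supplier "
            "responses provided, score each supplier 1-5 per criterion and "
            "flag any missing information.\nCriteria: {criteria}"
        ),
        "context_needed": ["evaluation criteria", "supplier responses"],
        "owner": "category_lead",
    },
    "contract_review": {
        "prompt": (
            "Summarise the key obligations, renewal dates, and deviations "
            "from our approved clause library in this contract:\n{contract}"
        ),
        "context_needed": ["approved clause library", "contract text"],
        "owner": "legal_liaison",
    },
}

def get_prompt(task: str, **context) -> str:
    """Fill a library prompt with task-specific context."""
    return PROMPT_LIBRARY[task]["prompt"].format(**context)
```

The point is less the code than the habit: vetted prompts live in one shared place, each has a named owner, and each records what context it needs so nobody runs it cold.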
This is also where a structured AI readiness assessment helps. It identifies which parts of your data and documentation need work before AI can produce useful results.
Get the 2-page cheat sheet
Everything in this article condensed into a printable reference: the 3-phase approach, workshop design variables, show-and-tell format, shared infrastructure checklist, leadership do's and don'ts, measurement framework, and an 8-week quick-start timeline.
Capture context from non-traditional sources
The first tool we recommend to almost every procurement team is an AI meeting notes taker. It sounds minor. It is not.
Deploy a note-taker across procurement meetings: category reviews, supplier business reviews, internal planning sessions, stakeholder alignment calls. Save the summaries in a shared location. Feed those summaries into a centralised meeting context file. You now have a team-wide source of institutional knowledge that used to be locked in local folders, email threads, and people's heads.
AI can then analyse that context, surface patterns across conversations, and produce a weekly digest of what is moving across the function. We have seen this single change give procurement leaders more visibility into their team's activities than any reporting tool they had before.
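The plumbing for this is deliberately simple. Assuming the note-taker drops one dated markdown file per meeting into a shared folder (the paths and naming convention below are hypothetical), a short script can rebuild the team-wide context file and select last week's summaries for a digest prompt:

```python
from pathlib import Path
from datetime import date, timedelta

# Hypothetical layout: one summary per meeting, saved as a dated file,
# e.g. summaries/2024-06-03_supplier-review-acme.md
SUMMARY_DIR = Path("summaries")
CONTEXT_FILE = Path("meeting_context.md")

def rebuild_context_file() -> int:
    """Concatenate all meeting summaries into one team-wide context file."""
    parts = [f"## {f.stem}\n\n{f.read_text()}"
             for f in sorted(SUMMARY_DIR.glob("*.md"))]
    CONTEXT_FILE.write_text("\n\n".join(parts))
    return len(parts)

def summaries_for_week(today: date) -> list[Path]:
    """Select the last 7 days of summaries to feed an AI digest prompt."""
    cutoff = (today - timedelta(days=7)).isoformat()
    return [f for f in sorted(SUMMARY_DIR.glob("*.md"))
            if f.stem[:10] >= cutoff]
```

The weekly digest itself is then one prompt over `summaries_for_week(...)` in whatever AI tool the team already uses.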
The same applies to supplier emails. Structured capture of communication patterns over time becomes a data asset that most procurement teams do not realise they are sitting on. When that data is available to AI, it can flag relationship risks, spot communication bottlenecks, and surface negotiation leverage that would otherwise go unnoticed.
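Even a crude first pass over that data is useful. As a sketch, assuming you can export a log of when you wrote to a supplier and when they replied (the log below is invented for illustration), you can flag suppliers whose responses are trending slower than their own baseline, one of the simplest relationship-risk signals:

```python
from datetime import date
from statistics import mean

# Illustrative email log: (supplier, our_message_date, their_reply_date).
# In practice this would come from a mailbox or CRM export.
EMAIL_LOG = [
    ("Acme Metals", date(2024, 1, 8), date(2024, 1, 9)),
    ("Acme Metals", date(2024, 3, 4), date(2024, 3, 8)),
    ("Acme Metals", date(2024, 5, 6), date(2024, 5, 13)),
    ("Borealis Packaging", date(2024, 2, 5), date(2024, 2, 6)),
]

def response_days(log, supplier):
    """Reply times in days, in chronological order, for one supplier."""
    return [(reply - sent).days for s, sent, reply in log if s == supplier]

def flag_slowing_suppliers(log, threshold_days=3):
    """Flag suppliers whose latest reply is much slower than their average."""
    flagged = []
    for supplier in {s for s, _, _ in log}:
        days = response_days(log, supplier)
        if len(days) >= 2 and days[-1] - mean(days) >= threshold_days:
            flagged.append(supplier)
    return flagged
```

An AI layer on top of the same data can go further, reading the message content for tone and commitments, but the structured capture has to come first.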
Document and map every workflow
Well-documented procurement processes are a direct input to automation. Every workflow you map in detail (purchase requisition to order, supplier onboarding, contract renewal, invoice exception handling) is a candidate for AI-assisted or fully automated execution.
We do this one workflow at a time. The documentation itself surfaces inefficiencies people stopped noticing years ago. A contract renewal process with seven approval steps when it only needs three. A supplier onboarding workflow that collects the same information four times across different forms. An RFP process that starts from scratch every time because nobody can find the templates from the last round.
Once a workflow is documented with enough specificity (what triggers it, what data it needs, what decisions are involved, what outputs it produces) you can evaluate which steps AI can handle, which steps need human judgment, and where the handoffs should sit.
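That level of specificity is easy to capture as structured data. A minimal sketch, where the steps, triggers, and AI-role assignments for a contract renewal workflow are examples rather than a prescribed process:

```python
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    name: str
    trigger: str        # what starts this step
    inputs: list[str]   # data the step needs
    decision: str       # judgment involved, if any
    output: str         # what the step produces
    ai_role: str        # "automate", "assist", or "human"

# Illustrative mapping of a contract renewal workflow.
CONTRACT_RENEWAL = [
    WorkflowStep("Detect upcoming expiry", "90 days before end date",
                 ["contract register"], "none", "renewal alert", "automate"),
    WorkflowStep("Summarise current terms", "renewal alert",
                 ["contract text"], "none", "terms summary", "automate"),
    WorkflowStep("Assess supplier performance", "terms summary ready",
                 ["scorecards", "meeting notes"], "weigh qualitative signals",
                 "performance brief", "assist"),
    WorkflowStep("Decide renew / renegotiate / exit", "performance brief",
                 ["performance brief", "category strategy"],
                 "commercial judgment", "decision record", "human"),
]

def automation_candidates(workflow):
    """Steps AI could handle end to end under this mapping."""
    return [s.name for s in workflow if s.ai_role == "automate"]
```

Once a workflow exists in this form, the handoffs are explicit: each step's trigger is a previous step's output, and the `ai_role` field records the team's judgment about where automation stops and human decisions begin.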
This is also the foundation for measuring AI's impact. Without documented workflows and baseline metrics, you have no way to show whether AI is delivering value. Our ROI calculator can help model the expected value before you deploy, but the workflow documentation is what makes the measurement credible.
There is a benefit here that is easy to miss. The documentation process itself is a form of upskilling. When a procurement professional maps their own workflow in detail, covering every decision point, every exception, every handoff, they develop a much sharper sense of where AI fits and where it does not. It moves them from "I do not know what AI would do for me" to "I can see exactly which steps AI should handle." That is often where individual adoption starts.
Leaders have to go first
Nothing moves procurement AI adoption faster than a leader who visibly uses AI in their own work.
One practice we have implemented with several of our customers: leaders draft their team communications using AI and say so explicitly. An announcement email that notes "I drafted this in eight minutes using AI" does more for adoption than any internal training session. It shows the team that this is real, it is safe, and leadership is not asking for something they will not do themselves.
If a CPO or VP of Procurement mandates AI adoption but never uses it themselves, the team notices immediately. The mandate becomes another corporate initiative that everyone goes through the motions on. The leadership behaviour component is one of the strongest predictors of whether an AI programme gains traction or stalls.
Leaders should use every channel they have (team meetings, leadership calls, all-hands sessions, even Slack messages) to show specific examples of how they are using AI. Specific tasks, specific tools, specific outcomes. "I used Claude to prepare for yesterday's supplier review and it saved me 40 minutes" is worth more than a 30-slide AI strategy deck.
We have also seen effective leaders share their failures with AI openly. When a leader says "I tried using AI for this task and the output was not good enough, here is what I learned and what I would do differently," it normalises experimentation in a way that pure success stories cannot. Procurement teams need to know that trying AI and getting a bad result is fine. That signal has to come from the top.
How to measure whether your upskilling programme is working
Adoption is not binary. You need leading indicators that tell you whether the programme is gaining traction before the operational numbers (productivity, cycle time, cost savings) move.
The sequence matters: track adoption momentum first, then tie usage to operational metrics.
If you cannot connect AI usage to operational metrics after 90 days, the programme has a measurement problem, not an adoption problem. Our measurement framework guide covers how to set up this tracking from day one.
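The leading indicators can be computed from nothing more than the AI tool's usage export. A sketch, where the log, team size, and names are invented for illustration:

```python
from datetime import date

# Illustrative usage log: (user, date_of_ai_use).
# In practice this comes from the AI tool's admin analytics export.
USAGE_LOG = [
    ("dana", date(2024, 6, 3)), ("dana", date(2024, 6, 10)),
    ("femi", date(2024, 6, 10)), ("femi", date(2024, 6, 11)),
    ("femi", date(2024, 6, 12)), ("raj", date(2024, 6, 11)),
]
TEAM_SIZE = 8

def weekly_active_users(log, week_start: date) -> int:
    """Distinct users who used AI in the 7 days starting week_start."""
    return len({u for u, d in log if 0 <= (d - week_start).days < 7})

def adoption_rate(log, week_start: date, team_size: int) -> float:
    """Share of the team that used AI at least once that week."""
    return weekly_active_users(log, week_start) / team_size
```

Plot `adoption_rate` week over week during the first 90 days: a rising curve means the programme is gaining traction; a flat one tells you early, before any cycle-time or cost numbers could.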
What we are learning
We are still early. Molecule One is a few months old and we are learning something new with every customer engagement. But one thing has become clear: this is not a technology problem. It is a people problem. Teams need to see AI work on tasks they actually do, feel safe trying it, and watch people they respect use it on real work.
The programme that works builds trust piece by piece: hands-on workshops, peer recognition, shared prompt infrastructure, documented workflows, and leadership that goes first. The early results are encouraging. The teams we work with that follow this approach are seeing 60-70% voluntary adoption within 90 days, compared to 10-20% from mandate-driven programmes. We will keep sharing what we learn as we go.
Looking to build an AI upskilling programme for your procurement team?
Talk to our team