Why implementing AI in bid functions is hard
Many Sales or Bid function leaders have already ‘totally done AI’ in bidding.
They bought a tool. They ran a pilot. They saw proposals being drafted faster, summaries generated in seconds, answers appearing where there had previously been blank pages. For a short period, it looked like progress. Where’s my bonus?
Then real bids happened.
Reality check
Security raised questions over what data had been shared and how wide the access was. Had anyone read the SAL? Reviewers lost confidence in content that initially sounded good but on second look was generic slop. Commercial wanted clarity on accountability if the organisation could not deliver what was promised on paper. Teams quietly stopped using the tool when deadlines loomed (or reverted to their own preferred tools). Eventually a conclusion formed, often unspoken but widely shared: AI doesn’t really work for bidding just yet.
That conclusion is understandable. It is also wrong.
What usually failed was not the technology, but the way the problem was framed.
Optimising the wrong thing
Most AI initiatives start with an implicit assumption: that bidding is primarily a proposal production problem. Too much writing, too little time; therefore, faster content generation equals better performance.
This is the same ‘faster horses’ logic seen across many AI programmes: optimise the visible bottleneck without questioning whether the underlying system is fit for what you are asking it to do. Even bidding practitioners who recognise that people and process need to change still limit their thinking to proposals. A bid is not just a proposal. It never has been. It will increasingly be less so, precisely because of AI.
Bidding functions, especially those supplying the public sector, are not designed for start-up, ‘move fast and break things’ speed and behaviour. They are designed with a fine balance of flexibility and control within a landscape that’s inherently uncertain. They exist to win contracts they can actually deliver, coordinate across silos without full authority, beat the competition, and survive external scrutiny. Speed matters because the deadlines are there, but predictability and defensibility should matter more.
AI challenges that design at a structural level. Treating it as an incremental efficiency tool means you smash into the system rather than change it – you’re putting a jet engine on a biplane with the inevitable results.
Why AI change is harder in bidding
AI is difficult to implement well in any part of the organisation, as we are all finding out. In bidding, the difficulty compounds.
AI itself is uncertain. Capability evolves quickly, behaviour is probabilistic, outputs are only as good as the inputs (crap in, crap out), and assurance models are immature. Who knows what the tools will be capable of tomorrow? Leaders are being asked to make decisions about tools whose limits are shifting in real time.
Bidding functions have to be horizontally integrated by nature. A serious bid spans sales, delivery, commercial, finance, HR, legal, operations, and senior leadership. No single function owns the whole outcome, yet all are exposed to the consequences.
Bidding runs on immovable deadlines. You can’t pause pursuing your pipeline to refine governance, retrain people, or resolve a data question. Shareholders will crucify you for a quarterly dip. When time pressure rises, teams revert to muscle memory.
Feedback loops are long. You may not discover whether a bid approach was acceptable until months later, often with limited transparency. Failure is expensive, but learning is slow.
This is not a simple licence purchase
The point at which most AI-in-bidding initiatives go wrong is when they are treated as software deployments. AI does not just change how proposals are written; it changes how opportunities are shaped, intelligence is gathered, options are analysed, decisions are made, and positions are defended. That reaches across the entire bid lifecycle, not just the document at the end of it. Treating this as a tooling exercise guarantees friction elsewhere.
Failure modes are depressingly consistent. When a proposal AI vendor’s marketing says ‘all your competitors are using our product’, what they really mean is that lots of people have bought licences; it says nothing about whether the tool is actually used. Many clients tell us that reality didn’t live up to the promise. All. The. Time.
Governance is the first fault line. Someone must be accountable for AI-assisted activity across the bid lifecycle, not just for drafted answers. Decision rights need to be explicit: where AI can scan and synthesise market and client intelligence, where it can analyse data and model options, where it can recommend, where it can draft, and where humans must decide. In public sector bidding, “we reviewed it” is not an assurance model, however often ‘human in the loop’ is bleated.
Information management issues surface immediately. AI forces long-ignored questions into the open: how our data got into a state where we cannot trust it or draw confident conclusions; which bid and delivery artefacts are genuinely reusable; how data is classified and controlled when teams mess around with open or semi-open models; what constitutes intellectual property; and how long material is retained in line with confidentiality and compliance commitments. When these questions are unanswered, risk does not disappear; it accumulates quietly.
Processes are exposed next. Early positioning, shaping, solutioning, reviews, approvals, and sign-off rhythms were designed around human-led analysis and authorship. AI changes volume, velocity, and variation across the entire bid, not just the proposal. If the process does not adapt, quality drops, assurance weakens, or people bypass controls to hit deadlines. Under pressure, workarounds and hero behaviours appear. AI is used outside approved tools. Prompts and outputs are copied into documents with no audit trail. Access is loosely controlled. Security and legal concerns surface late, often when bids are already live.
Quality becomes an issue. Low-quality or non-compliant content is reused at scale, faster than reviewers can contain it. Historic bids, never intended to function as a structured knowledge base, become de facto training data. Where content is thin or absent, generic copy is churned out. Weak processes are amplified rather than corrected.
People are the final constraint. Roles shift away from pure bid production towards judgement, orchestration, and challenge. Reviewers stop trusting what they are reading. Bid leads stop believing the process protects them. AI becomes associated with risk rather than support. When deadlines hit, muscle memory takes over – teams revert to known behaviours, AI is quietly dropped, CTRL C + CTRL V becomes your friend, and the organisation concludes it was ‘not there yet’.
None of this is solved by configuration or some perky Customer Success Manager.
Unmanaged adoption is the dangerous option
Many leaders assume that doing something quickly with AI is safer than doing nothing – we’re actually encouraged to experiment and have a go. That’s when it hits the fan.
Unmanaged adoption increases commercial risk through inconsistency and non-compliance. It increases reputational risk through weak auditability and challengeable practices. It increases people risk by undermining trust and deskilling judgement.
Deliberate, governed change looks slower at the start. It requires uncomfortable conversations about ownership, data, assurance, and capability. You may hear on the grapevine how far ahead of you the competition supposedly is. But ask an evaluator how they feel when they can tell how little effort has gone into what they are reading. Ask delivery what the impact is. Ask anyone who has had to stand up in court and justify what they wrote in a bid. Bids matter.
In a function that waits for no one, capability and quality matter.
A more credible approach
There is no generic playbook (there never is…), but some principles hold consistently.
Start with the bidding system, not the tool. Understand how work actually flows under pressure, where decisions are made, and where risk is absorbed. Think of the pillars in PAS 360 – leadership, governance, practices, information, capability. They all matter.
Design assurance, security, and accountability first. If you cannot explain how AI-assisted content and decisions would be defended under challenge, the approach is not ready.
Be explicit about human/AI decision boundaries. Ambiguity here destroys trust faster than any model error.
Introduce AI horizontally, not in silos. Sales/bidding alone cannot carry the risk; legal, commercial, delivery, security and IT must be aligned from the outset.
Learn in real environments, but within tight guardrails. In bidding, ‘experimentation’ means controlled exposure, not free play.
Measure your functional benchmarks so you know what the impact really is. Proper metrics, not just win rates.
This approach is less exciting than buying software and running pilots. It is also far more likely to survive first contact with a real competition.
Why you’re all doing it wrong
Implementing AI in a bidding function sits at an awkward intersection, especially in a function focused on public sector business.
It requires experience of complex, high-stakes bids where failure has consequences for entire organisations and for the citizens they serve. It requires a deep understanding of how bidding functions are built, governed, and assured – and we mean the reality, not ‘best practice’ process. And it requires an informed, sceptical grasp of what AI can and cannot do today, and where it is heading. You don’t believe that Red Bull gives you wings, so why believe the software vendor’s marketing claims?
Most initiatives are strong in one or two of these areas. Very few sit comfortably in all three.
That, more than any technical limitation, is why AI in bidding is hard – and why the organisations that do make it work look different long before the technology is switched on.