In our whitepaper, ‘Adapt or Be Automated: What AI Means for Improvement Professionals’, we explored the threats and opportunities facing the profession. In this series of three blogs, we will examine the key elements of the paper in more detail, starting with the 9-point checklist for CI leaders.
The future of improvement will be defined not by how organisations adopt AI, but by how CI leaders guide that adoption. Insights shared in the paper reveal both opportunities and anxieties. Employees are excited by AI’s potential but fearful of its impact, which limits their engagement. Leaders are curious but uncertain about the governance needed to make AI a lasting success rather than another failed change initiative. Against this backdrop, CI leaders need a practical playbook for action.
Practical Recommendations: A 9-Point Checklist for Leading in an AI-Enabled World
The following nine recommendations offer practical advice for leaders. Each is grounded in CI practice, informed by cross-sector insights, and designed to help professionals navigate an AI-enabled future with confidence. They are followed by a visual representation of what this system could look like, using the “House of Lean” analogy.
1. Build AI Literacy

CI practitioners do not need to become data scientists. What they do need is literacy: an understanding of what AI can and cannot do, the types of problems it is suited to, and the risks of bias and misuse. This is similar to how CI practitioners use control charts or other statistical tools; they may not design the mathematics, but they know enough to interpret the outputs and make sound decisions that lead to improvements. By upskilling employees, organisations reduce the risk of data mishandling and ensure AI supports customer-facing activities smoothly, avoiding errors, outages, and rework.
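To make the control chart analogy concrete, here is a minimal, illustrative Python sketch of an individuals chart; all measurements are invented. The literacy lies in reading the flagged point and asking why, not in deriving the formulas.

```python
# Illustrative sketch only: an "individuals" control chart of the kind a CI
# practitioner reads every day. The point is interpreting the output, not
# designing the statistics. The measurements are made-up example data.

measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0,
                13.0, 9.9, 10.1, 10.0, 9.8]

mean = sum(measurements) / len(measurements)

# Estimate process variation from the average moving range, the standard
# approach for individuals charts (1.128 is the d2 constant for n=2).
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128

ucl = mean + 3 * sigma_hat  # upper control limit
lcl = mean - 3 * sigma_hat  # lower control limit

print(f"Mean={mean:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, x in enumerate(measurements, start=1):
    status = "INVESTIGATE" if not (lcl <= x <= ucl) else "in control"
    print(f"Point {i:2d}: {x:5.1f}  {status}")
```

Running this flags the eighth point as worth investigating; the practitioner’s job begins where the output ends, with the question of what changed in the process.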
2. Protect Structured Problem-Solving

One of the most consistent concerns is the risk of over-reliance on the technology. Repeated use of AI can encourage people to “outsource” their reasoning, producing weaker thinking over time. This directly challenges CI’s reliance on structured problem-solving.
CI leaders should make tools like A3 thinking and Root Cause Analysis the antidote. Teams should be trained to treat AI as a source of insight, not an answer machine. Leaders in technology highlight the importance of asking, “What assumptions underlie this AI output?” and “How do we test it?” By modelling this questioning mindset, CI leaders protect the integrity of problem-solving.
3. Keep the Voice of the Customer Human

AI offers remarkable capability in capturing customer sentiment, behaviour, and unmet needs at scale. But customer insight cannot be reduced to dashboards alone. Leaders in public services note that algorithmic analysis may reveal “what” citizens feel, but not always “why”.
CI leaders must ensure the Voice of the Customer remains grounded in direct engagement. AI can identify friction points, but workshops, focus groups, and Waste Walks are still needed to hear lived experiences. In finance, for example, AI may flag customers at risk of default, but only dialogue can uncover the circumstances driving that risk. The future of customer-centric CI lies in combining digital signals with human listening.
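As a hedged illustration of combining digital signals with human listening, the sketch below shows an AI risk score used only to prompt a conversation, never to replace it. The names, fields, and threshold are all invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: an AI score flags customers at risk of default, but
# the output is a prompt for a human conversation, not a decision. All
# names, fields, and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    risk_score: float  # assumed to come from an upstream AI model (0.0-1.0)

RISK_THRESHOLD = 0.7  # illustrative cut-off, not a recommended value

def triage(customers: list[Customer]) -> list[Customer]:
    """Return the customers the model flags; the 'why' still needs dialogue."""
    return [c for c in customers if c.risk_score >= RISK_THRESHOLD]

customers = [Customer("A. Smith", 0.82), Customer("B. Jones", 0.35)]
for c in triage(customers):
    # The digital signal identifies "who"; human listening uncovers "why".
    print(f"{c.name}: schedule an outreach conversation (score={c.risk_score})")
```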
4. Shift from Problem Solver to Orchestrator

AI shifts the role of the CI professional from problem solver to orchestrator, just as a facilitator of Kaizen events does not provide the answers but creates the conditions for others to discover them.
In education, adaptive learning systems demonstrate how AI can scale content while teachers orchestrate context. Similarly, CI professionals should position themselves as interpreters and coaches, ensuring that AI insights are applied responsibly and inclusively. The differentiator is not technical control, but the human skills of facilitation, systems thinking, and cultural leadership.
At the same time, CI teams themselves will need to evolve to harness AI effectively. Many organisations will benefit from embedding new capabilities within improvement teams, such as AI Specialists, Data Engineers, or Digital Analysts. These roles can provide the technical depth to build, validate, and govern AI applications, while CI professionals focus on culture, adoption, and impact. The result is a more integrated function, where technical expertise and human-centred improvement work side by side.
5. Strengthen Standard Work Without Creating Rigidity

AI excels at enforcing consistency. This creates opportunities to strengthen Standard Work, ensuring processes are followed accurately and reliably across dispersed teams.
The risk, however, is rigidity. Standard Work is not meant to be static; it is the baseline for evolution through PDCA and frontline engagement. CI practitioners should use AI to support consistency while also creating mechanisms for feedback and continuous adaptation. This balance ensures that AI enhances reliability without stifling innovation.
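One way to picture that balance is a conformance check that treats deviations as feedback for PDCA review rather than simple violations. The sketch below is illustrative only; the process steps and structure are assumptions, not a prescribed implementation.

```python
# Illustrative sketch: AI-assisted checking of Standard Work, with deviations
# captured as improvement signals rather than silently rejected. The step
# names and process are invented for illustration.

STANDARD_STEPS = ["verify identity", "log request", "approve", "notify customer"]

def check_against_standard(observed_steps: list[str]) -> list[str]:
    """Flag deviations from Standard Work, but keep them for PDCA review."""
    extra = [f"extra step: {s}" for s in observed_steps
             if s not in STANDARD_STEPS]
    missing = [f"missing step: {s}" for s in STANDARD_STEPS
               if s not in observed_steps]
    return extra + missing

# A frontline team's actual sequence: the deviation may be waste, or it may
# be a better method that should update the standard itself.
observed = ["verify identity", "log request", "double-check address",
            "approve", "notify customer"]
for finding in check_against_standard(observed):
    print(f"Feedback for next PDCA review: {finding}")
```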
6. Address Fear Openly

Fear of replacement is real across sectors. Healthcare staff worry about professional autonomy; finance teams about algorithmic decision-making; public servants about accountability. Cultural fear can stall adoption if left unaddressed.
CI leaders are well-equipped to tackle this with existing tools. For example, Stakeholder Analysis can reveal who is most anxious; Change Readiness Assessments can identify barriers to adoption; Visual Management can make progress transparent and inclusive. Leaders should acknowledge fears openly rather than dismiss them. Demonstrating how AI augments human contribution is critical to building trust.
7. Model Experimentation Through PDCA

The most effective leaders will model a cycle of experimentation. Rather than imposing sweeping AI solutions, they will encourage “sandbox” pilots, review results, and scale what works: the essence of PDCA thinking. A practical entry point is to apply AI to routine, frustrating, or low-value tasks. By beginning in these safe areas, organisations can build confidence, generate early wins, and free up people to focus on more meaningful improvement activity.
In technology, leaders describe running AI pilots on narrow problems such as fraud detection before expanding to enterprise-wide use. In public services, pilots allow staff to gain confidence incrementally rather than facing abrupt transformation. CI leaders should adopt the same approach, showing their teams that AI is something to be explored with curiosity and discipline, not feared as an unknown.
8. Align AI to Business Metrics

As with any improvement activity, AI adoption must be aligned to business metrics and outcomes. This ensures AI is deployed where it creates measurable value rather than for novelty or convenience, and keeps it aligned with the core CI principle of purposeful improvement, where every intervention is tied to strategic and operational goals.
9. Design the System for Capability and Continuity

AI-enabled improvement cannot succeed in isolation; it requires a robust organisational system that aligns people, processes, and technology with leadership and strategy. At its foundation are reliable data pipelines, leadership commitment, AI-specific policies, and governance and ethics frameworks. Without these, AI risks becoming a set of fragmented experiments rather than a sustained capability.
Strong system design includes secure data practices. The use of enterprise-grade accounts, with clear governance and access controls, is essential to protect sensitive information and ensure that AI adoption strengthens rather than compromises organisational resilience.
Equally important is continuity. Leaders must recognise that AI systems are not foolproof; models can fail, outputs can be biased, and platforms can suffer outages or regulatory disruption. Just as Lean emphasises poka-yoke and error-proofing to safeguard processes, AI-enabled improvement must include fallback mechanisms, “human-in-the-loop” safeguards, and contingency plans. By embedding resilience into the system, CI leaders can ensure that customer needs are still met, even when the technology fails.
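A minimal sketch of such a safeguard, assuming a hypothetical model call that returns a confidence score, might route low-confidence or failed outputs to a person so the customer is still served:

```python
# Illustrative sketch of a "human-in-the-loop" safeguard: low-confidence or
# failed AI outputs fall back to a person, so the customer is still served
# when the technology fails. Function names and thresholds are hypothetical.

CONFIDENCE_THRESHOLD = 0.85  # illustrative value; set per risk appetite

def classify_with_ai(request: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (answer, confidence)."""
    return ("approve", 0.62)  # hard-coded so the sketch runs anywhere

def route_to_human(request: str) -> str:
    """Fallback path: queue the request for manual handling."""
    return f"queued for human review: {request!r}"

def handle(request: str) -> str:
    try:
        answer, confidence = classify_with_ai(request)
    except Exception:
        # Poka-yoke thinking: an outage or model failure must not stop service.
        return route_to_human(request)
    if confidence < CONFIDENCE_THRESHOLD:
        return route_to_human(request)
    return f"automated decision: {answer}"

print(handle("refund claim"))  # low confidence -> routed to a person
```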
Building resilience also means choosing AI partners wisely. The market is volatile, with some providers already exiting or being acquired. Partnering decisions should therefore weigh not only capability, but also longevity and transparency, ensuring that AI adoption remains sustainable in the long term.
Those pioneers who design systems for both capability and continuity will unlock the true potential of AI-enabled CI while protecting their organisations from risk.
Final Thoughts
Artificial Intelligence is no longer a distant possibility but a present reality reshaping processes across industries. For Continuous Improvement professionals, the question is no longer whether AI will affect their work, but how they will shape its role.
The consistent message across sectors is that technology alone does not deliver improvement; systems do. Without strong foundations of governance, data, and culture, AI becomes a patchwork of experiments rather than a sustainable capability.
For CI leaders, the path forward is clear:
Adaptation is not optional. CI professionals who integrate AI responsibly and creatively will secure their role as catalysts of organisational learning and transformation. Those who fail to shape the system will not be replaced by AI; they will be replaced by those who can.
This blog is based on our whitepaper ‘Adapt or Be Automated: What AI Means for Improvement Professionals’. You can download the full whitepaper below.