
AI in Continuous Improvement: Your Questions Answered

Written by Tim Edwards | Mar 10, 2026 8:10:48 AM

 

Artificial Intelligence is rapidly moving from experimentation to real-world application across organisations, and many Continuous Improvement (CI) practitioners are exploring how it can support improvement work in practice.

In our recent webinar, Tim Edwards, SME at LCS, and Zoe Hawkes, Head of Continuous Improvement at Computacenter, explored how AI and Continuous Improvement can work together in real organisational settings.

Drawing on their experience leading improvement initiatives across complex organisations, the session examined how AI can support improvement work, the risks organisations must manage, and the behaviours required to adopt the technology responsibly.

One theme came through consistently during the session: AI works best when built on strong Lean foundations. Organisations with mature improvement cultures are often better positioned to adopt AI because they already prioritise stable processes, quality data, structured problem solving, and continual learning.

During the webinar, attendees raised thoughtful questions covering leadership, governance, sustainability, and the future of human expertise in an increasingly automated world.

Below is a selection of the live Q&A discussion from the session, capturing the questions many CI teams are currently exploring as AI adoption accelerates.

 

1. How can you simplify AI/Continuous Improvement for people new to both?

Continuous Improvement (CI) is a mindset, supported by habits and structured problem solving. It is not just about tools, but about behaviours, learning, and continual refinement of processes.

Artificial Intelligence (AI) is an accelerator, not a replacement. AI only adds value when foundations such as stable processes, quality data, and clearly defined business outcomes are already in place.

A simple way to explain their relationship:

  • Lean is the foundation; AI is the extension built on top.

Analogy – Tony Stark and Jarvis (Iron Man): Tony Stark begins with no superpowers. Through a CI-like process, he builds the Iron Man suit and develops Jarvis. Jarvis augments Tony, but:

  • The suit wouldn’t work without Tony.
  • Tony wouldn’t become a superhero without the technology.

This illustrates CI (the thinking) + AI (the augmentation) working together.

 

2. How do you think AI can help improve team leadership?

AI supports leadership by:

  • Providing predictive insights to make better-informed decisions.
  • Strengthening awareness around data quality and reinforcing quality mindsets.
  • Reducing administrative burden, enabling leaders to focus more on coaching, Gemba, and people engagement.


3. If AI can identify patterns and recommend improvements, how do we ensure CI remains a culture people own?

As we said in the webinar, good AI adoption is a behaviour, and the same is true here. CI has always been a behaviour-first concept and a culture built by people.

AI cannot replace people-led CI practices, such as:

  • Reflection and learning cycles
  • Daily huddles, Gemba walks, and visual management
  • Human-driven ideation and problem solving

AI should always be framed as:

“A tool to inform, not a tool to decide.”

Or simply:

AI supports; people improve.

 

4. Where or how would you recommend a team to start utilising AI to support its CI efforts?

Firstly, it’s important to ask what problem it is we’re trying to solve with AI. What gaps do we have in capability or efficiency that we need AI to plug? That will help us target its use in the right area.

Following that, the recommended next steps are:

  • Begin with small, low-risk experiments, in a safe environment.
  • Choose areas where data quality is strong, and processes are stable.
  • Prioritise processes where CI has already created clarity, structure, and good measurement.


5. Are there key lessons to be learnt from transformation? We are great at chasing the new shiny thing and putting in digital before aligning the processes. I think poor control of AI could make this issue even more prominent.

Introducing AI is itself a transformation, and just as with traditional transformations, the success or failure of that will depend on robust governance. This is why we have introduced the Operating Model for AI-CI, to ensure that a robust system is in place prior to chasing the new shiny thing.

Critical lessons include:

  • Without solid Lean foundations, we aren’t ready for AI.
  • High-quality data and stable processes must be built before introducing automation or AI.
  • Effective transformation follows the sequence:

> Process alignment → Data quality → Automation/AI.

Remember: AI amplifies what already exists. Poor processes become worse and more complex when automated.

 

6. What gap will shifting to a tool like Celonis be closing for you?

Celonis has two components: process management and process mining.

  • Using Celonis for process management will help us standardise our processes across teams (e.g. in different countries) and better connect them across organisational boundaries to prevent gaps and overlaps in the end-to-end flow.
  • Combining this with Celonis' process mining capabilities will help us identify where our actual operations vary from the way we think our processes should operate. We can use this information to proactively diagnose problems and validate improvement opportunities.

7. Keen to know from Zoe when getting that AI driven VOC – is this looked at in real time? What frequency are you analysing it?

In the Service Desk, AI enables near-real-time insights; however, operational practice varies.

In most teams, VOC is reviewed as part of the Performance Boards and Daily Habits, giving daily or weekly review cycles depending on the operation.
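As a hedged illustration of how free-text VOC might be scored and rolled into daily review buckets for a Performance Board, here is a minimal Python sketch. The keyword lexicon, field layout, and sample comments are invented for the example; a real deployment would use a proper sentiment model rather than word counting.

```python
from collections import defaultdict
from datetime import date

# Hypothetical VOC records: (date, free-text comment).
voc = [
    (date(2026, 3, 9), "Agent resolved my issue quickly, great service"),
    (date(2026, 3, 9), "Still waiting, very slow response"),
    (date(2026, 3, 10), "Helpful and friendly support"),
]

POSITIVE = {"great", "helpful", "friendly", "quickly", "resolved"}
NEGATIVE = {"slow", "waiting", "broken", "unresolved"}

def score(comment: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = comment.lower().replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Aggregate scores into daily buckets for the review cycle.
daily = defaultdict(list)
for day, comment in voc:
    daily[day].append(score(comment))

for day in sorted(daily):
    scores = daily[day]
    print(day, "avg sentiment:", sum(scores) / len(scores))
```

The aggregation step is the point: whatever model produces the scores, rolling them up per day (or per shift) is what turns raw VOC into something a daily huddle can act on.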

 

8. How can we ensure that CI is prioritised alongside AI with senior leaders who see AI as a quick win or silver bullet?

Approaches include:

  • Position CI as the foundation for AI.
  • Demonstrate how AI depends on stable processes and high-quality data, both created by CI.
  • Use the CI maturity model to show that culture comes first, and AI success relies on it.
  • Highlight that CI mindsets drive the use cases for AI.

Ultimately, this comes down to leadership behaviour.

 

9. Will CI be required in the future if humans are not needed?

It is true that some jobs will be entirely automated in the future, likely shifting what we consider sustainable vocations. In many industries, however, that reality is at least a generation away from becoming the norm.

Irrespective of all that, CI as a principle and a mindset will always be required because:

  • CI is about thinking, learning, behaviours, and problem-solving, not manual activity.
  • As AI introduces new complexity, human guidance is even more important.
  • AI can provide analysis, but humans must interpret, plan, prioritise, and execute.

Human creativity, curiosity, and judgement remain irreplaceable.

 

10. One of the most commonly raised concerns in organisations I work with is the environmental impact of AI. Are there any good resources you’ve found to help understand the cost-benefit analysis of this before implementation?

This is a really important topic, and one that is increasingly being discussed across both the public and private sectors.

AI systems do have a measurable environmental footprint, primarily through the electricity and water required to run large data centres that train and serve AI models. Training large models can require significant computational power, and global data-centre electricity demand is expected to grow rapidly as AI adoption increases. However, it is important to view this impact in context. Many analyses suggest that while AI consumes energy, it can also generate significant environmental benefits by improving efficiency in areas such as energy systems, transport, manufacturing, and supply chains.

For organisations considering implementation, the most practical approach is to treat environmental impact as part of the same systems thinking and governance that we discussed during the session. Rather than evaluating AI in isolation, it is useful to assess:

  • The problem being solved – does the AI application deliver measurable value (e.g., reduced waste, improved resource use, better planning)?
  • The scale of deployment – lightweight AI applications may have negligible impact compared with large-scale model training.
  • Infrastructure choices – cloud providers increasingly operate highly efficient data centres and invest heavily in renewable energy.
  • Lifecycle impact – considering not only the energy used during operation but also hardware, data storage, and model training.
Several useful resources that explore this topic in more depth include:

  • OECD – Measuring the Environmental Impact of AI (guidance on evaluating AI’s energy and carbon footprint)
  • World Economic Forum – AI’s Energy Paradox (analysis of the trade-off between AI energy use and its potential to reduce emissions across industries)
  • Google’s research on measuring the environmental impact of AI at scale, which explains how energy, carbon, and water usage can be monitored in production environments.
  • FAS – Measuring and Standardising AI’s Energy Footprint, which explores the need for clearer metrics and reporting on AI’s environmental impacts.

The key takeaway is that AI’s environmental impact should not be ignored, but neither should it be considered in isolation. Like any improvement intervention, the right question is whether the overall system outcome improves. In many cases, AI can enable organisations to reduce waste, optimise operations, and make better decisions at scale. These benefits may outweigh its direct environmental footprint when implemented responsibly.

 

11. For Zoe, how do you have AI perform the Root Cause Analysis in your device lifecycle management solution?

The approach taken was:

  1. High-quality process data ensured through CI foundations.
  2. AI interrogates the existing process library and knowledge base.
  3. AI performs automated clustering and anomaly detection.
  4. Humans validate the findings and determine the true root causes.

AI accelerates RCA but does not replace human judgement.
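As an illustration of steps 3 and 4 above, here is a minimal Python sketch of clustering incidents by a shared signature and flagging anomalously frequent clusters for human validation. The incident fields, values, and 40% threshold are entirely hypothetical, not Computacenter's actual implementation.

```python
from collections import Counter

# Hypothetical incident records from a device-lifecycle process
# (field names and values are illustrative only).
incidents = [
    {"stage": "imaging", "model": "X1", "error": "driver_missing"},
    {"stage": "imaging", "model": "X1", "error": "driver_missing"},
    {"stage": "imaging", "model": "X1", "error": "driver_missing"},
    {"stage": "shipping", "model": "X2", "error": "label_mismatch"},
    {"stage": "enrolment", "model": "X1", "error": "timeout"},
]

# Cluster incidents by a shared signature (stage, model, error).
signatures = Counter((i["stage"], i["model"], i["error"]) for i in incidents)
total = sum(signatures.values())

# Flag any cluster covering more than 40% of incidents as a candidate
# root-cause area (the threshold is an arbitrary illustration).
candidates = [sig for sig, n in signatures.items() if n / total > 0.4]

print("Candidate clusters for human validation:", candidates)
# Step 4 is deliberately left to people: practitioners confirm whether
# a flagged cluster reflects a true root cause before acting on it.
```

Note the division of labour mirrors the answer above: the machine surfaces candidate clusters; humans determine the true root causes.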

 

12. An interesting Vibe Coding session last week highlighted a four-step software creation model similar to a Lean thinking model. AI software: Plan, Code, Check, Test. Lean thinking: Plan, Do, Check, Act. Anyone got any thoughts on this?

Our assumption is: Lean thinking is universal and applies across disciplines.

The AI/software cycle mirrors PDCA because the underlying logic is the same:

Plan → Execute → Validate → Learn/Adjust

This shows how CI principles remain relevant even in fast-evolving technological domains and how robust the PDCA framework is.
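As an illustration only, the shared Plan → Execute → Validate → Learn logic can be sketched as a loop in Python. The numbers, the half-step adjustment rule, and the `measure` callback are arbitrary choices made for the sketch, not a prescribed method.

```python
def pdca(target: float, initial: float, measure, cycles: int = 5) -> float:
    """Minimal PDCA loop: Plan an adjustment, Do it, Check the result,
    Act by standardising the change or discarding it. Illustrative only."""
    setting = initial
    for _ in range(cycles):
        # Plan: propose an adjustment based on the gap to target.
        gap = target - measure(setting)
        adjustment = 0.5 * gap          # conservative half-step
        # Do: apply the change on a trial basis.
        trial = setting + adjustment
        # Check: did the trial reduce the gap?
        improved = abs(target - measure(trial)) < abs(gap)
        # Act: standardise the improvement, or keep the old setting.
        if improved:
            setting = trial
    return setting

# Example: the process output simply equals the setting, so each cycle
# halves the remaining distance to the target of 10.0.
final = pdca(target=10.0, initial=2.0, measure=lambda s: s)
print(round(final, 3))
```

The point is structural: each pass through the loop is one PDCA cycle, and the "Act" step only standardises a change that the "Check" step validated, which is exactly the discipline the question's software model shares with Lean.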

 

13. Are we at risk of eroding human expertise over time and actually losing the ability to properly check and appraise outputs? How can we ensure the message around 'Thinking before Expertise' and ensuring outputs are critically appraised is heard far and wide?

To safeguard expertise, CI teams and leaders must:

  • Mandate human validation of AI outputs.
  • Train people in AI literacy and critical appraisal skills.
  • Reinforce the importance of data quality, process stability, and critical thinking.

AI should augment human capability, not reduce the need for skilled thinking.

Essentially this comes back to the need to establish robust governance and frameworks.

 

Final Thoughts

One of the clearest messages from the discussion is that Artificial Intelligence does not replace Continuous Improvement; it amplifies it.

Stable processes, high-quality data, strong leadership behaviours, and a culture of learning remain the essential foundations for improvement. When these foundations exist, AI can act as a powerful accelerator—surfacing insights, identifying patterns, and freeing up time for leaders and teams to focus on problem solving and people development.

But without those foundations, AI simply scales existing problems faster.

For organisations exploring AI today, the real opportunity lies in combining the discipline of Lean and Continuous Improvement with the capabilities of modern AI tools.

This Q&A is just a snapshot of the discussion from the session - watch the full recording of the webinar ‘AI in Continuous Improvement: Building Smarter Systems, Unlocking Opportunity, and Managing Risk’.