AI and Humans: Why We Need Context and Guardrails on the Shop Floor
Tuesday, January 6, 2026


We’ve all seen the demo.
An operator asks a chatbot about a yield drop, and the AI magically surfaces the root cause, writes a maintenance ticket, and optimises the setpoint.
It looks great in a boardroom. But put that same system on a brownfield line running 24/7, and the reality is different. Without deep knowledge of the physical process, the AI hallucinates. It suggests changes that are unsafe, physically impossible, or irrelevant to the current shift.
The result? Operators ignore it. Central IT blocks it. The pilot dies.
The problem isn’t that the models aren’t smart enough. It’s that they are operating in a vacuum.
AI on the shop floor only creates value when it is grounded in live context and governed by the people who own the process.
Without Context, AI Guesses
LLMs are reasoning engines, not knowledge bases. If you feed them raw tag data without the surrounding metadata - genealogy, machine state, upstream constraints - they will find correlations that don’t exist.
To be useful, AI needs the Who, What, Where, and Why:
- Material Context: Which product am I working on? What material type is it, and which other part numbers is it linked to?
- Asset Context: Is this machine in maintenance mode? What are its physical limits?
- Process Context: What recipe is running? What happened at the previous station?
- Environmental Context: Is it shift change? Is the sensor drifting, or is the process actually shifting?
When you anchor AI in a unified context graph, it stops guessing and starts analysing.
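To make that concrete, here is a minimal Python sketch of how those four context types might travel with a single tag reading. Every class and field name is illustrative, not a real context/fab API:

```python
from dataclasses import dataclass

# Illustrative sketch only: hypothetical classes, not a real context/fab
# API. The point is that a raw tag value never travels alone; it carries
# its material, asset, process, and environmental context with it.

@dataclass
class MaterialContext:
    part_number: str          # which product is being worked on
    material_type: str        # e.g. "Al 6061-T6"
    linked_parts: list[str]   # related part numbers in the genealogy

@dataclass
class AssetContext:
    machine_id: str
    mode: str                 # "production", "maintenance", "setup"
    max_temp_c: float         # a physical limit the AI must respect

@dataclass
class ProcessContext:
    recipe_id: str
    upstream_station: str     # what happened at the previous station

@dataclass
class EnvironmentalContext:
    shift: str                # e.g. "night"; relevant at shift change
    ambient_humidity_pct: float

@dataclass
class ContextualReading:
    tag: str                  # the raw OT tag name
    value: float
    material: MaterialContext
    asset: AssetContext
    process: ProcessContext
    environment: EnvironmentalContext
```

An LLM handed a ContextualReading can check a suggestion against max_temp_c or notice that the machine is in maintenance mode; an LLM handed only tag and value can only guess.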
High-Value, Near-Term Wins
We don’t need autonomous factories tomorrow. We need help with the problems consuming hours today. These aren’t moonshots; they are practical problems that context-aware AI can tackle immediately.

1. Drift Detection with the "Why"
Standard SPC tells you that a variable is drifting. Context-aware AI tells you why. It can correlate a temperature spike with a specific raw material batch, a change in tool wear, or an ambient humidity shift. It’s not a generic alert; it’s a hypothesis grounded in the physics of the line.
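As a toy sketch of that pattern (the window, threshold, and context fields are all illustrative, not a production detection algorithm):

```python
import statistics

def drift_hypotheses(values: list[float], context_then: dict,
                     context_now: dict, window: int = 50) -> list[str]:
    """Flag a drift and pair it with whatever context changed alongside it.

    `context_then` / `context_now` are flat snapshots of the context
    graph at the start and end of the series, e.g.
    {"material_batch": "B-112", "recipe": "R7", "humidity_pct": 41.0}.
    """
    baseline, recent = values[:window], values[-window:]
    shift = abs(statistics.mean(recent) - statistics.mean(baseline))
    if shift < 3 * statistics.pstdev(baseline):
        return []  # within normal variation: no drift, no alert

    # The "why": each context attribute that changed alongside the drift
    # becomes a concrete hypothesis instead of a generic SPC alarm.
    return [f"{key} changed: {context_then[key]} -> {context_now[key]}"
            for key in context_then
            if context_then.get(key) != context_now.get(key)]
```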
2. Faster Investigations
When a quality issue hits, engineers spend hours playing "data detective": pulling logs, checking shift notes, and matching timestamps in Excel. A context-aware system does this instantly. It assembles the genealogy, highlights the anomalies, and shows you who solved a similar problem three months ago.
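The shape of that work, as a hypothetical helper (the event structure is made up; in practice these records come from the context layer rather than one list):

```python
from datetime import datetime, timedelta

def assemble_timeline(events: list[dict], issue_time: datetime,
                      window_h: int = 4) -> list[dict]:
    """Collect every context event near a quality issue, oldest first.

    `events` stands in for machine logs, shift notes, genealogy records,
    and past resolutions that a context layer already holds together;
    each dict is assumed to carry a "time" key.
    """
    window = timedelta(hours=window_h)
    nearby = [e for e in events if abs(e["time"] - issue_time) <= window]
    # One chronological timeline replaces hours of matching timestamps
    # across log files, shift notes, and Excel exports by hand.
    return sorted(nearby, key=lambda e: e["time"])
```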
3. SOP Copilots
Static PDFs are where knowledge goes to die. An AI copilot turns procedures into interactive, station-aware guides. It translates instructions, adapts to current line conditions, and lets operators challenge or improve steps. The system learns what actually works, not just what was written down five years ago.
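One way such a copilot could stay grounded is by injecting live station state into every prompt. A sketch, assuming a simple string-based prompt; the function, parameters, and format are illustrations, not an existing API:

```python
def build_sop_prompt(sop_text: str, machine_id: str, machine_mode: str,
                     recipe_id: str, question: str) -> str:
    """Assemble a station-aware prompt for an SOP copilot.

    A production copilot would retrieve only the relevant SOP section;
    the whole procedure is inlined here for brevity.
    """
    return (
        f"You assist the operator at machine {machine_id} "
        f"(mode: {machine_mode}), currently running recipe {recipe_id}.\n"
        "Answer strictly from the procedure below; if the question falls "
        "outside it, say so instead of guessing.\n"
        f"--- PROCEDURE ---\n{sop_text}\n"
        f"--- QUESTION ---\n{question}"
    )
```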
4. Cross-Plant Benchmarking
Old-school standardisation meant forcing every plant to use the exact same hardware and naming convention. That rarely works in brownfield reality.
With a shared context layer, you can compare processes, not just KPIs. You can see that Plant A’s cooling cycle is 10% more efficient than Plant B’s, even if they use different chillers. You identify the logic that works and scale the best practice, without having to re-engineer the physical line.
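The comparison itself becomes trivial once the context layer lets you treat two different chillers' work as the same logical step. A sketch with invented numbers:

```python
def compare_step_across_plants(plants: dict[str, list[float]]) -> dict[str, float]:
    """Compare one logical process step (e.g. energy per unit for the
    cooling cycle) across plants, normalised to the best performer.
    The numbers below are made up for illustration."""
    avg = {plant: sum(vals) / len(vals) for plant, vals in plants.items()}
    best = min(avg.values())
    return {plant: round(a / best, 2) for plant, a in avg.items()}  # 1.0 = best

# Plant B spends ~10% more energy per unit on the same logical step.
print(compare_step_across_plants({"Plant A": [9.0, 9.2], "Plant B": [10.1, 10.0]}))
```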
A Safe Maturity Path: Read, Recommend, Act
Trust isn’t binary. You don’t flip a switch and hand the keys to an algorithm. Trust is built in steps.
- Read: The AI explains what changed, ranks hypotheses, and recommends the next check. It takes no actions itself.
- Recommend: The AI suggests setpoint nudges or startup checks. The operator reviews and approves. Everything is logged.
- Act: Limited, auditable actions are executed by the system within strict policy bounds.
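In code, that progression might look like a single gate that treats the same suggestion differently per trust level. A minimal sketch; the level names mirror the list above, and everything else (names, bounds, return strings) is illustrative:

```python
from enum import Enum

class Maturity(Enum):
    READ = 1       # explain and rank hypotheses only
    RECOMMEND = 2  # propose changes; a human approves
    ACT = 3        # execute within strict, audited policy bounds

def handle_suggestion(level: Maturity, proposed: float, current: float,
                      safe_min: float, safe_max: float,
                      approved: bool = False) -> str:
    """Gate one setpoint suggestion according to the plant's trust level."""
    if not safe_min <= proposed <= safe_max:
        return "rejected: outside safe thresholds"          # at every level
    if level is Maturity.READ:
        return f"logged: consider setpoint {proposed} (currently {current})"
    if level is Maturity.RECOMMEND:
        return "applied after approval" if approved else "awaiting operator approval"
    return "applied within policy bounds; action audited"   # Maturity.ACT
```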
Governance is Architectural, Not Procedural
This is the most critical shift. You cannot rely on a chat interface or a written policy to ensure safety.
Governance must travel with the data.
Lineage, access rights, and safety thresholds must be embedded in the context layer itself. If an AI agent recommends a setpoint change, that recommendation must carry the context of who authorised the agent, what the safe thresholds are, and how to reverse it.
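As a sketch of what "governance travels with the data" could mean in practice: the recommendation object itself carries authorisation, bounds, lineage, and what is needed to reverse it. All field names are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernedRecommendation:
    """A setpoint recommendation that cannot be separated from its
    governance: who authorised the agent, the safe bounds, the data
    lineage behind it, and how to undo it."""
    setpoint: float
    previous_setpoint: float     # enough to reverse the change
    safe_min: float
    safe_max: float
    agent_id: str                # which agent produced the recommendation
    authorised_by: str           # who authorised that agent to run
    lineage: list[str] = field(default_factory=list)  # source tags/records
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def within_bounds(self) -> bool:
        return self.safe_min <= self.setpoint <= self.safe_max
```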
This changes the culture. Frontline teams become co-designers because they know the guardrails hold. Central teams stop being "Blockers" and start being "Architects": defining the safety bounds while local teams iterate.
The Bottom Line
The near future isn’t about removing humans. It’s about giving them a system that explains, recommends, and assists.
If you want AI that operators actually trust, stop throwing data at models. Start building the context and guardrails that make those models safe.

About Marc Krüger-Sprengel
Marc Krüger-Sprengel is the Co-Founder and CEO of context/fab, where he and the team are building the context layer that helps manufacturers use AI at scale across entire production networks. Before founding context/fab, Marc led Data & AI at Bosch Rexroth, building up the platforms, teams and ecosystems that made data and AI impactful for operations, quality and planning. With roots in mechanical engineering and experience across manufacturing, automotive, aviation, space and medtech, he focuses on results over slideware: unify OT and IT data, turn it into context, and deliver outcomes that move KPIs.