Automation isn’t the decision — defining the boundary is
8 min read
Responsible autonomy begins with a leadership line: what machines may do alone, where humans must remain accountable, and how to make that boundary work in real operations.

Historically, the leadership decision focused on the level of automation — is there a payback to automate? Today, most operations will automate; with AI-enabled automation, the real question is the appropriate degree of autonomy these systems should have.
Leaders no longer debate whether to automate. The decision is where to draw the line: what machines do autonomously and where accountability must stay human. If a wrong decision can endanger people, a human stays in the loop. Everything else is a question of distributing responsibility and risk, consciously and by design.
Michiel Veenman, Vice President Product & Portfolio Management at Körber Business Area Supply Chain, frames the leadership tension plainly: “As a leadership question, automation is about balancing flexibility and efficiency. Higher automation can drive productivity and cost, but it can also reduce flexibility, and in a volatile world, you can’t afford to be rigid.” That balance applies to the level of automation; the appropriate degree of autonomy is set by consequences, governance, and accountability — not ideology or hype.
Start where clarity is strongest
“Good automation starts with understanding your processes and your cost base,” says Veenman. High-cost, relatively standard processes are often strong candidates for automation and positive ROI; low-cost but high-complexity activities are far harder to automate effectively. Frequency matters as much as complexity. If a task occurs rarely, even elegant automation will underperform. This orientation reframes “automation strategy” as a management system: diagnose the process landscape, classify by standardization, impact, and frequency, then decide whether the business case warrants that level of automation.
Decision autonomy: how far to go
With the level of automation defined, leaders set the degree of autonomy — how much decision‑making the system may perform within guardrails, and when a handover to a human is required. In practice, this means assigning decision rights by consequence, specifying data/model quality thresholds for automated choices, and defining explicit handover criteria (confidence, variance, time, impact) with documented fallbacks. Roles and escalation paths make accountability visible, while explainability, logging, and audit trails keep decisions traceable in day‑to‑day operations. Start conservative and expand autonomy only as evidence from safe operations, tests, and audits supports it.
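The handover criteria above can be made concrete. The following is a minimal Python sketch, not any vendor's implementation; the `Decision` fields mirror the criteria named in the text (confidence, variance, time, impact), while every threshold value is a hypothetical placeholder that a real operation would set in its governance documentation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    confidence: float    # model confidence in [0, 1]
    variance: float      # observed deviation from expected process values
    seconds_left: float  # time budget before the decision blocks operations
    impact: str          # consequence class: "low" | "medium" | "high"

def requires_handover(d: Decision) -> bool:
    """Return True when the decision must be escalated to a human.

    All thresholds below are illustrative, not prescriptive.
    """
    if d.impact == "high":       # high-consequence decisions stay human
        return True
    if d.confidence < 0.95:      # below the documented confidence threshold
        return True
    if d.variance > 0.10:        # process has drifted outside guardrails
        return True
    if d.seconds_left < 5.0:     # too little time for a safe automated retry
        return True
    return False                 # within guardrails: act autonomously, log it
```

Encoding the policy as explicit code (rather than tacit operator judgment) is what makes "start conservative and expand" auditable: loosening a threshold becomes a reviewable change, not a drift.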
The hard line: safety — and why it isn’t negotiable
Every autonomy and automation strategy begins with a non-negotiable: safety. Standards and interpretations vary by region, but the principle does not. “A system can only be autonomous if it’s still 100% safe,” Veenman stresses. That includes an emergency stop and certified communication that guarantees the machine will halt. Fixed systems are straightforward: cut the power, and conveyors or stacker cranes stop. Mobile and next-generation systems complicate the picture. Battery-powered, decision-making robots must still reliably and immediately comply with an emergency stop. Humanoids sharpen the question further: where is the stop, and can an operator reach it safely when it matters? That is the immovable boundary.
Regulatory exposure sits close to that line, especially in life sciences. If an autonomous decision can ultimately affect patient safety, caution becomes the rule. “You need to be super careful, almost to the point of ‘don’t do it,’” says Veenman. By contrast, minor damage to low-value fast-moving consumer goods (FMCG) items may be tolerable if it meaningfully reduces costly human interventions. Leaders decide by consequence, not dogma.

“A system can only be autonomous if it’s still 100% safe. That is the hard line.”
Michiel Veenman
Vice President Product & Portfolio Management at Körber Business Area Supply Chain
Compliance-guided judgment within defined boundaries
Beyond safety, leaders make decisions within clearly defined standards. One pragmatic case: applying AI vision under an approved override protocol to resolve a traditional sensor’s false positive in layer-picking, continuing operations only when the “obstacle” is validated as inconsequential (for example, plastic film) and safety is uncompromised. The benefit is fewer stops and fewer manual checks. Governance anchors this: verified training quality, documented decision thresholds, and fail-safe fallbacks. Leadership ensures these controls are in place and followed, in line with safety and compliance, not as exceptions but as standard operating procedure. In other words, high automation of the physical flow can coexist with a low autonomy tier where safety and compliance set the limit.
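The override protocol in the layer-picking case can be sketched as decision logic. This is an illustrative Python sketch under assumptions, not the actual protocol: the label set, the confidence threshold, and the function name are all hypothetical stand-ins for what a real governance process would define and verify.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("override-protocol")

# Hypothetical labels the vision model may assign to a sensor "obstacle";
# the threshold is an illustrative stand-in for a documented decision threshold.
INCONSEQUENTIAL = {"plastic_film", "shrink_wrap_fragment"}
CONFIDENCE_THRESHOLD = 0.98

def handle_sensor_stop(vision_label: str, confidence: float) -> str:
    """Resolve a traditional sensor's stop only under the approved protocol."""
    if vision_label in INCONSEQUENTIAL and confidence >= CONFIDENCE_THRESHOLD:
        # Obstacle validated as inconsequential: continue, but log the override
        # so the decision remains traceable in the audit trail.
        log.info("override: %s (conf=%.2f) validated inconsequential",
                 vision_label, confidence)
        return "continue"
    # Fail-safe fallback: anything uncertain stays a stop with human review.
    log.warning("handover: %s (conf=%.2f) outside protocol",
                vision_label, confidence)
    return "stop_and_escalate"
```

Note the asymmetry: the automated path must clear every condition, while the fallback path needs none. That is what "fail-safe" means in code.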
A practical rubric: automation levels and autonomy tiers
To orient decisions, Veenman distinguishes between the level of automation and the degree of decision autonomy. On the automation axis, systems progress from mechanized and sensorized flows to highly automated, integrated operations. On the autonomy axis, decision-making progresses in tiers. At the base, deterministic decision support runs on hard-coded rules (“if this, then that”), with human confirmation for exceptions. In the second tier, small, defined disruptions can be handled autonomously: the system detects and resolves common deviations (a missing item, an out-of-spec pallet) within guardrails and hands off to an operator when thresholds are reached. At the advanced tier, learned optimization — validated in a digital twin and backed by auditability — adjusts storage, routing, and throughput within agreed limits. Moving up autonomy tiers is not only about accuracy; it requires thresholds, fallbacks, and audit trails that trace decisions back to inputs and criteria.
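One way to read the autonomy axis is as an explicit policy: each deviation type is assigned the minimum tier allowed to resolve it. The sketch below is a hypothetical illustration in Python; the tier names follow the text, but the deviation catalog and mapping are invented for the example.

```python
from enum import Enum

class AutonomyTier(Enum):
    DETERMINISTIC = 1  # hard-coded rules; human confirms exceptions
    GUARDED = 2        # resolves small, defined disruptions within guardrails
    LEARNED = 3        # validated, audited optimization within agreed limits

# Hypothetical policy: minimum tier allowed to resolve each deviation type.
# None means the deviation is never resolved autonomously.
RESOLUTION_POLICY = {
    "missing_item":       AutonomyTier.GUARDED,
    "out_of_spec_pallet": AutonomyTier.GUARDED,
    "routing_rebalance":  AutonomyTier.LEARNED,
    "safety_fault":       None,  # always a human decision
}

def may_resolve(deviation: str, system_tier: AutonomyTier) -> bool:
    """A system may resolve a deviation only if its tier covers the policy;
    unknown deviation types fall through to a human by default."""
    required = RESOLUTION_POLICY.get(deviation)
    if required is None:
        return False
    return system_tier.value >= required.value
```

The default-deny behavior for unlisted deviations mirrors the article's point: moving up a tier is a deliberate policy change backed by evidence, never an implicit capability.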
Making accountability visible
Explainability is not a luxury; it is a control. One compelling mechanism Veenman highlights is the ability to “ask the machine.” Operators can query a robot’s last action — why did you do that, which inputs drove your choice — and receive an understandable explanation. It is not perfect, he notes, but it builds the right kind of trust: an operational understanding of how the system behaves under real conditions. Around this, leaders need visibility on signals and trends: alarms when variables move out of bounds, analytics to see whether incidents are rising or falling, and the ability to drill into decision paths — the “why left, not right?” that turns a black box into a glass box. Underneath, consistent logging and audit trails tie decisions to inputs and evaluated criteria, so accountability is more than a slogan; it is embedded in everyday operations.
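The "ask the machine" mechanism rests on decisions being recorded with their inputs and evaluated criteria. A minimal sketch of such an audit trail, assuming an in-memory store and invented field names purely for illustration:

```python
import datetime
import itertools

_ids = itertools.count(1)
AUDIT_TRAIL: dict = {}

def record(action: str, inputs: dict, criteria: dict) -> int:
    """Store an autonomous action together with the inputs it saw and the
    criteria it evaluated, so the decision stays traceable afterwards."""
    action_id = next(_ids)
    AUDIT_TRAIL[action_id] = {
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "inputs": inputs,
        "criteria": criteria,
    }
    return action_id

def ask_why(action_id: int) -> str:
    """Render the stored decision path as a human-readable explanation."""
    rec = AUDIT_TRAIL[action_id]
    reasons = ", ".join(f"{k}={v}" for k, v in rec["criteria"].items())
    return f"{rec['action']}: inputs {rec['inputs']}, because {reasons}"
```

For example, `ask_why(record("turn_left", {"lidar": "clear_left"}, {"right_blocked": True}))` answers the article's "why left, not right?" question from the stored record, not from a reconstruction after the fact.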
“Ask the machine: Why did you do that? Explainability builds trust.”
Michiel Veenman
Vice President Product & Portfolio Management at Körber Business Area Supply Chain
Efficiency versus control: decide by consequence
The acceptable trade-off between efficiency and controllability depends on consequences. “If the wrong autonomous decision ships medication to the wrong patient, you keep a human in the loop — no one will accept otherwise,” Veenman says. “If the downside is a damaged six-pack of soda while saving significant labor, that’s a different decision.” This is why leaders should decouple automation from autonomy: keep flows automated for efficiency while retaining human oversight for high-consequence decisions.
The consequence-based view aligns autonomy with value at stake, instead of treating “more autonomy” as a virtue in itself. It is leadership’s job to set that boundary in a way that operations can understand, follow, and improve over time.
Culture is a control
Metrics matter, but so does trust. “What is probably as important is buy-in from your people,” Veenman emphasizes. If frontline teams don’t understand what’s happening or feel threatened by autonomy, they will double-check, add workarounds, or resist the change, eroding performance and confidence. Involving teams in boundary setting, explaining the purpose behind design choices, and making the benefits tangible to their daily work are not soft factors; they are control mechanisms that determine whether autonomy actually performs. Particularly in high-cost regions, responsible autonomy is about protecting competitiveness and enabling local production to compete, not replacing people for its own sake.
Designing for regional guardrails
Autonomy also plays out differently across regulatory and cultural contexts. “In the US, systems could theoretically access more data — people are less sensitive about tracking performance per employee and using that in autonomous decisions,” Veenman notes. “In DACH, that’s a no-go. You can’t track or store individual employee performance for such purposes.” At the other extreme, China’s very limited privacy expectations can translate into broader data availability. The upshot: the same technology faces different guardrails by market. Leaders need to design boundaries accordingly and make them explicit in governance and system configuration from day one.
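Making these guardrails "explicit in governance and system configuration" can be as simple as a deny-by-default policy table. The sketch below is hypothetical: the market codes and the single data-use flag are illustrative simplifications of what real deployments would encode, and the per-market values only reflect the tendencies described above, not legal advice.

```python
# Hypothetical per-market data-use guardrails, set at deployment time.
# Values reflect the article's examples in simplified form.
REGION_POLICY = {
    "US":   {"per_employee_performance": True},
    "DACH": {"per_employee_performance": False},  # a no-go in DACH
    "CN":   {"per_employee_performance": True},
}

def data_allowed(region: str, field: str) -> bool:
    """Deny by default: unknown regions or fields grant no data access."""
    return REGION_POLICY.get(region, {}).get(field, False)
```

Configuring the boundary per market, rather than hard-coding one region's assumptions, is what lets the same technology operate under different guardrails from day one.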

When less autonomy is the better decision
Another way to view autonomy is as the run time without human intervention; in many environments, material feeding defines that boundary. There are moments when lowering the degree of autonomy is the higher quality choice while keeping the automation of material flow high. Veenman describes a recurrent pattern: autonomous vehicles move pallets from warehouse to production and drop them near the line, and there the automation stops. The remaining steps — opening boxes, feeding multiple infeeds with different materials, handling variation — are infrequent and varied. A robot might sit idle for long stretches or require a highly complex, multifunctional gripper and substantial intelligence to handle low-frequency, high-variety work. Today, the business case is borderline. Early pilots are interesting, but scaling awaits changes in dexterity, AI maturity, and task frequency. In volatile environments, keeping this boundary lower protects controllability and ROI and preserves the flexibility to adapt as product ranges or demand patterns change.
Designing boundaries that last
The most resilient autonomy decisions are felt at the boundary: automate where tasks are structured and safety is assured; apply AI where it demonstrably reduces interventions or improves flow without obscuring accountability; and build governance that keeps transparency and adaptability alive over time. This is leadership in practice: drawing a clear line for machine action, keeping people oriented and in control, and making explainability and auditability part of everyday operations rather than an afterthought.
What leaders should decide now — toward 2035
Translating all of this into durable practice means committing to a few decisive moves. Define the hard lines first: explicit safety and compliance boundaries that no autonomy will cross. Then test them rigorously across technologies and sites. Map, per process, both the level of automation and the degree of autonomy, using value at stake, frequency, and reversibility to justify each; decide handover points and fallbacks in advance, so no one has to improvise under pressure. Build explainability and auditability into the stack: logs, alarms, and query-based explanations (“ask the machine”) should be part of daily operations, not an add-on. Where consequences demand it, keep a human in the loop and document why. Treat culture as a control: involve teams in setting and refining boundaries, communicate the rationale, and track adoption behaviors like double-checks and workarounds as leading indicators to address. Finally, design for regional constraints from the start: assume different privacy and data-use rules by market and configure autonomy to what is legally and socially acceptable across regions.
Humans first, by design
Asked for a rule of thumb, Veenman is unequivocal: “Humans first.” People need to understand what’s changing, why it matters, and how autonomy supports, not replaces, them. Leaders who define the boundary explicitly — hard lines on safety and compliance, pragmatic thresholds for autonomy, visible accountability, and a culture that trusts the system — won’t just automate. They will build resilient organizations where technology, data, and people perform as one coherent system.