AI Is a Supply Chain, Not a Magic Button: What NIS2 Means for AI Governance
by Ilpo Elfving / Econ CEO
Recently I had a first-hand experience where a leading AI company could not resolve a simple billing dispute after seven weeks of AI-assisted support (still ongoing as of writing). Emails were silently routed to chatbots; tickets fragmented across parallel threads, each carrying only a slice of the context and history; duplicates were created and silently resolved; context was lost after escalation to a human representative; and the case fell back into AI loops several times after a human had made a decision and promised a refund. This reveals something important about how AI actually works in practice.
This is not an isolated vendor problem. It demonstrates what happens when AI-driven support is treated as a convenient automation layer rather than what it actually is: a digital supply chain of chat, email systems, routing logic, language models, plugins, and back-office integrations - each with different owners, logs, and failure modes.
For leaders in food, trade, and health sectors now subject to Finland's Cybersecurity Act (124/2025), this matters directly. If AI touches your production scheduling, order promising, cold chain monitoring, or patient documentation, you are legally accountable for governing that entire chain. Not just "the AI part".

Section 10 of Finland's Cybersecurity Act requires management to maintain sufficient expertise in cybersecurity risk management and to personally approve security measures. This obligation extends to AI systems that touch essential operations. Delegating AI governance entirely to IT is no longer legally defensible.
The Cybersecurity Act (124/2025) implements the EU's NIS2 Directive in Finland. Its core requirements are straightforward: management-level accountability for cybersecurity, risk management that covers the entire digital supply chain, and the ability to detect, handle, and report significant incidents.
Article 21 of NIS2 explicitly includes supply chain security as a mandatory risk management area. AI systems - whether embedded in your ERP, powering customer service, or automating clinical documentation - are part of that supply chain.
The reason AI governance feels slippery is that "AI" is never one thing. Every AI-enabled service is a stack of components and suppliers:
Channels: email addresses, chat interfaces, API endpoints, mobile apps - each with routing rules that may silently redirect messages.
Orchestration: ticketing systems, workflow automation, agent frameworks - deciding where requests go and what context travels with them.
Models: LLMs generating responses, classification models routing tickets, prediction models suggesting actions - each with different providers, versions, and behaviours.
Integrations: connectors to ERP, WMS, EHR, CRM, billing, logistics - where AI outputs become real-world actions.
Data stores: where prompts, context, and outputs are stored - with implications for data residency, retention, and regulatory compliance.
When something goes wrong - context lost, decisions made on incomplete data, escalations that never reach humans - the failure rarely sits in "the AI." It sits in the seams between these layers.
NIS2 does not care whether a failure is caused by a malicious act or an ordinary malfunction, nor whether it starts in an LLM, a plugin, or an email router. It looks at the impact on your essential services and whether you managed the risks across that entire digital supply chain.
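To make this concrete, here is a minimal sketch of what a one-page supply chain map for a single AI-enabled flow could look like in code. The layer names, owners, and the unowned_links check are illustrative assumptions rather than a prescribed schema; the point is simply that every link gets a named owner and a known log location.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """One link in the AI supply chain behind a single use case."""
    name: str            # e.g. "orchestration", "model", "plugin"
    supplier: str        # internal team or external vendor
    owner: str           # named person accountable for this link
    logs: str            # where this link's logs and evidence live
    data_location: str   # where prompts, context, and outputs are stored

@dataclass
class AIUseCase:
    """A one-page map of one AI-enabled process and its supply chain."""
    process: str
    business_owner: str              # accountable for the end-to-end outcome
    layers: list[Layer] = field(default_factory=list)

    def unowned_links(self) -> list[str]:
        """The seams NIS2 cares about: links with no named owner or no logs."""
        return [layer.name for layer in self.layers if not layer.owner or not layer.logs]

# Illustrative example only; the vendors, roles, and log locations are hypothetical.
support = AIUseCase(
    process="Customer support triage",
    business_owner="Head of Customer Operations",
    layers=[
        Layer("email/chat channel", "SaaS helpdesk", "IT service owner", "helpdesk audit log", "EU region"),
        Layer("routing/orchestration", "workflow vendor", "", "", "unknown"),
        Layer("language model", "LLM provider", "AI platform lead", "prompt/response log", "provider cloud"),
        Layer("billing connector", "internal ERP team", "ERP owner", "ERP change log", "on-premises"),
    ],
)

print(support.unowned_links())   # ['routing/orchestration'] - the seam nobody owns
```

A map like this is deliberately boring. Its value is that the unowned seam shows up on one page, before an incident finds it for you.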
The abstract becomes concrete when you map AI touchpoints in actual operations.
Cold chain monitoring: AI analysing temperature sensor data to predict spoilage or trigger alerts. If the model misclassifies a critical deviation, product safety is compromised (a minimal guardrail for this case is sketched below).
Demand forecasting: AI predicting order volumes for production planning. Wrong predictions cascade into overstock, waste, or stockouts across the supply chain.
Product data management: AI-assisted GDSN submissions and product descriptions. Errors propagate to retailers, compliance systems, and consumer-facing labels.
Supplier communication: AI routing and responding to supplier emails. Lost context or misrouted messages can disrupt procurement timing.
Route optimisation: AI planning delivery routes based on real-time data. Model errors or stale data can delay time-sensitive shipments.
Customs classification: AI-assisted classification and declaration preparation. Misclassification creates compliance exposure and border delays.
Customer service: AI chatbots handling order status, ETAs, and exceptions. If escalation paths break, customer issues fester.
Dynamic pricing: AI adjusting prices based on demand, inventory, and competition. Ungoverned models can create margin erosion or customer trust issues.
Clinical documentation: AI transcribing consultations or summarising patient histories. Errors in medical context can affect care decisions.
Triage and scheduling: AI triaging patient requests and scheduling. Misrouted urgent cases create patient safety risks.
Medication safety: AI-assisted prescription checking or dosage calculations. Model failures in this context have direct safety implications.
Patient communication: AI handling appointment reminders, follow-ups, and queries. Lost messages or incorrect information erodes trust and care continuity.
We deploy AI only where it delivers measurable, governed value. Not for convenience or novelty. The governance gaps that appear even at leading AI vendors illustrate exactly why: ungoverned AI creates ungoverned risk. If you cannot map the supply chain behind an AI feature, you cannot manage it. And under NIS2, you are accountable for managing it.
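To make the cold chain monitoring case above concrete: in a governed deployment the model never overrides a hard safety limit, and anything it cannot classify confidently goes to a human. The sketch below is a simplified illustration under assumed thresholds; predict_spoilage_risk is a stand-in for whatever vendor model is actually in use.

```python
# Hypothetical guardrail around an AI spoilage-risk model in cold chain monitoring.
# The hard temperature limit always wins; the model only adds early warnings,
# and anything it cannot classify confidently is escalated to a human.

HARD_LIMIT_C = 8.0         # placeholder for the HACCP/regulatory limit you actually use
CONFIDENCE_FLOOR = 0.80    # below this, a human decides

def predict_spoilage_risk(readings: list[float]) -> tuple[str, float]:
    """Stand-in for the vendor model: returns (label, confidence)."""
    avg = sum(readings) / len(readings)
    return ("deviation" if avg > 6.0 else "normal", 0.90)

def assess(readings: list[float]) -> str:
    # 1. Deterministic rule first: the model never overrides the hard limit.
    if max(readings) > HARD_LIMIT_C:
        return "ALERT: hard limit exceeded -> hold product, notify QA"

    # 2. The model adds an early warning on top of the rule, never instead of it.
    label, confidence = predict_spoilage_risk(readings)
    if confidence < CONFIDENCE_FLOOR:
        return "ESCALATE: low model confidence -> human review with full sensor history"
    if label == "deviation":
        return "WARN: model predicts spoilage risk -> schedule inspection"
    return "OK"

print(assess([4.2, 5.1, 9.3]))   # the hard limit wins regardless of the model
print(assess([6.2, 6.5, 6.9]))   # model-driven early warning
```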
When AI-enabled processes break down, the root causes are typically not sophisticated. They are mundane governance gaps:
Invisible channel switching. User messages silently routed from email to chat to AI without notification. From a governance perspective, this is loss of transparency and traceability.
Fragmented context. Model updates, platform changes, or system integrations that break continuity. Later human agents or subsequent AI interactions lack the full history.
No clear owner of the whole chain. Email, chat, AI, and back-office systems behave as separate silos. No one is accountable for end-to-end resolution and data consistency.
Policy vs. reality gaps. The billing system has accepted a cancellation, yet the AI keeps insisting on a different process. The ERP shows stock available; the AI chatbot says otherwise.
None of this requires malicious AI. It is what happens when you treat AI as a tool you plug in, forgetting that you have extended your digital supply chain with more integration points, more moving parts, and more ways to lose context and ownership.
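One practical countermeasure to these gaps is to make every channel switch and every AI-to-human handoff an explicit, logged event, and to treat a handoff as valid only if the complete case history travels with it. The sketch below assumes no particular helpdesk or agent platform; the event fields, the sealed_loop check, and the case identifier are illustrative.

```python
from datetime import datetime, timezone

class CaseTrail:
    """Append-only audit trail for one case across channels, AI steps, and humans."""

    def __init__(self, case_id: str):
        self.case_id = case_id
        self.events: list[dict] = []

    def record(self, actor: str, channel: str, action: str, detail: str = "") -> None:
        # Every hop is logged, including channel switches and AI decisions.
        self.events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,       # "customer", "ai", "human:<name>", "system"
            "channel": channel,   # "email", "chat", "api", "internal"
            "action": action,     # "message", "route", "decision", "handoff"
            "detail": detail,
        })

    def handoff_to_human(self, agent: str) -> list[dict]:
        """A handoff only counts if the complete history travels with the case."""
        self.record("system", "internal", "handoff", f"escalated to {agent}")
        return list(self.events)

    def sealed_loop(self) -> bool:
        """Governance check: has this case ever left the AI/customer loop?"""
        return all(e["actor"] in ("ai", "customer") for e in self.events)

# Illustrative walk-through with hypothetical identifiers.
trail = CaseTrail("CASE-1042")
trail.record("customer", "email", "message", "refund request")
trail.record("ai", "chat", "route", "moved from email to chatbot")   # the switch is now visible
trail.record("ai", "chat", "decision", "refund denied")
print(trail.sealed_loop())                         # True: no human has seen this case yet
history = trail.handoff_to_human("billing agent")
print(len(history), "events transferred")          # full context accompanies the escalation
```

The design choice that matters is the append-only trail: whoever picks up the case next, human or AI, sees the same history, and a sealed AI loop becomes a query you can run rather than a customer complaint you discover.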
Executives do not need another 40-page framework. They need a shortlist of non-negotiables that connect AI governance to existing NIS2 obligations.
Make an explicit board-level decision: AI systems touching essential operations fall under your NIS2 risk management regime. Document this decision.
For every critical AI use case, diagram: communication channels, orchestration/routing, model providers, plugins/connectors, data stores and locations. One page per use case.
Nominate a process owner for each AI-enabled flow, not "the AI team." That owner is accountable for outcomes regardless of how many tools are chained together.
Require AI solutions to log channel switches, preserve case history across updates, and provide humans with complete context. Ban "sealed AI loops" with no escalation path.
Add AI and cloud model providers to your vendor risk programme. Cover data usage, locations, sub-processors, incident notification, and support SLAs in contracts.
Embed AI into your ISMS (Information Security Management System): access control, change management, secure development, business continuity. Treat critical AI configuration like code: review, test, document.
Define AI-specific incident scenarios: data exfiltration via AI, unsafe automated decisions, process outages. Include isolation procedures and evidence preservation.
Ensure management understands the fundamentals: LLMs predict tokens, not truth; context windows are finite; routing layers can silently drop information. This meets the "sufficient familiarity" requirement.
If AI touches HACCP-adjacent systems, cold chain monitoring, traceability, or GDSN product data, include it in your cybersecurity risk management procedure. The Finnish Food Authority supervises NIS2 compliance for food operators. Align AI governance with their expectations.
AI in patient pathways, clinical documentation, or medication management carries both NIS2 and sector-specific regulatory implications. Ensure AI governance integrates with existing patient safety and data protection frameworks.
AI in customs classification, route optimisation, or B2B communication affects trade compliance and customer relationships. Map AI dependencies in EDI/Peppol, WMS, and TMS systems as part of your digital supply chain assessment.
Use this checklist to assess your current AI governance posture and identify gaps.
Formally decide that AI systems tied to essential services fall under NIS2 risk management. Document the decision.
List all AI/ML tools in production or pilots, including external SaaS with embedded AI and internal experiments.
For each critical AI use case, diagram channels, orchestration, models, plugins, and data locations.
Each AI-enabled process has a named owner accountable for outcomes and risk - not just "the AI team."
Define how cases move between AI and humans. Require full history transfer. Ban sealed AI loops.
AI and cloud model providers included in vendor risk programme with documented assessments.
AI contracts cover data usage limits, incident notification, regulatory cooperation, and feature control.
AI embedded in existing access control, change management, secure development, and BC/DR policies.
Critical AI systems log prompts, context, tool calls, and outputs. Anomaly monitoring configured.
IR playbooks include AI-specific scenarios, isolation procedures, and evidence preservation.
Management trained on AI behaviour, limits, and risks to meet the "sufficient familiarity" requirement.
Practical "how to use AI safely" training for staff who interact with AI-enabled systems.
AI governance aligned with sector-specific expectations (Finnish Food Authority, Valvira, etc.).
AI architectures, risk assessments, controls, and approvals documented for regulatory demonstration.
AI is already part of your digital supply chain. The only question is whether you govern it deliberately or discover the gaps during an incident.
Finland's Cybersecurity Act is now in force. Management is personally accountable. Supply chain security - including AI vendors - is explicitly in scope.
The good news: if you have already implemented NIS2 risk management for your traditional IT and OT systems, extending that framework to AI is achievable. The same principles apply: map dependencies, assign owners, manage suppliers, integrate controls, prepare for incidents.
If you get this right, AI becomes a governed capability that supports resilience and compliance. If you ignore it, you risk finding yourself - and your customers - stuck in an ungoverned AI loop where no one owns the outcome.
And unlike a billing dispute, the cost of that loop in food safety, patient care, or trade compliance will not be trivial.