2026: Crisis Command · Major Incident Doctrine · Decision Architecture

Control Fails Before Systems Do.

Doctrine for organisations operating under pressure, uncertainty, and systemic disruption.

Crisis does not create failure. It exposes structures that were already weak. These are not playbooks. They are decision systems for environments where information is incomplete and consequences are irreversible.

Signature Doctrine Systems

The Architecture of Crisis Coherence

Four proprietary frameworks. Control must be established before action is taken.

System 01
Control Collapse Model™

Organisations fail when decision authority fragments under pressure. This model maps the cascade from initial disruption through authority fragmentation to operational paralysis.

Decision Authority · Authority Fragmentation
System 02
Crisis Decision Hierarchy

Single authority. Clear escalation. No ambiguity. The structural prerequisite for coherent action under time pressure.

Single Authority · Escalation Protocol
System 03
Failure Cascade Mapping

How small disruptions become systemic breakdowns. A diagnostic framework for identifying structural vulnerability before crisis reveals it.

Failure Patterns · Vulnerability Mapping
System 04
Operational Integrity Index

A measure of whether an organisation can still make coherent decisions. When this degrades, technical recovery becomes irrelevant.

Operational Metrics · Control Measurement
● Live Intelligence

Threat Intelligence Feed — April 2026

Curated threat intelligence from CISA, ENISA, Mandiant, CrowdStrike, Cyble, and proprietary doctrine analysis. Updated daily.

Critical — Ransomware

65 Active Ransomware Groups

808 victims claimed in March 2026. Cl0p exploiting file transfer zero-days. RansomHub and Akira expanding double-extortion operations. 87% of attacks now involve data exfiltration before encryption. Average ransom demand: $2.1M.

Source: Cyble, Mandiant — Updated 06 Apr 2026
Elevated — AI-Powered Threats

Deepfake-as-a-Service Now Commoditised

Real-time voice cloning and video impersonation available as subscription services on dark markets. Executive impersonation attacks targeting wire transfers. $25.6M Hong Kong fraud precedent. AI-generated phishing bypassing traditional email filters at scale.

Source: CrowdStrike, ENISA — Updated 06 Apr 2026
Critical — Zero-Day Exploits

Active Exploitation Campaigns

Chrome CVE-2025-2783 sandbox escape used in targeted espionage. FortiClient EMS SQLi (CVE-2023-48788) mass exploitation ongoing. Ivanti Connect Secure vulnerabilities chained for persistent access. CISA KEV catalogue expanding weekly.

Source: CISA KEV, Google TAG — Updated 06 Apr 2026
Monitoring — DDoS & Availability

Record 5.6 Tbps DDoS Mitigated

Cloudflare mitigated 5.6 Tbps attack in Q4 2025 — Mirai-variant botnet using 13,000 IoT devices. Hyper-volumetric attacks now routine. Application-layer attacks increasingly sophisticated with AI-generated request patterns.

Source: Cloudflare Radar — Updated 06 Apr 2026
Elevated — Supply Chain

npm & Open-Source Supply Chain Attacks

Typosquatting and dependency confusion attacks targeting npm ecosystem. Axios impersonation packages detected. SolarWinds-pattern persistent access campaigns observed in managed service providers. SBOM requirements accelerating under EU CRA.

Source: Snyk, ENISA — Updated 06 Apr 2026
Regulatory — NIS2 & DORA Enforcement

First NIS2 Penalties Issued in EU

EUR 850K penalty issued Q1 2026. DORA on-site ICT risk inspections underway. Ireland NIS2 transposition bill expected H1 2026 amid EC infringement proceedings. Director personal liability now statutory under NIS2 Art.20. CISO accountability precedents expanding.

Source: ENISA, ESAs — Updated 06 Apr 2026

Intelligence feed refreshed daily at 07:30 UTC from CISA, ENISA, Mandiant, CrowdStrike, Cyble, and proprietary doctrine analysis.

Incident Domains

Major Incident Categories

Five major incident types. Each requires distinct decision architecture and recovery doctrine.

Domain 01
Ransomware

Enterprise Disruption. A control failure event with technical symptoms. Becomes a major incident when core operations are disrupted, data integrity is uncertain, and authority is fragmented.

Extortion Attack · Encryption
Domain 02
Distributed Denial of Service

Operational Pressure. The attack is about which services survive sustained load. Decision architecture determines what remains available, what degrades, and what is abandoned.

Availability · Service Prioritisation
Domain 03
Data Exfiltration & Breach

Information Compromise. Breach doctrine: identify scope, notify regulatory bodies, establish disclosure governance, restore stakeholder confidence.

Data Integrity · Regulatory Notification
Domain 04
Identity & Privileged Access Compromise

Access Doctrine. Attacker moves laterally with legitimate credentials. Access must be frozen, integrity verified, authority restored before systems return.

IAM · Credential Compromise
Domain 05
Supply Chain Disruption

Cascade Doctrine. Third-party compromise spreads to core systems. Isolation, vendor accountability, and upstream verification are required. The organisation stops as a system.

TPRM · Vendor Risk
Major Incident Doctrine

Ransomware — Enterprise Disruption

Ransomware is not a cyber incident. It is a control failure event with technical symptoms.

Situation

Ransomware becomes a major incident when core operations are disrupted, data integrity is uncertain, and decision authority becomes fragmented.

Organisations often respond with paralysis. Attack teams move fast. Decision teams move slowly. Authority splits into technical response, legal liability, payment consideration, disclosure governance, and board notification.

This fragmentation is where control collapses.

First 60 Minutes: Control Establishment Protocol

Assign single decision authority. Board-mandated incident commander. One person. One decision chain. Speed increases when authority is single.

Halt uncontrolled system changes. Do not confuse urgency with direction. Lock non-isolated systems, preserve evidence integrity, and freeze all non-essential changes.

Isolate affected environments logically, not blindly. Segment based on control plane, not just network. Preserve backups offline.

Establish communication cadence. Board briefings at minutes 15, 30, and 60. Stakeholder notification at minute 45. Regulatory notification per legal mandate (usually within 72 hours).

Decision Architecture: Five Parallel Tracks

Track 1 — Containment: Isolate affected systems. Verify isolation. Document evidence. Preserve forensics. Assess scope: does the threat continue to spread?

Track 2 — Operational Continuity: Which systems restore first? Which operations are non-negotiable? Business continuity plan activation. Failover decisions. RTO/RPO enforcement.

Track 3 — Payment Consideration: Do not delegate. Board-level decision. Legal/regulatory consultation. Law enforcement notification. Negotiation only after board decision. Track payments if made.

Track 4 — Disclosure Governance: Who knows? Who needs to know? Regulatory filing thresholds. Customer notification timelines. Media response. Board communication.

Track 5 — Recovery Doctrine: Systems return online, but control must return to leadership. If the incident commander walked into a structure that was already fragmented, fragmentation returns.

Board-Level Questions
  • Can the incident commander make a payment decision, or must that escalate to the board?
  • What is the RTO for critical operations? Is backup restoration realistic or aspirational?
  • Which regulators must be notified? What are the timelines?
  • What happens to customer data if recovery fails? What is the disclosure plan?
  • What is the organisational narrative? (Story matters. Narrative controls the regulatory response.)
Failure Modes

Fragmentation: Multiple decision makers. Multiple decisions. No alignment. Speed increases. Control decreases. By hour 4, no one knows who decided what.

Technical Confidence: Dashboards show activity. Leadership assumes progress. Reality: direction is absent.

Payment Negotiation Before Control: Attackers negotiate while the organisation still cannot define scope. The demand rises. Decryption tools are unreliable. Recovery remains impossible.

Disclosure Delay: Regulators expect notification within 72 hours. Delaying to "understand scope" creates a secondary breach. Notification is mandatory.

Operational Restart Without Verification: Systems restore. But backups were poisoned. Attacker returns. Control did not return.

Recovery Doctrine

Systems returning online is not recovery. Control returning to leadership is.

Organisations that restore systems but do not restore decision authority remain operationally unstable. This is where secondary incidents originate.

Verification: All systems must prove integrity before acceptance. Cryptographic attestation. Not visual inspection.
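
A minimal sketch of what attestation can look like in practice, assuming a hash manifest captured and signed before the incident (the manifest path and JSON format here are illustrative):

```python
# Integrity attestation sketch: verify restored files against a known-good
# hash manifest captured before the incident. Path and format are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 rather than loading it into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest_path: Path, root: Path) -> list[str]:
    """Return relative paths that are missing or whose hash no longer matches."""
    manifest = json.loads(manifest_path.read_text())  # {"rel/path": "hex digest"}
    return [rel for rel, expected in manifest.items()
            if not (root / rel).exists() or sha256_of(root / rel) != expected]

failures = verify(Path("known_good_manifest.json"), Path("/restored"))
# Any mismatch blocks re-entry: the system has not proven integrity.
print("FAIL:", failures) if failures else print("All files attested.")
```

The manifest itself must be signed and held offline. A manifest the attacker could rewrite attests nothing.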

Structural Analysis: Why did this succeed? What control failed? Answer before resuming normal operations.

Authority Restoration: Incident commander hands control back to permanent leadership. Decision authority becomes consolidated again. Single strategic voice.

Operational Doctrine

DDoS — Service Availability Under Pressure

DDoS is not about attack. It is about which services survive sustained pressure.

Situation

DDoS becomes a major incident when critical customer-facing services degrade or fail. Unlike ransomware, data is not exfiltrated. But reputation, revenue, and trust are lost in minutes.

Decision architecture must answer: Which services must stay available? What degrades acceptably? What can be abandoned?

First 60 Minutes: Prioritisation Protocol

Identify critical services. Not all services have equal value. A payment processing outage is existential. A marketing website outage is reputational.

Activate DDoS mitigation. Upstream filtering, capacity increase, geographic load distribution.

Establish customer communication. Status page active. Public messaging. Board briefing. Regulatory notification if mandated.

Measure duration. Is the attack sustained? Is the attacker escalating? Or is this a brief probe?

Decision Architecture: Service Hierarchy

Tier 1 (Survive): Payment systems. Authentication systems. Core operational systems. This tier must remain available.

Tier 2 (Degrade Acceptably): Customer portals. Reporting systems. Capacity can reduce. Performance degrades. Availability maintained.

Tier 3 (Abandon): Analytics. Marketing automation. Reporting dashboards. Can be shut down without operational impact. Restore after the attack ceases.

Failover decision: Geographic isolation, service shedding, rate limiting. Which tool applies to which service?
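
A minimal load-shedding sketch under these tiers. Tier assignments and request budgets are illustrative assumptions; the point is that shedding is decided per tier, not per server:

```python
# Tier-aware load shedding sketch: under attack, Tier 3 is dropped first,
# Tier 2 is rate-limited, Tier 1 keeps the largest share of capacity.
# Tier names and bucket sizes are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Requests-per-second budgets per tier while under sustained load.
BUCKETS = {
    "tier1": TokenBucket(rate=1000, burst=2000),  # payments, auth: survive
    "tier2": TokenBucket(rate=100, burst=200),    # portals: degrade
    "tier3": TokenBucket(rate=0, burst=0),        # analytics: abandon
}

def admit(service_tier: str) -> bool:
    """Gate a request by its service tier; unknown tiers are rejected."""
    bucket = BUCKETS.get(service_tier)
    return bucket.allow() if bucket else False
```

The zero budget on Tier 3 is the abandon decision made executable.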

Failure Modes

Indiscriminate Mitigation: Shutting down all services to protect one. Result: the attacker wins. Everything is offline.

Inadequate Capacity Planning: Normal load runs close to the capacity limit. The attack adds 10x load. The organisation cannot absorb it.

No Decision Authority: Network team sheds traffic. Application team disagrees. Support team makes promises. No coordinated response.

External Dependency: DDoS mitigation is ISP-dependent. ISP cannot scale. Organisation is hostage to external capacity.

Recovery Doctrine

Recovery is stability under load, not absence of attack.

Organisations that restore service only when attack stops are not recovered. They are temporarily lucky.

Load Testing: After the attack ceases, simulate attack-level load. Can systems sustain it? Or do they cascade?
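
A minimal sketch of that simulation, assuming a staging endpoint (URL, volume, and concurrency are illustrative; a real exercise belongs in dedicated load-testing tooling):

```python
# Load-replay sketch: drive sustained concurrent load at a recovered endpoint
# and report error rate and tail latency. Targets and volumes are illustrative.
import asyncio
import time
import aiohttp

async def worker(session, url, results):
    start = time.monotonic()
    try:
        async with session.get(url) as resp:
            await resp.read()
            results.append((resp.status, time.monotonic() - start))
    except aiohttp.ClientError:
        results.append((0, time.monotonic() - start))  # status 0 = failed request

async def run(url: str, total: int = 5000, concurrency: int = 200):
    results: list[tuple[int, float]] = []
    connector = aiohttp.TCPConnector(limit=concurrency)
    async with aiohttp.ClientSession(connector=connector) as session:
        for batch_start in range(0, total, concurrency):
            batch = [worker(session, url, results)
                     for _ in range(min(concurrency, total - batch_start))]
            await asyncio.gather(*batch)
    errors = sum(1 for status, _ in results if status == 0 or status >= 500)
    p95 = sorted(lat for _, lat in results)[int(len(results) * 0.95)]
    print(f"errors: {errors}/{len(results)}, p95 latency: {p95:.3f}s")

# asyncio.run(run("https://staging.example.com/health"))  # hypothetical target
```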

Capacity Increase: Attack exposed capacity limits. Increase them. Permanently.

Supplier Accountability: Did the ISP or CDN provider fail? Renegotiate the contract. Activate a backup provider. Do not remain dependent on a single supplier.

Breach Doctrine

Data Exfiltration & Breach — Information Compromise

Breach doctrine: identify scope, notify regulatory bodies, establish disclosure governance, restore stakeholder confidence.

Situation

Data exfiltration becomes a major incident when personal, financial, or proprietary data leaves the organisation's control. Scope is unknown, and the attacker retains a copy indefinitely.

Regulatory response is mandatory. GDPR, CCPA, sector-specific regulations all require notification. Delay creates secondary breach.

First 60 Minutes: Scope & Notification

Identify data type. Is data encrypted in transit and at rest? Was encryption bypassed? Or was data exfiltrated unencrypted?

Quantify scope. How many records? What data elements? Personal identifiers or just usernames?

Regulatory notification. Most jurisdictions require notification within 72 hours. Begin drafting notification immediately. Do not wait for investigation completion.
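
The 72-hour clock is trivial arithmetic, but under pressure it gets lost. A sketch, assuming the GDPR Article 33 rule that the clock runs from awareness, not from containment:

```python
# Notification-clock sketch: GDPR Art. 33 runs 72 hours from the moment the
# organisation becomes aware of the breach, not from containment or root cause.
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime, hours: int = 72) -> datetime:
    """Return the regulatory notification deadline in UTC."""
    return aware_at.astimezone(timezone.utc) + timedelta(hours=hours)

aware = datetime(2026, 4, 6, 9, 15, tzinfo=timezone.utc)   # illustrative
deadline = notification_deadline(aware)
remaining = deadline - datetime.now(timezone.utc)
print(f"Deadline: {deadline.isoformat()} ({remaining} remaining)")
```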

Customer communication plan. What will you tell affected customers? When? Via what medium?

Decision Architecture: Five-Track Response

Track 1 — Forensics: What was exfiltrated? When? How? Preserve evidence. Do not overwrite logs.

Track 2 — Regulatory Notification: GDPR: 72 hours. CCPA: "without unreasonable delay." Other jurisdictions: varies. Do not delay for investigation completion.

Track 3 — Customer Notification: Affected customers must be informed. The notification must state what data was exposed, why it matters, what steps the organisation is taking, and what customers should do.

Track 4 — Credit Monitoring: If financial or identity data exfiltrated, offer credit monitoring for 12–24 months. Regulatory requirement in many jurisdictions.

Track 5 — Containment: Stop the bleeding. Close the exfiltration vector. Isolate affected systems. Verify attacker cannot continue.

Board-Level Questions
  • Can scope be determined quickly, or is investigation ongoing?
  • What is the regulatory exposure? Which regulators must be notified?
  • What is the customer notification message? What are the financial implications?
  • Does the organisation have cyber insurance? Can it cover breach costs?
  • What is the organisational narrative for the market? (Third-party breach vs. internal failure = different message)
Failure Modes

Scope Creep: Investigation reveals more data than initially assessed. Each wave of discovery requires a fresh notification. Regulatory exposure compounds.

Notification Delay: Waiting for a perfect investigation is itself a regulatory violation. Notification is mandatory; an incomplete investigation is acceptable. Update regulators as scope becomes clear.

Inadequate Customer Communication: "We had a breach" is not notification. Notification requires specificity: what data, why it matters, what customers should do.

No Credit Monitoring: Many jurisdictions mandate credit monitoring for identity data breaches. Omitting it creates secondary regulatory violation.

Recovery Doctrine

Recovery is trust restoration, not data recovery (the data is gone).

Stakeholder Communication: Continuous. Weekly updates to affected customers. Regulatory reports on containment progress. Board updates on resolution.

Root Cause Mitigation: Why did exfiltration succeed? Control failure? Third-party compromise? Fix it. Permanently.

Trust Signals: Third-party audit. Security certification. Regulatory validation. Visible restoration of controls.

Access Doctrine

Identity & Privileged Access Compromise

Attacker moves laterally with legitimate credentials. Access must be frozen, integrity verified, authority restored before systems return.

Situation

Identity compromise is the most dangerous major incident. Attacker has legitimate access. They look like an insider. Detection is hard. Scope is unclear.

If privileged accounts are compromised, the attacker can create backdoors, steal data, modify logs, and maintain persistence indefinitely.

First 60 Minutes: Credential Freeze Protocol

Identify compromised credentials. Which accounts? Privileged or standard? How long were they active?

Freeze all affected credentials. Force password reset. Revoke API keys. Revoke session tokens. Do not wait for investigation.
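
A sketch of the freeze as an ordered procedure. The idp client and its methods are hypothetical placeholders, not a vendor API; what matters is the order of operations and the audit trail:

```python
# Credential-freeze sketch. The idp object and its methods are hypothetical
# placeholders, not a real vendor API. The point is the order of operations
# and the audit trail, executed without waiting on the investigation.
import logging
from datetime import datetime, timezone

log = logging.getLogger("credential_freeze")

def freeze_account(idp, account_id: str) -> None:
    """Suspend the account, then kill everything that could keep it alive."""
    idp.suspend_user(account_id)            # block new logins first
    idp.revoke_all_sessions(account_id)     # kill existing sessions
    idp.revoke_api_tokens(account_id)       # API keys outlive passwords
    idp.expire_password(account_id)         # force reset on reinstatement
    log.warning("frozen account=%s at=%s", account_id,
                datetime.now(timezone.utc).isoformat())

def freeze_all(idp, compromised: list[str]) -> None:
    # Freeze every flagged account even if one call fails; a partial freeze
    # is the failure mode named below: one valid credential remains.
    for account_id in compromised:
        try:
            freeze_account(idp, account_id)
        except Exception:
            log.exception("freeze failed account=%s; escalate manually", account_id)
```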

Identify lateral movement. Where did attacker go? What systems were accessed? What data was touched?

Verify system integrity. Attacker may have created backdoor accounts. Search for: new user accounts, privilege escalations, new services, modified logs.
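
A sketch of that search over an exported Windows Security log. The JSON-lines export format is an assumption; the event IDs are the standard Windows Security identifiers for account creation and privileged-group changes:

```python
# Backdoor-hunt sketch: scan an exported Windows Security event log for
# account creation and privileged-group changes inside the incident window.
import json
from datetime import datetime

SUSPECT_EVENTS = {
    4720: "user account created",
    4728: "member added to global security group",
    4732: "member added to local security group",
    4756: "member added to universal security group",
}

def scan(log_path: str, window_start: datetime, window_end: datetime):
    """Return (timestamp, description, target_user) hits inside the window."""
    hits = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)  # assumed export: one JSON event per line
            ts = datetime.fromisoformat(event["timestamp"])
            if event["event_id"] in SUSPECT_EVENTS and window_start <= ts <= window_end:
                hits.append((ts, SUSPECT_EVENTS[event["event_id"]],
                             event.get("target_user")))
    return sorted(hits)
```

Every hit inside the window is treated as attacker activity until proven otherwise. Any account it names joins the freeze list.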

Decision Architecture: Access Restoration

Tier 1 — Credential Remediation: All affected credentials revoked. New credentials issued. Force re-authentication across organisation.

Tier 2 — Backdoor Elimination: Identify all attacker-created access points. Remove them. Verify removal.

Tier 3 — System Integrity Verification: All systems touched by attacker must prove integrity before re-entry. Cryptographic attestation. Not visual inspection.

Tier 4 — Privilege Re-Establishment: Affected privileged users must re-validate. Identity verification. Capability verification. Slow re-certification of privilege.

Failure Modes

Incomplete Credential Freeze: Attacker still has one valid credential. Attacker re-enters systems. Incident recycles.

Missed Backdoors: Attacker created hidden user accounts, API keys, or SSH access. Organisation believes incident is closed. Attacker remains.

Premature System Restoration: Systems restored before integrity verification complete. Attacker's modifications persist.

No Privilege Re-Certification: Privileged accounts are restored to the same users without re-validation. If the attacker stole a password, they regain access immediately.

Recovery Doctrine

Recovery is trustworthy identity, not fast identity restoration.

Identity System Audit: All access control systems must be audited. Active Directory, Okta, privilege management tools. Attacker may have modified these directly.

Privilege Model Redesign: Why did attacker succeed with legitimate credentials? Privilege was too broad. Principle of least privilege must be enforced.

Continuous Verification: Identity compromise requires ongoing suspicion. Behaviour analytics. Access pattern anomaly detection. Continuous monitoring.

Cascade Doctrine

Supply Chain Disruption

Third-party compromise spreads to core systems. Isolation, vendor accountability, and upstream verification are required. The organisation stops as a system.

Situation

Supply chain incidents are distinctive. The organisation did not fail; the vendor did. But the organisation's systems are compromised all the same.

Scope is unclear because the vendor's scope is unclear. Remediation is slow because the vendor drives the timeline. And the organisation may not even know it was compromised until the attacker activates the payload.

First 60 Minutes: Vendor Isolation & Assessment

Identify vendor compromise. Which product? Which version? When was it deployed?

Isolate vendor systems. If possible, network-isolate all affected systems. If isolation is itself dangerous (critical production), plan it carefully.

Assess organisational exposure. Which systems run the vendor's software? Which data is accessible? What is the blast radius?
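
A sketch of blast-radius assessment over per-system CycloneDX SBOMs, assuming the SBOMs are already collected centrally (the component name, versions, and directory layout are illustrative):

```python
# Blast-radius sketch: query per-system CycloneDX SBOMs for the compromised
# vendor component. Component name, bad versions, and layout are illustrative.
import json
from pathlib import Path

AFFECTED_COMPONENT = "vendor-agent"          # hypothetical component name
AFFECTED_VERSIONS = {"4.2.0", "4.2.1"}       # hypothetical bad versions

def exposed_systems(sbom_dir: Path) -> list[tuple[str, str]]:
    """Return (system, version) pairs running an affected component version."""
    exposed = []
    for sbom_file in sbom_dir.glob("*.json"):       # one SBOM per system
        sbom = json.loads(sbom_file.read_text())
        for component in sbom.get("components", []):  # CycloneDX component list
            if (component.get("name") == AFFECTED_COMPONENT
                    and component.get("version") in AFFECTED_VERSIONS):
                exposed.append((sbom_file.stem, component["version"]))
    return exposed

print(exposed_systems(Path("sboms/")))  # these systems define the blast radius
```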

Vendor communication. Request immediate technical briefing. What do they know? What have they not told you?

Decision Architecture: Isolation & Remediation

Track 1 — Network Isolation: Affected systems are isolated from the internet, air-gapped if possible. This limits the attacker's exfiltration capability.

Track 2 — Vendor Patch Timeline: When is the patch available? Is the organisation willing to patch production immediately, or does testing delay deployment?

Track 3 — Upstream Verification: Have other customers been compromised? Is vendor being transparent? Are regulators aware?

Track 4 — System Integrity: Even after patching, system integrity is suspect. May need rebuild from clean backup or full replacement.

Track 5 — Vendor Accountability: Contract renegotiation. Remediation timelines. Financial responsibility. Consider vendor replacement.

Failure Modes

Vendor Defensiveness: The vendor denies compromise or minimises severity. The organisation waits for the truth. Delay increases exposure.

Slow Patch Deployment: The vendor takes weeks to release a patch. The organisation is exposed. The patch eventually ships, but the window was long.

Insufficient Isolation: The affected system remains connected to the network. The attacker continues lateral movement. Isolation was incomplete.

No Supply Chain Verification: The organisation did not verify upstream vendors. The vendor was itself compromised through its own supplier. The chain extends further than expected.

Recovery Doctrine

Recovery is vendor independence and supply chain resilience.

Vendor Redundancy: Critical systems should have backup vendor. If primary vendor fails, secondary takes over. No single vendor should be mission-critical.

Supply Chain Audit: All vendor products must be periodically audited. Not just compliance checks. Security assessment. Code review if possible.

Contract Clauses: Contracts must include: security incident notification, remediation timeline commitments, liability for breach, supply chain transparency.

Emerging Doctrine

AI & Autonomous Systems — Incident Command

When AI systems fail, traditional incident response fails with them. Decision authority must adapt to non-deterministic systems, adversarial manipulation, and cascading model failures.

The Situation

AI systems are now embedded in critical business processes: fraud detection, credit decisioning, clinical triage, autonomous operations, content moderation. When these systems fail or are compromised, the failure mode is fundamentally different from traditional IT incidents.

Key differences: AI failures are often silent — the system continues to operate but produces wrong outputs. Traditional monitoring does not detect model drift, adversarial inputs, or training data poisoning. The blast radius is determined by how many downstream decisions depend on the compromised model.

Threat vectors: Deepfake-as-a-Service for executive impersonation. Adversarial inputs that bypass classification models. Training data poisoning that corrupts model behaviour over weeks. Prompt injection attacks against LLM-powered workflows. Model extraction attacks that steal proprietary AI capabilities.

First 60 Minutes

Minute 0–15 — Model Isolation: Identify all systems consuming output from the compromised AI model. Determine blast radius: how many business decisions are affected? Switch to manual fallback or rule-based override. Do not wait for root cause analysis to begin isolation.
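
A sketch of the switch itself: a gate in front of the model that routes every call to a deterministic fallback once isolation is declared. The model, rules engine, and flag here are hypothetical placeholders:

```python
# Model-isolation sketch: a kill switch in front of the model. Once the flag
# is set, every call routes to a deterministic rules fallback and is logged.
# The model function, rules engine, and flag are hypothetical placeholders.
import logging

log = logging.getLogger("model_gate")

class ModelGate:
    def __init__(self, model_fn, rules_fn):
        self.model_fn = model_fn      # the (possibly compromised) model
        self.rules_fn = rules_fn      # conservative deterministic fallback
        self.isolated = False         # flipped by the incident commander

    def isolate(self, reason: str) -> None:
        self.isolated = True
        log.critical("model isolated: %s", reason)

    def decide(self, features: dict):
        if self.isolated:
            # Accept degraded accuracy over compromised output (Track 1 below).
            return self.rules_fn(features)
        return self.model_fn(features)

# Usage: gate = ModelGate(fraud_model.predict, fraud_rules.evaluate)
# gate.isolate("suspected training data poisoning"); gate.decide(txn)
```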

Minute 15–30 — Decision Authority: AI incidents require cross-functional command. Data science alone cannot arbitrate business impact. Establish incident commander with authority over: model rollback decisions, customer communication, regulatory notification, and business continuity.

Minute 30–60 — Impact Assessment: Determine: how long has the model been compromised? How many decisions were affected? Are those decisions reversible? What is the regulatory exposure (EU AI Act, sector-specific requirements)? Begin evidence preservation for forensic analysis of model behaviour, training data, and inference logs.

Decision Architecture

Track 1 — Model Containment: Rollback to last known-good model version. If no clean version exists, switch to deterministic rules engine. Accept degraded performance over compromised AI output.

Track 2 — Impact Quantification: Enumerate every decision made by the compromised model during the exposure window. Classify decisions by reversibility: fully reversible, partially reversible, irreversible. Prioritise remediation of irreversible decisions.

Track 3 — Regulatory & Legal: EU AI Act requires incident reporting for high-risk AI systems. Determine classification of affected AI system. Prepare notification to relevant supervisory authority. Document all containment actions taken.

Track 4 — Stakeholder Communication: Customers whose decisions were affected by compromised AI must be notified. Board requires briefing on AI risk exposure. Regulators require technical incident report with model performance data.

Track 5 — Root Cause & Hardening: Was this adversarial attack, data poisoning, model drift, or infrastructure compromise? Implement model monitoring (input validation, output anomaly detection, drift detection). Establish AI-specific incident playbooks.

Failure Modes

Silent Degradation: AI model produces plausible but incorrect outputs. No alerts trigger. Downstream decisions accumulate errors over weeks. By the time detection occurs, remediation scope is massive.

Adversarial Exploitation: Attacker manipulates model inputs to produce desired outputs. Fraud detection model approves fraudulent transactions. Content moderation model approves prohibited content. Organisation does not detect manipulation because model metrics appear normal.

Cascade Through Dependencies: One compromised model feeds data to three other models. Downstream models inherit corrupted inputs. Error propagates through ML pipeline. Blast radius exceeds initial assessment because dependency mapping was incomplete.

Regulatory Exposure: Organisation fails to report AI incident within required timeframe. Regulatory authority determines AI system was high-risk under EU AI Act. Penalty is assessed not just for the incident but for failure to classify, monitor, and report.

Recovery Doctrine

Recovery from AI incidents requires more than model retraining.

Model Governance: Implement model inventory with risk classification. Every AI model in production must have: owner, risk tier, monitoring dashboard, rollback procedure, and manual fallback process.

Continuous Validation: Deploy automated model monitoring: input distribution monitoring, output anomaly detection, performance drift alerts, adversarial input detection. Alert thresholds must be set by business impact, not just statistical deviation.
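
A sketch of input distribution monitoring using a two-sample Kolmogorov-Smirnov test per feature against a training baseline. The threshold is a placeholder; as stated above, alert thresholds belong to business impact, not statistical convention:

```python
# Input-drift sketch: compare the live distribution of each feature against
# its training baseline with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(baseline: dict[str, np.ndarray],
                 live: dict[str, np.ndarray],
                 p_threshold: float = 0.01) -> list[str]:
    """Return features whose live distribution has drifted from baseline."""
    drifted = []
    for feature, base_values in baseline.items():
        stat, p_value = ks_2samp(base_values, live[feature])
        if p_value < p_threshold:
            drifted.append(f"{feature}: KS={stat:.3f}, p={p_value:.2e}")
    return drifted

rng = np.random.default_rng(0)
baseline = {"amount": rng.lognormal(3.0, 1.0, 10_000)}   # illustrative data
live = {"amount": rng.lognormal(3.4, 1.0, 2_000)}        # shifted: should alert
print(drift_report(baseline, live))
```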

AI Incident Playbook: Traditional IR playbooks do not cover AI-specific scenarios. Develop playbooks for: model compromise, training data poisoning, adversarial attack, model extraction, and AI-generated social engineering.

Board-Level AI Risk: Board must understand AI risk exposure. Quarterly AI risk briefing covering: model inventory, incident history, regulatory compliance status, and emerging threat vectors (deepfakes, prompt injection, autonomous system failures).
