SB
2026-04-04 · 22 min read

DORA is an architecture, not a checklist

Institutions reading DORA as a compliance exercise are building exactly the kind of fragile systems the regulation was designed to prevent.

Eighteen months after DORA entered into force, the pattern is clear. Most financial institutions have treated the Digital Operational Resilience Act (Regulation (EU) 2022/2554) as a documentation exercise. Map controls to articles. Fill in the register of information. Update the vendor contracts with the mandatory clauses from Article 30. Schedule the annual pen test. Pass the audit.

This approach satisfies the letter of DORA while comprehensively missing its intent. The regulation's language is precise, and it demands capability, not documentation. Article 6(2) requires that the ICT risk management framework include "strategies, policies, procedures, ICT protocols and tools that are necessary to duly and adequately protect all information assets and ICT assets." Article 8(2) mandates that financial entities identify all sources of ICT risk "on a continuous basis." Article 11(1) requires an ICT business continuity policy that forms "an integral part of the overall business continuity policy." Article 28(1) states that ICT third-party risk must be managed "as an integral component of ICT risk."

The word "integral" appears repeatedly in DORA, and it is doing enormous work each time. When Article 28(1) says third-party risk must be managed as an integral component of ICT risk, it means third-party risk cannot be siloed in procurement or vendor management. It must be woven into the institution's operational risk architecture. When Article 11(1) says ICT business continuity must be an integral part of overall business continuity, it means IT recovery plans cannot exist in a separate binder from the business recovery plans. These are engineering problems. They require architectural responses. A compliance checklist cannot solve them.

Pillar 1: ICT risk management as an observability architecture

DORA's first pillar (Articles 5 through 16) is commonly read as "have a risk management framework." Institutions check the box by producing a policy document, assigning a Chief Information Security Officer, and establishing a risk register. This satisfies approximately none of what the regulation actually requires.

Article 5(2) mandates that the management body "define, approve, oversee and be responsible for the implementation of all arrangements related to the ICT risk management framework." Critically, Article 5(4) requires that board members "actively keep up to date with sufficient knowledge and skills to understand and assess ICT risk and its impact on the operations of the financial entity, including by following specific training on a regular basis." This is a governance requirement with architectural implications: the systems that surface ICT risk information must be designed for board-level consumption, with appropriate abstraction and real-time relevance.

Article 8 is where the operational demands become sharp. Article 8(1) requires institutions to "identify, classify and adequately document all ICT supported business functions, roles and responsibilities, the information assets and ICT assets supporting those functions, and their roles and dependencies in relation to ICT risk." Article 8(2) escalates this to continuous operation: financial entities must "on a continuous basis, identify all sources of ICT risk, in particular the risk exposure to and from other financial entities."

Article 8(4) goes further still, requiring institutions to "identify all information assets and ICT assets, including those on remote sites, network resources and hardware equipment" and to "map the configuration of the information assets and ICT assets and the links and interdependencies between the different information assets and ICT assets." Article 8(5) adds a third-party dimension: institutions must "identify and document all processes that are dependent on ICT third-party service providers" and "identify interconnections with ICT third-party service providers that provide services that support critical or important functions."

Read these requirements together. What DORA demands is a continuously updated, dependency-aware map of every ICT asset, every business function those assets support, every third-party provider those assets depend on, and the interconnections between them. That is not a risk register. That is an observability architecture: a system that ingests configuration data, dependency information, and real-time operational status, and maintains a live model of the institution's ICT landscape.

Most institutions do not have this. They have a CMDB (configuration management database) that was last updated during a migration project, a spreadsheet of vendor contracts that procurement maintains separately, and an architecture diagram that an enterprise architect drew two years ago. None of these systems talk to each other. None of them update continuously. None of them can answer the question that Article 8(4) implicitly demands: "If this specific ICT asset fails right now, which business functions are affected, through which dependency chains, with what fallback options?"

Building this capability requires investment in infrastructure observability (automated asset discovery, dependency mapping, configuration drift detection), integration between IT asset management and vendor risk management systems, and an abstraction layer that translates technical dependency data into business-function impact assessments. These are engineering projects with multi-quarter timelines. They cannot be replaced by a policy document.
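To make the requirement concrete: the core query of such an observability architecture is a traversal from a failed asset to the business functions it ultimately supports. The sketch below uses an entirely hypothetical asset inventory (the names and graph structure are invented for illustration):

```python
from collections import deque

# Hypothetical inventory: an edge A -> B means "B depends on A", so a
# failure of A propagates along outgoing edges toward B.
DEPENDENTS = {
    "db-cluster-3":   ["payments-api", "ledger-service"],
    "msg-broker-1":   ["payments-api"],
    "payments-api":   ["SEPA transfers"],      # a business function
    "ledger-service": ["account reporting"],   # a business function
}
BUSINESS_FUNCTIONS = {"SEPA transfers", "account reporting"}

def affected_functions(failed_asset: str) -> set:
    """Breadth-first traversal from a failed asset to every business
    function reachable through its dependency chain."""
    seen, queue, hits = {failed_asset}, deque([failed_asset]), set()
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent in seen:
                continue
            seen.add(dependent)
            if dependent in BUSINESS_FUNCTIONS:
                hits.add(dependent)
            else:
                queue.append(dependent)
    return hits

# db-cluster-3 reaches both functions; msg-broker-1 only one
assert affected_functions("db-cluster-3") == {"SEPA transfers", "account reporting"}
assert affected_functions("msg-broker-1") == {"SEPA transfers"}
```

In production this graph would be fed by automated discovery and change detection rather than hand-maintained dictionaries; the point is that the question Article 8(4) implicitly demands reduces to a reachability query.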

Pillar 2: Incident management as an event-driven architecture

DORA's second pillar (Articles 17 through 23) covers ICT-related incident management. The compliance interpretation is straightforward: have an incident response plan, classify incidents by severity, report to regulators within defined timeframes.

The architectural demand is more rigorous. Article 17(1) requires institutions to "establish and implement an ICT-related incident management process to detect, manage and notify ICT-related incidents." Article 17(3) requires that this process include "early warning indicators" and "procedures to identify, track, log, categorise and classify ICT-related incidents according to their priority and severity and according to the criticality of the services impacted."

The phrase "early warning indicators" is easy to skip over. It implies that the incident management system must do more than react to events. It must detect patterns that precede incidents: anomalous network traffic, unusual authentication patterns, degraded vendor SLA performance, clustering of minor incidents that may indicate a systemic issue. This is a detection and correlation architecture, not an incident response runbook.
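One minimal form of an early warning indicator is a sliding-window correlation rule: flag a subsystem when minor incidents cluster faster than its baseline. The class below is a sketch; the window and threshold values are illustrative, not anything DORA or its technical standards prescribe:

```python
from collections import deque

class EarlyWarningIndicator:
    """Raise a flag when at least `threshold` minor incidents hit one
    subsystem within `window` seconds -- a crude correlation rule for
    'clustering of minor incidents that may indicate a systemic issue'.
    Window and threshold values are illustrative, not prescribed."""

    def __init__(self, window: float = 3600.0, threshold: int = 3):
        self.window, self.threshold = window, threshold
        self.events = {}  # subsystem -> deque of timestamps

    def record(self, subsystem: str, ts: float) -> bool:
        q = self.events.setdefault(subsystem, deque())
        q.append(ts)
        while q and ts - q[0] > self.window:  # age out old events
            q.popleft()
        return len(q) >= self.threshold       # True -> early warning

ewi = EarlyWarningIndicator()
assert not ewi.record("auth-gateway", ts=0)
assert not ewi.record("auth-gateway", ts=600)
assert ewi.record("auth-gateway", ts=1200)    # third event within the hour
```

A real detection architecture would correlate across signal types (network anomalies, authentication patterns, vendor SLA degradation), but each correlation rule has this same shape: a stream, a window, a baseline.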

Article 18(1) specifies the classification criteria for major ICT-related incidents, including thresholds based on the number of clients affected, the duration of the incident, the geographic spread, the data losses, the criticality of services impacted, and the economic impact. Institutions must classify incidents against these criteria in real time, because Article 19(4) sets tight reporting windows: initial notification within four hours of classifying an incident as major, an intermediate report within 72 hours, and a final report within one month.

Classifying an incident against six dimensions in real time requires automated severity scoring. A human analyst cannot manually assess client impact, geographic spread, data loss, service criticality, economic impact, and duration against regulatory thresholds within four hours while also managing the incident itself. The system must provide automated classification with manual override, pre-populated reporting templates, and escalation workflows that trigger regulatory notification when thresholds are breached.
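A sketch of what automated classification might look like. The six dimensions follow Article 18, but every threshold value below is a placeholder: the binding thresholds are set in the Commission's regulatory technical standards on incident classification, not here.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    clients_affected_pct: float   # share of clients impacted
    duration_hours: float
    countries_affected: int       # geographic spread
    data_lost: bool
    critical_service_hit: bool
    economic_impact_eur: float

# Placeholder thresholds only -- the binding values live in the
# regulatory technical standards on incident classification.
def auto_classify_major(inc: Incident) -> bool:
    breached = [
        inc.clients_affected_pct > 10.0,
        inc.duration_hours > 24,
        inc.countries_affected >= 2,
        inc.data_lost,
        inc.critical_service_hit,
        inc.economic_impact_eur > 100_000,
    ]
    # treat a multi-dimension breach as major; keep a manual override
    # path so an analyst can correct the automated verdict
    return sum(breached) >= 2

assert auto_classify_major(Incident(35.0, 48, 5, False, True, 2_000_000))
assert not auto_classify_major(Incident(0.1, 1, 1, False, False, 0))
```

The value of automating this step is not the arithmetic; it is that the classification result can trigger the reporting workflow the moment the thresholds are breached, starting the four-hour clock with the notification already in motion.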

The 2017 NotPetya attack illustrates why this matters. When Maersk's IT systems were compromised through a malicious update to Ukrainian tax software, the company lost roughly 45,000 PCs and 4,000 servers within hours, and operations were disrupted at port terminals across its 76-terminal network. Under DORA's classification criteria, this would have triggered immediate major incident reporting across multiple dimensions: client impact (global), duration (weeks), service criticality (core operations), economic impact ($250-300 million). An institution relying on manual classification processes would have been unable to meet DORA's four-hour initial notification window while simultaneously managing the crisis itself.

The architectural response is an event-driven incident management system: automated ingestion of operational alerts, automated severity scoring against DORA's Article 19 criteria, pre-built regulatory reporting templates, and escalation logic that triggers notification workflows without human intervention for the classification step.

Pillar 3: Resilience testing as continuous security validation

DORA's third pillar (Articles 24 through 27) covers digital operational resilience testing. Most institutions read this as "conduct penetration testing." DORA requires something substantially more demanding.

Article 24(1) establishes the baseline: financial entities must "establish, maintain and review a sound and comprehensive digital operational resilience testing programme as an integral part of the ICT risk management framework." Article 24(2) specifies that this programme must include "a range of assessments, tests, methodologies, practices and tools," and Article 24(6) requires appropriate tests, at least yearly, on "all ICT systems and applications supporting critical or important functions." Article 25(1) lists the specific testing types: vulnerability assessments, open source analyses, network security assessments, gap analyses, physical security reviews, questionnaires and scanning software solutions, source code reviews, scenario-based tests, compatibility testing, performance testing, end-to-end testing, and penetration testing.

This list is extensive by design. DORA does not want institutions to run an annual penetration test and call it done. It wants a testing programme that continuously validates resilience across multiple dimensions, using multiple methodologies, against all systems supporting critical functions.

Articles 26 and 27 introduce the genuinely paradigm-shifting requirement: threat-led penetration testing (TLPT). Article 26(1) mandates that designated financial entities "carry out at least every 3 years advanced testing by means of TLPT." Article 26(2) specifies that each TLPT "shall cover several or all critical or important functions of a financial entity, and shall be performed on live production systems supporting such functions."

The phrase "live production systems" is critical. This is not a test against a staging environment or a simulation. DORA requires red team operations against the actual production infrastructure that processes real transactions, holds real customer data, and connects to real payment networks. Article 26(4) extends this to third-party providers: where ICT services supporting critical functions have been outsourced, the TLPT scope must include those outsourced services. Article 30(3)(d) requires institutions to contractually obligate their vendors to participate in TLPT.

The Commission Delegated Regulation (EU) 2025/1190, published in June 2025, now provides the detailed RTS for TLPT execution, including mandatory purple teaming phases, specific scoping requirements, and reporting obligations. The first TLPTs under DORA are expected to commence in late 2026 or early 2027, and they will fundamentally change how financial institutions approach security validation.

The architectural implication: institutions need continuous testing infrastructure that supports multiple testing methodologies, schedules them against critical systems on a rolling basis, and integrates test results back into the ICT risk management framework. The TLPT requirement additionally demands established relationships with qualified external testers (Article 27 specifies certification, expertise, and insurance requirements), contractual frameworks with vendors enabling production testing, and internal coordination capabilities for managing red team operations without disrupting live services.

An annual pen test report sitting in a SharePoint folder does not constitute a resilience testing programme. What DORA describes is a continuous security validation capability embedded in the institution's operational rhythm.

Pillar 4: Third-party risk as a dependency graph

DORA's fourth pillar (Articles 28 through 44) addresses ICT third-party risk management. This is the pillar most institutions have focused on, primarily because of the register of information requirement in Article 28(3). And it is the pillar where the gap between compliance and architecture is widest.

Article 28(1) opens with the foundational principle: "Financial entities shall manage ICT third-party risk as an integral component of ICT risk within their ICT risk management framework." The word "integral" again. Third-party risk must live inside the same framework that manages all ICT risk, sharing the same governance, the same monitoring infrastructure, and the same reporting lines. Siloing it in procurement or vendor management violates the regulation's explicit intent.

Article 28(3) introduces the register of information: financial entities must "maintain and update at entity level, and at sub-consolidated and consolidated levels, a register of information in relation to all contractual arrangements on the use of ICT services provided by ICT third-party service providers." The register must distinguish between services supporting critical or important functions and those that do not. It must be reported to competent authorities at least yearly, and made available in full upon request.

Most institutions have implemented this register as a spreadsheet. Some have built it in a GRC (governance, risk, and compliance) tool. A few have populated a database. Almost none have built what the regulation actually implies: a queryable dependency graph.

Consider what the register must answer. Article 29 requires that before entering into a contractual arrangement, institutions assess "the risk of ICT concentration," specifically "whether the conclusion of a contractual arrangement in respect of ICT services supporting critical or important functions would lead to" concentration risk. The article elaborates: institutions must assess "the number of ICT third-party service providers" they depend on and evaluate "the level of substitutability of the ICT services."

A spreadsheet cannot answer concentration risk questions. If an institution contracts with AWS for cloud hosting, Salesforce for CRM (which runs on AWS), and a payment processor whose primary data centre is also on AWS, the concentration risk is invisible in a flat register that lists three separate vendor relationships. Only a graph-based representation, where nodes are entities (vendors, services, data centres, business functions) and edges are dependencies with metadata (criticality, contractual terms, substitutability), can reveal that three apparently independent vendor relationships create a single point of failure through shared infrastructure.
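A minimal sketch of why the graph representation matters, using invented vendors and infrastructure names: inverting the vendor-to-infrastructure edges surfaces the shared provider that a flat register hides.

```python
# Hypothetical register rows: vendor -> underlying infrastructure
# providers, discovered by mapping the subcontracting chain.
VENDOR_INFRA = {
    "AWS (direct hosting)": ["AWS eu-central-1"],
    "Salesforce CRM":       ["AWS eu-central-1"],
    "PayCo processor":      ["AWS eu-central-1", "Equinix FR5"],
    "HR SaaS":              ["Azure westeurope"],
}

def concentration_points(min_vendors: int = 2) -> dict:
    """Invert the edges: infrastructure shared by several apparently
    independent vendor relationships is a concentration point."""
    by_infra = {}
    for vendor, infra_list in VENDOR_INFRA.items():
        for infra in infra_list:
            by_infra.setdefault(infra, []).append(vendor)
    return {i: v for i, v in by_infra.items() if len(v) >= min_vendors}

# three separate register rows collapse onto one shared dependency
assert set(concentration_points()) == {"AWS eu-central-1"}
assert len(concentration_points()["AWS eu-central-1"]) == 3
```

The flat register lists four vendor relationships; the inverted graph shows that three of them terminate in the same data centre region, which is precisely the concentration pattern Article 29 asks institutions to assess.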

This is precisely the architectural approach that Eryndal applies to regulatory documents. The same graph-based model (nodes, edges, traversable relationships) applies directly to ICT dependency mapping. When Eryndal builds a knowledge graph of 8 million regulatory nodes with cross-references and hierarchical relationships, the underlying principle is identical to what a proper DORA register of information requires: structured entities connected by typed relationships, queryable in ways that reveal patterns invisible in flat representations.

The graph makes tractable several DORA requirements that a flat register cannot satisfy:

Concentration risk analysis (Article 29). Traverse the dependency graph to find all paths from critical business functions to a single vendor, data centre, or geographic region. If three critical functions depend on the same underlying provider through different contractual arrangements, the graph reveals the concentration that the spreadsheet hides.

Exit strategy planning (Article 28(8)). DORA requires exit strategies for all ICT services supporting critical functions. An exit strategy requires understanding what depends on the service being exited, which alternative providers could substitute, and what the transition path looks like. These are graph traversal problems: follow the dependency edges upstream from the exiting vendor to identify affected business functions, then search for alternative paths.

Subcontractor chain visibility (Article 28(3)). The register must include not only direct ICT providers but also "all the subcontractors that ensure the provision of the ICT service" for critical functions. This is inherently a graph problem: the institution's direct vendor subcontracts to a cloud provider, which subcontracts infrastructure to a data centre operator, which sources power from a specific utility. The subcontracting chain is a directed graph, and understanding the institution's true dependency profile requires traversing it to its leaves.

Impact analysis for vendor incidents. When a vendor experiences an outage (or a NotPetya-scale compromise), the institution needs to answer immediately: which of our critical functions are affected? The graph provides this answer through reverse dependency traversal from the affected vendor node to all connected business function nodes.
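The impact-analysis query in the last point reduces to a reverse-dependency traversal. A sketch over a hypothetical register, in which a single vendor reaches the institution both through direct deployment and through two other providers' infrastructure:

```python
# Hypothetical edges: "X -> Y" means Y depends on X. One vendor reaches
# the institution directly and through two other providers.
EDGES = {
    "EndpointVendor":    ["workstation-fleet", "ITOutsourcer", "CloudHost"],
    "ITOutsourcer":      ["core-banking"],
    "CloudHost":         ["core-banking", "trading-platform"],
    "workstation-fleet": ["regulatory-filing"],
}
CRITICALITY = {
    "core-banking": "critical",
    "trading-platform": "critical",
    "regulatory-filing": "important",
}

def impact(vendor: str) -> dict:
    """Reverse-dependency traversal: every business function reachable
    from the affected vendor, tagged with its criticality."""
    seen, stack, hit = {vendor}, [vendor], {}
    while stack:
        for nxt in EDGES.get(stack.pop(), []):
            if nxt in seen:
                continue
            seen.add(nxt)
            if nxt in CRITICALITY:
                hit[nxt] = CRITICALITY[nxt]
            stack.append(nxt)
    return hit

assert impact("EndpointVendor") == {
    "core-banking": "critical",
    "trading-platform": "critical",
    "regulatory-filing": "important",
}
```

The query answers, in milliseconds, the question that otherwise takes a round of phone calls: which functions are reached, through which paths, at what criticality.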

Consider a concrete example. In July 2024, a faulty content update pushed to CrowdStrike Falcon endpoint protection software caused Windows systems running the agent to crash with a blue screen of death. The outage cascaded across airlines, hospitals, banks, media companies, and emergency services globally. For a financial institution running CrowdStrike on its endpoint fleet, the dependency graph would have revealed the exposure instantly.

The institution contracts with CrowdStrike for endpoint protection. CrowdStrike agents run on workstations used by trading desk staff, on servers hosting the core banking platform, and on laptops used by compliance officers processing regulatory filings. The graph encodes these relationships. But the graph also reveals second-order dependencies. The institution's outsourced IT service provider also runs CrowdStrike on its infrastructure. The institution's cloud hosting provider uses CrowdStrike for its internal security monitoring. Neither of these dependencies is visible in a flat vendor register that lists CrowdStrike as a single line item. In the graph, CrowdStrike appears as a node with edges reaching into the institution's operations through three distinct paths: direct deployment, vendor infrastructure, and cloud provider infrastructure. The concentration risk is immediately visible.

When the faulty update triggers, the graph enables instant impact assessment. Traverse all edges from the CrowdStrike node. Identify every business function reached through any path. Classify each affected function by criticality. Activate the corresponding business continuity procedures. Report to the competent authority under Article 19 if the incident qualifies as major (which, given the breadth of impact, it almost certainly does).

Without the graph, this analysis happens manually. Someone calls the IT team. Someone else calls the outsourced provider. A third person checks the cloud hosting contract. By the time the full impact is understood, the four-hour notification window under Article 19(4) may already have elapsed.

This is what DORA means by "integral." The dependency information, the incident detection, the impact assessment, the regulatory reporting, and the business continuity activation must be connected into a single system. Fragmentation across departments and spreadsheets produces exactly the kind of opaque, slow-to-respond institutional behaviour that the regulation was designed to eliminate.

The BaFin guidance published in May 2025 confirms this operational reality. After the April 30, 2025 deadline for initial register submission, BaFin indicated it would use the register data for year-over-year comparison to identify new arrangements and changing dependency patterns. The register functions as a living data structure, one that regulators will actively analyse for concentration risk and systemic dependencies across the entire sector.

Pillar 5: Information sharing as a trust network

DORA's fifth pillar (Article 45) covers information sharing arrangements. This is the least discussed pillar, partly because it is voluntary and partly because it is the hardest to operationalise.

Article 45(1) states that financial entities "may exchange amongst themselves cyber threat information and intelligence, including indicators of compromise, tactics, techniques, and procedures, cyber security alerts and configuration tools." The permissive "may" masks the strategic significance: institutions that participate in threat intelligence sharing networks gain access to early warning indicators that can shift their risk posture before an incident materialises. Those that do not participate are operating with a narrower field of view.

The architectural requirement is a trust network with structured data exchange. Threat intelligence must be classified by confidence level and source reliability. Sharing protocols must protect proprietary information while enabling actionable signal exchange. And the ingested intelligence must feed into the ICT risk management framework described in Pillar 1, updating risk assessments and detection rules based on newly identified threats.

This connects directly to the continuous monitoring requirement in Article 8(2). A threat intelligence feed that lands in an analyst's inbox is not integrated into the risk management framework. An automated pipeline that ingests structured threat intelligence, maps it to the institution's asset inventory, and adjusts detection rules accordingly is.
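A sketch of the difference, with invented field names and assets: the indicator is matched against the asset inventory and emitted as a detection rule only where it is actually relevant, instead of landing in an inbox.

```python
# Hypothetical asset inventory: which software stack runs where.
ASSET_SOFTWARE = {
    "payments-api": {"nginx", "openssl"},
    "hr-portal":    {"apache-httpd"},
}

def ingest_indicator(ioc: dict) -> list:
    """Turn a structured indicator into detection rules, but only for
    assets that actually run the targeted software. Field names
    ('indicator', 'affected_software', 'confidence') are invented."""
    if ioc.get("confidence", 0.0) < 0.5:   # discard low-confidence signal
        return []
    return [f"alert on {ioc['indicator']} at {asset}"
            for asset, stack in ASSET_SOFTWARE.items()
            if ioc["affected_software"] in stack]

rules = ingest_indicator({"indicator": "203.0.113.7",
                          "affected_software": "nginx",
                          "confidence": 0.9})
assert rules == ["alert on 203.0.113.7 at payments-api"]
```

The confidence gate reflects the classification-by-source-reliability requirement described above; in practice the structured input would arrive in a standard format such as STIX rather than a bare dictionary.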

Continuous resilience, not annual review

Across all five pillars, a single architectural requirement recurs: continuity.

Article 6(5) requires that the ICT risk management framework "be continuously improved on the basis of lessons derived from implementation and monitoring." Article 8(2) requires continuous identification of ICT risk sources. Article 9(1) mandates that protection and prevention measures be "continuously updated." Article 10(1) requires mechanisms to "promptly detect anomalous activities." Article 13(3) requires that lessons from incidents and testing "be duly incorporated on a continuous basis into the ICT risk assessment process."

"Continuous" is incompatible with annual review cycles. It demands a system that ingests events (vendor incidents, regulatory enforcement actions, CVE disclosures, threat intelligence, internal audit findings) and updates the institution's risk posture in real time. Each new data point shifts the model. An outsourcing relationship that experienced zero incidents for ten years carries different evidence than one that experienced a two-day disruption last quarter. A vulnerability disclosed in a critical vendor's software changes the risk profile of every business function that depends on that vendor, immediately and automatically.

This is the architectural pattern behind Roach's prediction engine. Each new event feeds into the causal graph and shifts predicted threshold breach timelines. The model does not wait for an annual review to incorporate new evidence. It updates continuously, because the threat environment changes continuously and a resilience assessment based on last quarter's data is already stale.

Proportionality is not an exemption

Article 4 of DORA establishes a proportionality principle: requirements must be implemented "taking into account the size and the overall risk profile of the financial entity, and the nature, scale and complexity of its services, activities and operations." Article 16 provides a "simplified ICT risk management framework" for certain smaller entities.

Many institutions have interpreted proportionality as permission to do less. This misreads the principle. Proportionality means the implementation can be simpler. It does not mean the capability can be absent. A small payment institution with 50 employees still needs to know its ICT dependencies (Article 8). It still needs to detect incidents promptly (Article 10). It still needs to maintain business continuity during ICT disruptions (Article 11). It still needs to manage third-party risk as an integral component of its risk framework (Article 28). The systems it builds to do these things can be simpler and less automated than those of a systemically important bank. But the capabilities must exist.

The risk of hiding behind proportionality is concrete. When a vendor outage cascades through a smaller institution's operations and critical functions are offline for 72 hours, "we applied proportionality" will not satisfy the regulator asking why the institution had no visibility into its dependency chain and no exit strategy for the affected vendor.

What this means for architecture teams

DORA implicitly demands a set of architectural capabilities that most financial institutions do not currently possess. These are engineering systems, not compliance tools:

Dependency-aware asset management. A continuously updated graph of ICT assets, business functions, vendor relationships, and the dependencies between them. Queryable for concentration risk, impact analysis, and exit planning. Fed by automated discovery and change detection, not manual data entry.

Event-driven risk monitoring. A system that ingests operational events (vendor incidents, CVE disclosures, regulatory actions, threat intelligence, internal alerts), classifies them by type and severity, and updates the institution's risk posture automatically. The output feeds the board-level dashboards that Article 5(4) implicitly requires.

Automated incident classification. Real-time severity scoring against DORA's Article 19 criteria, with pre-built regulatory reporting templates and escalation logic that triggers notification workflows within the four-hour window.

Continuous testing infrastructure. A testing programme that supports multiple methodologies (vulnerability scanning, penetration testing, scenario-based testing, source code review) on a rolling schedule against all systems supporting critical functions. Integration with the risk management framework so that test findings automatically update risk assessments.

TLPT coordination capability. For designated institutions: established relationships with qualified external testers, contractual frameworks enabling production testing of outsourced services, and internal coordination capabilities for managing red team operations without prior notification to the blue team.

Cross-silo data integration. A unified view of vendor, infrastructure, threat, and compliance data. The teams can remain separate. The data cannot. Risk assessment that requires manual aggregation across four departments will always be too slow for the continuous monitoring that DORA demands.

These capabilities share a common property: they connect information that currently sits in silos into a model that can answer questions about cross-domain risk. DORA's five pillars are not independent compliance domains. They are facets of a single resilience architecture that must be connected to function.

The gap between compliant and resilient

DORA exists because the regulators recognised what the 2017 NotPetya attack, the 2020 SolarWinds compromise, and the 2024 CrowdStrike outage each demonstrated: the financial sector's operational resilience was built on assumptions that no longer hold. The assumptions that IT risks can be managed through capital allocation. That vendor relationships are stable and substitutable. That annual testing reveals the vulnerabilities that matter. That incident response plans survive contact with real incidents.

Institutions that treat DORA as an architecture problem rather than a compliance problem will build systems that are genuinely resilient. They will know their dependencies before a vendor fails. They will detect incidents before they cascade. They will test against realistic adversaries on live production systems. They will maintain a model of their risk posture that updates as the threat environment changes.

Institutions that treat DORA as a checklist will produce documentation that satisfies auditors and systems that fail under stress. The gap between compliant and resilient is precisely where the next operational failure will occur.

Roach exists because that gap is real, measurable, and exploitable. Eryndal exists because the regulatory landscape that defines resilience obligations is too vast and too interconnected for human analysts to traverse unaided. Together, they represent the kind of architectural response that DORA demands: continuous, integrated, dependency-aware, and honest about what the institution does and does not know.

DORA exists because the threat environment has changed and the old tools are inadequate. The regulation's genius is that it prescribes capabilities rather than specific technologies or architectures. How you build those capabilities is an engineering decision. That you build them is now a legal obligation.

References

  1. Regulation (EU) 2022/2554 of the European Parliament and of the Council of 14 December 2022 on digital operational resilience for the financial sector. eur-lex.europa.eu

  2. Commission Delegated Regulation (EU) 2025/1190 supplementing Article 26 of DORA, specifying Regulatory Technical Standards for Threat-Led Penetration Testing. Published 18 June 2025, applicable from 8 July 2025.

  3. BaFin (2025). Registers of Information and Notification Requirements: Identifying Concentrations in IT Services. bafin.de. Guidance on post-submission obligations for the Article 28(3) register of information.

  4. Crosignani, M., Macchiavelli, M. & Silva, A.F. (2023). Pirates without Borders: The Propagation of Cyberattacks through Firms' Supply Chains. Journal of Financial Economics, 147(2), 432-448. sciencedirect.com. Empirical analysis of NotPetya's supply chain propagation and downstream financial effects.

  5. DORA Article-by-Article Reference. digital-operational-resilience-act.com. Complete final text of all DORA articles with commentary.