Governance Failure Axes Taxonomy
Demonstrating the Generality of Governance Failure Axes: Cross-Domain Projection of AI and Organisational Infarctions
Abstract
Governance failures in artificial intelligence systems, democratic processes, and organisational initiatives are commonly treated as domain-specific pathologies, addressed through localised controls, policies, or technical safeguards. This paper argues instead that such failures are structurally homologous and can be diagnosed using a stable, domain-independent set of governance dimensions. Using the Identified Governance Failure Axes as a normative framework, we project empirically documented infarctions and actions from corpus and external, non-corpus sources—including AI deployment failures, automated decision-making breakdowns, and failed process improvement projects—onto the axes using a disciplined semantic methodology. The resulting projections show consistent axis activation patterns across otherwise unrelated contexts, demonstrating that the axes are neither AI-specific nor organisationally contingent. This cross-domain applicability establishes the axes as a general-purpose diagnostic instrument for governance failure in complex socio-technical systems, enabling systematic analysis, comparison, and repair independent of implementation technology or institutional setting.
1. Introduction
Governance failures are routinely analysed as localised breakdowns: a malfunctioning algorithm, a flawed policy, an inattentive manager, or an insufficient control. In both academic and operational contexts, failure is most often attributed to deficiencies internal to a specific domain—technical error in AI systems, institutional weakness in democratic processes, or managerial shortcomings in organisational change initiatives. This tendency fragments diagnosis and obscures the structural regularities that recur across otherwise unrelated systems.
In parallel, the increasing deployment of artificial intelligence into high-consequence settings has intensified concern about accountability, authority, and control. Public discourse and policy responses have largely focused on AI-specific risks—hallucination, bias, opacity, or automation error—treating these as novel phenomena requiring bespoke regulatory or technical remedies. Similar patterns are visible in organisational governance, where repeated project failures are explained through contextual factors such as resistance to change, skills deficits, or cultural misalignment, rather than through a shared structural lens.
This paper starts from a different premise: that governance failures are not primarily domain-specific events, but expressions of a small and stable set of underlying structural breakdowns. When examined at the level of authority allocation, epistemic custody, constraint enforcement, recovery capability, and normative stability, failures in AI systems, democratic institutions, and organisational projects exhibit strikingly similar patterns. These patterns are obscured when analysis remains confined to domain vocabulary or implementation detail.
To make this claim operational, the paper employs the Identified Governance Failure Axes as a normative diagnostic framework. The axes define governance not as compliance or management practice, but as the structural conditions under which authority, agency, knowledge, action, and repair are coherently exercised. Each axis names a specific dimension along which governance can fail, independent of technology, organisational form, or institutional context.
The contribution of this work is not the proposal of new axes, but the demonstration of their generality. Using published case studies and analyses drawn from both corpus and external sources, this paper projects concrete infarctions and actions onto the axes using a disciplined semantic methodology. Each projection marks only those axes for which support is present in inference space, avoiding post hoc reinterpretation and theoretical inflation.
2. Identified Governance Failure Axes
2.1 Normative Definition of Governance
Governance is the structural arrangement by which authority, agency, knowledge, action, and repair are legitimately exercised over time. Governance is distinct from management, optimisation, or intelligence. A system may perform effectively while being poorly governed, and conversely may be well governed while performing suboptimally. Governance concerns who is authorised to decide, what constrains action, how knowledge is held and transferred, and how failure is repaired without loss of legitimacy.
This definition applies equally to technical systems, organisations, and institutions.
2.2 The Axes and Their Verbatim Headings
- A — Authority
- Ag — Agency
- C — Epistemic Custody
- K — Constraint Enforcement
- R — Recovery / Repair
- S — State Continuity
- U — UI / Mediation
- Sc — Social Coordination
- I — Incentive Alignment
- L — Legibility / Inspectability
- St — Stewardship (non-ownership governance)
- P — Portability / Auditability
- Att — Attention (what participates in inference)
- Scope — Scope (Epistemic Object Domain)
- T — Temporal Coherence
- Int — Intent Fidelity
- Nf — Normative Fixity
2.3 Normative Ordering of the Axes
The axes are applied in a fixed normative order reflecting structural dependency rather than importance. This ordering is preserved in all projection artefacts.
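To make this concrete, the following minimal sketch encodes the axes of Section 2.2 in the normative order of Section 2.3. Python is assumed here, and all identifiers are illustrative rather than part of the framework.

```python
from enum import Enum

class Axis(Enum):
    """The seventeen governance failure axes (Section 2.2).

    Declaration order encodes the normative ordering of Section 2.3:
    projections apply axes in this fixed order of structural dependency,
    never in order of importance.
    """
    A = "Authority"
    AG = "Agency"
    C = "Epistemic Custody"
    K = "Constraint Enforcement"
    R = "Recovery / Repair"
    S = "State Continuity"
    U = "UI / Mediation"
    SC = "Social Coordination"
    I = "Incentive Alignment"
    L = "Legibility / Inspectability"
    ST = "Stewardship (non-ownership governance)"
    P = "Portability / Auditability"
    ATT = "Attention (what participates in inference)"
    SCOPE = "Scope (Epistemic Object Domain)"
    T = "Temporal Coherence"
    INT = "Intent Fidelity"
    NF = "Normative Fixity"

# Enum preserves declaration order, so iteration yields the normative order.
NORMATIVE_ORDER = list(Axis)
```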
2.4 Non-Substitutability and Independence
No axis can compensate for failure in another. Transparency does not restore authority; recovery does not legitimise unauthorised action. Each axis represents an independent governance obligation.
2.5 Scope and Generality
The axes describe governance only; they make no claims about performance, optimisation, or intelligence. Their generality arises from abstraction over implementation detail and institutional form, without loss of structural specificity.
3. Methodology: Infarction Projection Across Governance Axes
3.1 Source Selection and Domain Boundary
The Identified Governance Failure Axes are defined normatively and applied uniformly across domains. Projections in this paper draw on two source classes: the corpus, which contains empirically documented AI governance failures, and external, non-corpus sources spanning AI deployment, democratic institutions, and organisational/process governance. All projections, regardless of domain or source class, are subject to the same evidentiary discipline and marking rules. No axis definitions are altered or reinterpreted on a per-domain basis.
3.2 Infarction and Action Identification
An infarction is a discrete governance breakdown resulting in loss of legitimacy, control, recoverability, or normative stability. An action is a deliberate behaviour that precipitates or entrenches such breakdown. Infarctions are identified through semantic analysis of source material in inference space.
Each infarction is recorded as a short, neutral description that avoids axis terminology.
3.3 Evidence Discipline and F-Marking Rules
An axis is marked failed (F) only where semantic analysis of the source supports a finding that the corresponding governance dimension was absent, violated, or rendered ineffective. No axis is marked by analogy, inferred intent, or assumed best practice. Absence of semantic support results in absence of marking. Multiple axes may be marked for a single infarction, but each marking must be independently supported in inference space by the source material.
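The marking discipline can be illustrated in code. The sketch below continues the illustrative Python of Section 2.3 (the Axis enum and NORMATIVE_ORDER); the types and field names are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Marking:
    """One F-mark: a single axis plus its independent semantic support."""
    axis: Axis
    evidence: str  # short quote or paraphrase from the source material

    def __post_init__(self):
        # Rule 3.3: absence of semantic support means absence of marking.
        if not self.evidence.strip():
            raise ValueError(f"axis {self.axis.name} marked without support")

@dataclass
class Infarction:
    """A discrete governance breakdown (Section 3.2)."""
    description: str                          # neutral wording, no axis terminology
    reference: str                            # e.g. "Ref. 3"
    markings: list[Marking] = field(default_factory=list)

    def failed_axes(self) -> list[Axis]:
        """Marked axes, reported in normative order (Section 3.4)."""
        marked = {m.axis for m in self.markings}
        return [axis for axis in NORMATIVE_ORDER if axis in marked]
```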
3.4 Normative Ordering and Non-Substitutability
Axes are applied in normative order and cannot substitute for one another.
3.5 Projection Output and Validation
The output is a projection table mapping each infarction to the axes marked failed. Validation consists of two checks: internal consistency, in which every F-mark is traceable to its own semantic support, and reproducibility, in which reapplication of the methodology to the same sources yields the same markings.
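Continuing the same illustrative sketch, a reproducibility check might compare two independent applications of the methodology to the same sources:

```python
def projection_table(infarctions: list[Infarction]) -> dict:
    """Collapse a projection run into {description: ordered axis codes}."""
    return {i.description: tuple(a.name for a in i.failed_axes())
            for i in infarctions}

def unreproduced(run_a: list[Infarction], run_b: list[Infarction]) -> set:
    """Descriptions whose markings differ between two runs (Section 3.5)."""
    a, b = projection_table(run_a), projection_table(run_b)
    return {k for k in set(a) | set(b) if a.get(k) != b.get(k)}
```

An empty result under reapplication is the reproducibility criterion; any non-empty set identifies projections requiring re-examination.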
4. Cross-Domain Projections of Governance Failure
4.1 AI System Deployment Failures
External AI case studies demonstrate recurring failures of authority, epistemic custody, constraint enforcement, and normative fixity, often amplified by absent recovery mechanisms.
4.2 Automated Decision-Making and Institutional Breakdown
Institutional deference to automated outputs without preserved agency or stewardship results in authority inversion and incentive misalignment.
4.3 Organisational and Process Improvement Project Failures
Failed organisational initiatives exhibit governance failures structurally indistinguishable from those observed in AI systems, despite absence of automation.
4.4 Comparative Axis Activation Patterns
| # | Infarction / Action (Ref.) | A | Ag | C | K | R | S | U | Sc | I | L | St | P | Att | Scope | T | Int | Nf |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Autonomous agent outputs treated as institutionally authoritative without human override (Ref. 1) | F | F | F | F | F | F | F | F | | | | | | | | | |
| 2 | Delegation of democratic deliberation tasks to AI agents without legitimacy or mandate (Ref. 1) | F | F | F | F | F | F | F | F | | | | | | | | | |
| 3 | LLM-generated literature screening accepted as authoritative without epistemic custody controls (Ref. 2) | F | F | F | F | F | | | | | | | | | | | | |
| 4 | Automated data extraction errors propagated into published reviews without repair mechanisms (Ref. 2) | F | F | F | F | | | | | | | | | | | | | |
| 5 | Chatbot-generated legal commitments enforced against users without contractual authority (Ref. 4) | F | F | F | F | F | F | F | | | | | | | | | | |
| 6 | Healthcare services denied based on automated outputs with no appeal or explanation pathway (Ref. 3) | F | F | F | F | F | F | F | F | | | | | | | | | |
| 7 | Prompt exploitation produced binding commercial offers due to absent constraint enforcement (Ref. 3) | F | F | F | F | F | F | | | | | | | | | | | |
| 8 | Safety-critical instructions hallucinated without scope or constraint enforcement (Ref. 4) | F | F | F | F | | | | | | | | | | | | | |
| 9 | Process improvement initiatives launched without executive mandate or stewardship (Ref. 5) | F | F | F | F | F | | | | | | | | | | | | |
| 10 | Project failures attributed to staff resistance rather than governance design flaws (Ref. 5) | F | F | F | | | | | | | | | | | | | | |
| 11 | Payroll staff were paid incorrect or no wages for extended periods after go-live because authority for correction and repair was fragmented across vendor, department, and executive layers (Ref. 6) | F | F | F | F | F | F | F | F | F | F | | | | | | | |
| 12 | Individuals were prosecuted and financially ruined based on system outputs that could not be challenged, inspected, or independently audited, while the organisation denied the possibility of system error (Ref. 7) | F | F | F | F | F | F | F | F | | | | | | | | | |
| 13 | Executives approved accounting structures that obscured losses and liabilities while oversight mechanisms failed to intervene or correct misrepresentation (Ref. 8) | F | F | F | F | F | F | | | | | | | | | | | |
| 14 | Organisational leadership concealed long-running losses and suppressed disclosure to auditors and regulators, preventing timely governance intervention (Ref. 9) | F | F | F | F | F | F | | | | | | | | | | | |
| 15 | Safety-critical product decisions proceeded while safety risk information was withheld from operational stakeholders and effective override/recovery pathways were not provided (Ref. 10) | F | F | F | F | F | F | F | | | | | | | | | | |
| 16 | Financial risk was rolled forward through opaque instruments and accounting practices that prevented coherent temporal assessment, timely intervention, and auditability (Ref. 11) | F | F | F | F | F | F | F | | | | | | | | | | |
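As an illustration of how activation patterns can be compared across rows, the following sketch (same illustrative Python as earlier sections; the row data here is a stand-in, not a transcription of the table) tallies F-marks per axis in normative order:

```python
from collections import Counter

def activation_profile(rows: list[tuple[str, tuple[str, ...]]]):
    """Count F-marks per axis across projection rows, in normative order.

    Each row is (infarction description, codes of axes marked F);
    codes follow the enum names from the Section 2 sketch.
    """
    counts = Counter(code for _, codes in rows for code in codes)
    return [(axis.name, counts[axis.name]) for axis in NORMATIVE_ORDER]

# Illustrative rows only; see the table above for the actual projections.
rows = [
    ("Agent outputs treated as authoritative", ("A", "AG", "C", "K")),
    ("Failures blamed on staff resistance", ("A", "AG", "C")),
]
for code, n in activation_profile(rows):
    if n:
        print(f"{code}: {n}")
```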
4.5 Summary
Cross-domain projections demonstrate that governance failures consistently activate the same axes across unrelated domains. In every projected case the leading axes of Authority, Agency, and Epistemic Custody are marked, while later axes activate selectively with context.
5. Implications for Governance Design, Audit, and Repair
Traditional governance design is requirements-driven and reactive. An axis-based approach reframes design as coverage: for each axis, a system must make explicit how authority is assigned, how agency is bounded, how knowledge is held, how constraints are enforced, and how recovery is enabled. Audit becomes projection rather than compliance: incidents are mapped across axes to reveal structural failure patterns. Repair and recovery become first-class governance functions: systems designed with explicit recovery, state continuity, and normative fixity pathways can fail locally without cascading into systemic breakdown.
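As a hedged sketch of design as coverage, again on the illustrative types above, a design review can walk the axes in normative order and report any axis for which no explicit mechanism is declared:

```python
def coverage_gaps(design: dict[str, str]) -> list[str]:
    """Axes for which a design declares no explicit mechanism (Section 5).

    `design` maps axis codes to a short statement of how the system
    addresses that axis, e.g. {"A": "charter assigns decision sign-off"}.
    """
    return [axis.name for axis in NORMATIVE_ORDER
            if not design.get(axis.name, "").strip()]
```

Because no axis substitutes for another (Section 2.4), a non-empty gap list cannot be argued away by strength elsewhere; each gap is its own design obligation.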
6. Discussion: Generality, Falsifiability, and Scope
The framework is falsifiable in principle. A counterexample would require a documented governance failure that cannot be meaningfully expressed as failure on any axis, without redefining governance itself. The axes diagnose structural failure conditions rather than provide causal narratives. Semantic analysis in inference space enables cross-domain detection of structural governance breakdowns, provided evidentiary discipline is maintained.
7. Conclusion
Governance failure is structurally patterned and portable across domains. By projecting externally documented infarctions onto the Identified Governance Failure Axes, this paper demonstrates a stable diagnostic coordinate space for governance breakdowns independent of technology, institution, or organisational form. Axis-based diagnosis enables coherent design, audit, and repair of complex socio-technical systems.
References

External (Non-Corpus)

- 1) Knight First Amendment Institute (2025-09-04). AI Agents and Democratic Resilience.
- 2) van Dis, E. A. M., et al. (2023). The emergence of large language models as tools in literature reviews: a large language model-assisted systematic review.
- 3) Monte Carlo Data (2025-11-11). 4 Famous AI Fails (& How To Avoid Them).
- 4) Evidently AI (2024-09-17). When AI goes wrong: 13 examples of AI mistakes and failures.
- 5) Antony, J., et al. (2019). A study into the reasons for process improvement project failures: results from a pilot survey.

Corpus

- 6) Holland, R. B. (2025-12-28T10:14Z). Identified Governance Failure Axes: for LLM platforms.
- 7) Holland, R. B. (2025-12-30T08:41Z). Authority Inversion: A Structural Failure in Human-AI Systems.
- 8) Holland, R. B. (2026-01-02T07:55Z). Governing the Tool That Governs You: A CM-1 Case Study of Authority Inversion in Human-AI Systems.
- 9) Holland, R. B. (2026-01-04T09:20Z). Integrity and Semantic Drift in Large Language Model Systems.
- 10) Holland, R. B. (2026-01-08T13:47Z). Cognitive Memoisation (CM-2) for Governing Knowledge in Human-AI Collaboration.
- 11) Holland, R. B. (2026-01-10T06:32Z). Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction.
- 12) Holland, R. B. (2026-01-15T08:12Z). Axes of Authority in Stateless Cognitive Systems: Authority Is Not Intelligence.
- 13) Holland, R. B. (2026-01-16T10:44Z). Durability Without Authority: The Missing Governance Layer in Human-AI Collaboration.
- 14) Holland, R. B. (2026-01-17T07:36Z). Dimensions of Platform Error: Epistemic Retention Failure in Conversational AI Systems.
- 15) Holland, R. B. (2026-01-18T12:01Z). Case Study - When the Human Has to Argue With the Machine.
- 16) Holland, R. B. (2026-01-19T00:26Z). What Can Humans Trust LLM AI to Do?
- 17) Holland, R. B. (2026-01-24T02:02Z). When Evidence Is Not Enough: An Empirical Study of Authority Inversion and Integrity Failure in Conversational AI.