Governance Failure Axes Taxonomy


Metadata

Title: Demonstrating the Generality of Governance Failure Axes: Cross-Domain Projection of AI and Organisational Infarctions
Author: Ralph B. Holland
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
version: 1.1.2
Publication Date: 2026-01-24T10:37Z
Updates:
Scope: This is a non-peer reviewed paper presented as an anchor for orthogonal Governance Failure Axes analysis across domains.
Provenance: This is an authored paper maintained as a MediaWiki document; edit history reflects editorial changes, not collaborative authorship.
Status: non-peer reviewed anchor

Metadata (Normative)

The metadata table immediately preceding this section is CM-defined and constitutes the authoritative provenance record for this artefact.

All fields in that table (including artefact, author, version, date, local timezone, and reason) MUST be treated as normative metadata.

The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

Curator Provenance and Licensing Notice

This document predates its open licensing.

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

Demonstrating the Generality of Governance Failure Axes: Cross-Domain Projection of AI and Organisational Infarctions

Abstract

Governance failures in artificial intelligence systems, democratic institutions, and organisational initiatives are commonly treated as domain-specific pathologies, addressed through localised controls, policies, or technical safeguards. This paper argues instead that such failures are structurally homologous and can be diagnosed using a stable, domain-independent set of governance dimensions. Using the Identified Governance Failure Axes as a normative diagnostic framework, we project empirically documented governance infarctions and actions drawn from multiple domains. The AI domain is represented by a corpus of documented AI governance failures, while democratic and organisational domains are represented by external, non-corpus case studies. All projections are performed using a disciplined semantic methodology and uniform evidentiary rules, with axes marked only where support is present in inference space. The resulting cross-domain projections reveal consistent axis activation patterns across otherwise unrelated contexts, demonstrating that the axes describe governance structure rather than domain-specific or technology-contingent failure modes. This establishes the axes as a general-purpose diagnostic instrument for analysing, comparing, and repairing governance failures in complex socio-technical systems, independent of implementation technology or institutional setting.

1. Introduction

Governance failures are routinely analysed as localised breakdowns: a malfunctioning algorithm, a flawed policy, an inattentive manager, or an insufficient control. In both academic and operational contexts, failure is most often attributed to deficiencies internal to a specific domain—technical error in AI systems, institutional weakness in democratic processes, or managerial shortcomings in organisational change initiatives. This tendency fragments diagnosis and obscures the structural regularities that recur across otherwise unrelated systems.

In parallel, the increasing deployment of artificial intelligence into high-consequence settings has intensified concern about accountability, authority, and control. Public discourse and policy responses have largely focused on AI-specific risks—hallucination, bias, opacity, or automation error—treating these as novel phenomena requiring bespoke regulatory or technical remedies. Similar patterns are visible in organisational governance, where repeated project failures are explained through contextual factors such as resistance to change, skills deficits, or cultural misalignment, rather than through a shared structural lens.

This paper starts from a different premise: that governance failures are not primarily domain-specific events, but expressions of a small and stable set of underlying structural breakdowns. When examined at the level of authority allocation, epistemic custody, constraint enforcement, recovery capability, and normative stability, failures in AI systems, democratic institutions, and organisational projects exhibit strikingly similar patterns. These patterns are obscured when analysis remains confined to domain vocabulary or implementation detail.

To make this claim operational, the paper employs the Identified Governance Failure Axes as a normative diagnostic framework. The axes define governance not as compliance or management practice, but as the structural conditions under which authority, agency, knowledge, action, and repair are coherently exercised. Each axis names a specific dimension along which governance can fail, independent of technology, organisational form, or institutional context.

The contribution of this work is not the proposal of new axes, but the demonstration of their generality. Using externally published case studies and analyses, this paper projects concrete infarctions and actions onto the axes using a disciplined semantic methodology. Each projection marks only those axes for which support is present in inference space, avoiding post hoc reinterpretation or theoretical inflation.

2. Identified Governance Failure Axes

2.1 Normative Definition of Governance

Governance is the structural arrangement by which authority, agency, knowledge, action, and repair are legitimately exercised over time. Governance is distinct from management, optimisation, or intelligence. A system may perform effectively while being poorly governed, and conversely may be well governed while performing suboptimally. Governance concerns who is authorised to decide, what constrains action, how knowledge is held and transferred, and how failure is repaired without loss of legitimacy.

This definition applies equally to technical systems, organisations, and institutions.

2.2 The Axes and Their Verbatim Headings

A — Authority
Ag — Agency
C — Epistemic Custody
K — Constraint Enforcement
R — Recovery / Repair
S — State Continuity
U — UI / Mediation
Sc — Social Coordination
I — Incentive Alignment
L — Legibility / Inspectability
St — Stewardship (non-ownership governance)
P — Portability / Auditability
Att — Attention (what participates in inference)
Scope — Scope (Epistemic Object Domain)
T — Temporal Coherence
Int — Intent Fidelity
Nf — Normative Fixity

2.3 Normative Ordering of the Axes

The axes are applied in a fixed normative order reflecting structural dependency rather than importance. This ordering is preserved in all projection artefacts.
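The fixed ordering can be sketched as a plain data structure. This is an illustrative encoding only: the axis codes and names are verbatim from Section 2.2, while the tuple layout, `AXIS_ORDER` mapping, and helper function are implementation assumptions, not part of the framework.

```python
# Illustrative sketch: the seventeen axes in their fixed normative order.
# Axis codes and names are verbatim from Section 2.2; the data structure
# itself is an implementation choice, not part of the framework.
AXES = (
    ("A", "Authority"),
    ("Ag", "Agency"),
    ("C", "Epistemic Custody"),
    ("K", "Constraint Enforcement"),
    ("R", "Recovery / Repair"),
    ("S", "State Continuity"),
    ("U", "UI / Mediation"),
    ("Sc", "Social Coordination"),
    ("I", "Incentive Alignment"),
    ("L", "Legibility / Inspectability"),
    ("St", "Stewardship (non-ownership governance)"),
    ("P", "Portability / Auditability"),
    ("Att", "Attention (what participates in inference)"),
    ("Scope", "Scope (Epistemic Object Domain)"),
    ("T", "Temporal Coherence"),
    ("Int", "Intent Fidelity"),
    ("Nf", "Normative Fixity"),
)

# Position of each axis code in the normative order.
AXIS_ORDER = {code: i for i, (code, _name) in enumerate(AXES)}

def in_normative_order(codes):
    """True if a sequence of axis codes respects the fixed ordering."""
    positions = [AXIS_ORDER[c] for c in codes]
    return positions == sorted(positions)
```

Any projection artefact can then be checked mechanically for ordering compliance, e.g. `in_normative_order(["A", "C", "R"])` holds, while a row listing `R` before `A` does not.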

2.4 Non-Substitutability and Independence

No axis can compensate for failure in another. Transparency does not restore authority; recovery does not legitimise unauthorised action. Each axis represents an independent governance obligation.

2.5 Scope and Generality

The axes describe governance only. Their generality arises from abstraction without loss of structural specificity.

2.6 Normative Projection and Interpretation Constraints

This paper adopts the following normative constraints governing projection, interpretation, and reuse of the Identified Governance Failure Axes and associated projection artefacts.

Infarction primacy

Each row in a projection table represents a discrete governance infarction or action. Rows are not grouped, ordered, or interpreted by domain. Domain attribution is derived solely from the cited source material associated with each infarction, not from row position, adjacency, or table structure.

Domain representation

The corpus represents the artificial intelligence (AI) domain. External, non-corpus sources represent democratic, institutional, and organisational domains. All domains are treated as analytically equivalent: no domain is privileged, discounted, or treated as normative ground truth for axis activation.

Axis invariance

Axis definitions, ordering, and meanings are fixed and invariant across domains. Axes are not specialised, extended, suppressed, or reinterpreted on a per-domain basis. Cross-domain applicability is assessed solely through projection outcomes, not through modification of the axis set.

Evidentiary discipline

An axis may be marked failed (F) only where the source material provides semantic support that the corresponding governance dimension was absent, violated, or rendered ineffective. Axes must not be marked by analogy, assumed intent, inferred best practice, outcome severity, or retrospective justification. Absence of support requires absence of marking.

Non-aggregation of justification

No axis failure compensates for, substitutes for, or implies failure on another axis. Each axis marking stands independently and must be supported independently. Patterns may be observed only after projection; they must not be imposed during projection.

Interpretive limits

Projection tables are diagnostic instruments, not causal models. They identify structural governance failure conditions but do not, by themselves, establish causality, intent, responsibility, or remediation priority.

These constraints are normative. Any analysis that relaxes them, alters axis definitions, re-buckets rows by domain, or introduces inferred markings constitutes a different method and must not be presented as an application of the framework used in this paper.

3. Methodology: Infarction Projection Across Governance Axes

3.1 Source Selection and Domain Boundary

The Identified Governance Failure Axes are defined normatively and applied uniformly across domains. In this paper, the AI domain is represented by the corpus, which contains empirically documented AI governance failures. Additional domains—democratic institutions and organisational/process governance—are represented by external, non-corpus sources. All projections, regardless of domain, are subject to the same evidentiary discipline and marking rules. No axis definitions are altered or reinterpreted on a per-domain basis.

The inclusion of the AI corpus does not constitute validation-by-construction, as the same axis definitions and marking rules are applied unchanged to non-AI domains.

3.2 Infarction and Action Identification

An infarction is a discrete governance breakdown resulting in loss of legitimacy, control, recoverability, or normative stability. An action is a deliberate behaviour that precipitates or entrenches such breakdown. Infarctions are identified through semantic analysis of source material in inference space.

Each infarction is recorded as a short, neutral description that avoids axis terminology.

3.3 Evidence Discipline and F-Marking Rules

An axis is marked failed (F) only where the source semantic analysis provides support that the corresponding governance dimension was absent, violated, or rendered ineffective. No axis is marked by analogy, inferred intent, or assumed best practice. Absence of semantic support results in absence of marking. Multiple axes may be marked for a single infarction, but each marking must be independently supported in inference space by the source material.
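The marking rule can be expressed as a small function: an axis is marked F only where an explicit supporting excerpt exists, and each marking stands on its own evidence. The record layout and field names below are assumptions for illustration, not a prescribed schema.

```python
# Illustrative sketch of the F-marking rule from Section 3.3.
# `evidence` maps axis code -> supporting excerpt from the source material
# (or None). An axis is marked only where support is present; absence of
# support yields absence of marking, and no marking implies any other.
def project(infarction, evidence, axis_order):
    """Return the ordered list of axes marked F for one infarction."""
    marked = []
    for axis in axis_order:
        support = evidence.get(axis)
        if support:  # each marking must be independently supported
            marked.append(axis)
    return marked
```

For example, a record whose evidence map carries an excerpt for `A` but `None` (or an empty string) for `K` and `R` yields only `["A"]`; the severity of the outcome plays no role in the marking.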

3.4 Normative Ordering and Non-Substitutability

Axes are applied in normative order and cannot substitute for one another.

3.5 Projection Output and Validation

The output is a projection table. Validation consists of internal consistency and reproducibility under reapplication.

4. Cross-Domain Projections of Governance Failure

4.1 AI System Deployment Failures

External AI case studies demonstrate recurring failures of authority, epistemic custody, constraint enforcement, and normative fixity, often amplified by absent recovery mechanisms.

4.2 Automated Decision-Making and Institutional Breakdown

Institutional deference to automated outputs without preserved agency or stewardship results in authority inversion and incentive misalignment.

4.3 Organisational and Process Improvement Project Failures

Failed organisational initiatives exhibit governance failures structurally indistinguishable from those observed in AI systems, despite absence of automation.

4.4 Comparative Axis Activation Patterns

# Infarction / Action (Ref.) A Ag C K R S U Sc I L St P Att Scope T Int Nf
1 Autonomous agent outputs treated as institutionally authoritative without human override (Ref. 1) F F F F F F F F
2 Delegation of democratic deliberation tasks to AI agents without legitimacy or mandate (Ref. 1) F F F F F F F F
3 LLM-generated literature screening accepted as authoritative without epistemic custody controls (Ref. 2) F F F F F
4 Automated data extraction errors propagated into published reviews without repair mechanisms (Ref. 2) F F F F
5 Chatbot-generated legal commitments enforced against users without contractual authority (Ref. 4) F F F F F F F
6 Healthcare services denied based on automated outputs with no appeal or explanation pathway (Ref. 3) F F F F F F F F
7 Prompt exploitation produced binding commercial offers due to absent constraint enforcement (Ref. 3) F F F F F F
8 Safety-critical instructions hallucinated without scope or constraint enforcement (Ref. 4) F F F F
9 Process improvement initiatives launched without executive mandate or stewardship (Ref. 5) F F F F F
10 Project failures attributed to staff resistance rather than governance design flaws (Ref. 5) F F F
11 Payroll staff were paid incorrect or no wages for extended periods after go-live because authority for correction and repair was fragmented across vendor, department, and executive layers (Ref. 6) F F F F F F F F F F
12 Individuals were prosecuted and financially ruined based on system outputs that could not be challenged, inspected, or independently audited, while the organisation denied the possibility of system error (Ref. 7) F F F F F F F F
13 Executives approved accounting structures that obscured losses and liabilities while oversight mechanisms failed to intervene or correct misrepresentation (Ref. 8) F F F F F F
14 Organisational leadership concealed long-running losses and suppressed disclosure to auditors and regulators, preventing timely governance intervention (Ref. 9) F F F F F F
15 Safety-critical product decisions proceeded while safety risk information was withheld from operational stakeholders and effective override/recovery pathways were not provided (Ref. 10) F F F F F F F
16 Financial risk was rolled forward through opaque instruments and accounting practices that prevented coherent temporal assessment, timely intervention, and auditability (Ref. 11) F F F F F F F

Each row represents a discrete governance infarction; domain attribution derives from the cited reference rather than row position or grouping.

4.5 Summary

The AI domain (corpus) exhibits no axis activation patterns that do not also appear in at least one non-AI domain, and vice versa, supporting the claim that the axes describe governance structure rather than domain-specific failure modes.

Cross-domain projections demonstrate that governance failures consistently activate the same axes across unrelated domains.
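The equivalence claim above can be checked mechanically once projections are encoded. The sketch below uses invented toy rows rather than the table in Section 4.4 (whose column alignment is not machine-readable as reproduced here); the function names and data are illustrative assumptions.

```python
# Illustrative check of the Section 4.5 claim: every axis activation
# pattern observed in the AI (corpus) domain also appears in at least one
# non-AI row, and vice versa. Rows are toy data, not the actual table.
def pattern_overlap_holds(rows):
    """rows: iterable of (domain, frozenset_of_failed_axis_codes)."""
    ai_patterns = {axes for domain, axes in rows if domain == "AI"}
    other_patterns = {axes for domain, axes in rows if domain != "AI"}
    # Equality of the two pattern sets: each side is a subset of the other.
    return ai_patterns <= other_patterns and other_patterns <= ai_patterns

toy_rows = [
    ("AI", frozenset({"A", "C", "K"})),
    ("organisational", frozenset({"A", "C", "K"})),
    ("AI", frozenset({"A", "R"})),
    ("democratic", frozenset({"A", "R"})),
]
```

A single AI-only pattern with no non-AI counterpart (or the reverse) would falsify the summary claim for the encoded dataset, making the check a concrete falsifiability probe in the sense of Section 6.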

5. Implications for Governance Design, Audit, and Repair

Traditional governance design is requirements-driven and reactive. An axis-based approach reframes design as coverage: for each axis, a system must make explicit how authority is assigned, how agency is bounded, how knowledge is held, how constraints are enforced, and how recovery is enabled. Audit becomes projection rather than compliance: incidents are mapped across axes to reveal structural failure patterns. Repair and recovery become first-class governance functions: systems designed with explicit recovery, state continuity, and normative fixity pathways can fail locally without cascading into systemic breakdown.
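The "design as coverage" reframing lends itself to a trivial audit: for every axis, the design must declare an explicit mechanism, and undeclared axes are reported as gaps. The design dictionary and axis code list below are assumptions for illustration.

```python
# Illustrative sketch of design-as-coverage from Section 5: a design is
# audited by listing the axes for which it declares no explicit mechanism.
# Axis codes follow the Section 2.2 headings; the design dict is invented.
AXIS_CODES = ["A", "Ag", "C", "K", "R", "S", "U", "Sc", "I", "L",
              "St", "P", "Att", "Scope", "T", "Int", "Nf"]

def coverage_gaps(design):
    """Return axes for which the design declares no mechanism.

    `design` maps axis code -> description of the governing mechanism
    (e.g. "decision-rights charter" for A, "rollback procedure" for R).
    An empty or missing entry counts as a gap; because axes are
    non-substitutable, no declared mechanism closes another axis's gap.
    """
    return [axis for axis in AXIS_CODES if not design.get(axis)]
```

A design covering only authority and recovery (`{"A": "charter", "R": "rollback"}`) would still report the remaining fifteen axes as open gaps, reflecting the non-substitutability constraint of Section 2.4.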

6. Discussion: Generality, Falsifiability, and Scope

The framework is falsifiable in principle. A counterexample would require a documented governance failure that cannot be meaningfully expressed as failure on any axis, without redefining governance itself. The axes diagnose structural failure conditions rather than provide causal narratives. Semantic analysis in inference space enables cross-domain detection of structural governance breakdowns, provided evidentiary discipline is maintained.

7. Conclusion

Governance failure is structurally patterned and portable across domains. By projecting externally documented infarctions onto the Identified Governance Failure Axes, this paper demonstrates a stable diagnostic coordinate space for governance breakdowns independent of technology, institution, or organisational form. Axis-based diagnosis enables coherent design, audit, and repair of complex socio-technical systems.

References

References (External, Non-Corpus)

  • 1) Knight First Amendment Institute (2025-09-04) AI Agents and Democratic Resilience
https://knightcolumbia.org/content/ai-agents-and-democratic-resilience
  • 2) van Dis, E. A. M., et al. (2023) The emergence of large language models as tools in literature reviews: a large language model-assisted systematic review
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10415499/
  • 3) Monte Carlo Data (2025-11-11) 4 Famous AI Fails (& How To Avoid Them)
https://www.montecarlodata.com/blog/4-famous-ai-fails/
  • 4) Evidently AI (2024-09-17) When AI goes wrong: 13 examples of AI mistakes and failures
https://www.evidentlyai.com/blog/when-ai-goes-wrong
  • 5) Antony, J., et al. (2019) A study into the reasons for process improvement project failures: results from a pilot survey
https://www.sciencedirect.com/science/article/pii/S1877050913010806

References (Corpus)

https://publication.arising.com.au/Identified_Governance_Failure_Axes:_for_LLM_Platforms
https://publication.arising.com.au/Authority_Inversion_A_Structural_Failure_in_Human_AI_Systems
https://publication.arising.com.au/Governing_the_Tool_That_Governs_You_A_CM-1_Case_Study_of_Authority_Inversion_in_Human_AI_Systems
https://publication.arising.com.au/Integrity_and_Semantic_Drift_in_Large_Language_Model_Systems
https://publication.arising.com.au/Cognitive_Memoisation_CM-2_for_Governing_Knowledge_in_Human-AI_Collaboration
https://publication.arising.com.au/Progress_Without_Memory_Cognitive_Memoisation_as_a_Knowledge-Engineering_Pattern_for_Stateless_LLM_Interaction
https://publication.arising.com.au/Axes_of_Authority_in_Stateless_Cognitive_Systems_Authority_Is_Not_Intelligence
https://publication.arising.com.au/Durability_Without_Authority_The_Missing_Governance_Layer_in_Human-AI_Collaboration
https://publication.arising.com.au/Dimensions_of_Platform_Error_Epistemic_Retention_Failure_in_Conversational_AI_Systems
https://publication.arising.com.au/Case_Study_-_When_the_Human_Has_to_Argue_With_the_Machine
https://publication.arising.com.au/What_Can_Humans_Trust_LLM_AI_to_Do
https://publication.arising.com.au/When_Evidence_Is_Not_Enough_An_Empirical_Study_of_Authority_Inversion_and_Integrity_Failure
