Governance Failure Axes Taxonomy

Demonstrating the Generality of Governance Failure Axes: Cross-Domain Projection of AI and Organisational Infarctions

Abstract

Governance failures in artificial intelligence systems, democratic processes, and organisational initiatives are commonly treated as domain-specific pathologies, addressed through localised controls, policies, or technical safeguards. This paper argues instead that such failures are structurally homologous and can be diagnosed using a stable, domain-independent set of governance dimensions. Using the Identified Governance Failure Axes as a normative framework, we project empirically documented infarctions and actions from multiple external, non-corpus references—including AI deployment failures, automated decision-making breakdowns, and failed process improvement projects—across the axes using a disciplined, evidence-bound methodology. The resulting projections demonstrate consistent axis activation patterns across otherwise unrelated contexts, showing that the axes are neither AI-specific nor organisationally contingent. This cross-domain applicability establishes the axes as a general-purpose diagnostic instrument for governance failure in complex socio-technical systems, enabling systematic analysis, comparison, and repair independent of implementation technology or institutional setting.

1. Introduction

Governance failures are routinely analysed as localised breakdowns: a malfunctioning algorithm, a flawed policy, an inattentive manager, or an insufficient control. In both academic and operational contexts, failure is most often attributed to deficiencies internal to a specific domain—technical error in AI systems, institutional weakness in democratic processes, or managerial shortcomings in organisational change initiatives. This tendency fragments diagnosis and obscures the structural regularities that recur across otherwise unrelated systems.

In parallel, the increasing deployment of artificial intelligence into high-consequence settings has intensified concern about accountability, authority, and control. Public discourse and policy responses have largely focused on AI-specific risks—hallucination, bias, opacity, or automation error—treating these as novel phenomena requiring bespoke regulatory or technical remedies. Similar patterns are visible in organisational governance, where repeated project failures are explained through contextual factors such as resistance to change, skills deficits, or cultural misalignment, rather than through a shared structural lens.

This paper starts from a different premise: that governance failures are not primarily domain-specific events, but expressions of a small and stable set of underlying structural breakdowns. When examined at the level of authority allocation, epistemic custody, constraint enforcement, recovery capability, and normative stability, failures in AI systems, democratic institutions, and organisational projects exhibit strikingly similar patterns. These patterns are obscured when analysis remains confined to domain vocabulary or implementation detail.

To make this claim operational, the paper employs the Identified Governance Failure Axes as a normative diagnostic framework. The axes define governance not as compliance or management practice, but as the structural conditions under which authority, agency, knowledge, action, and repair are coherently exercised. Each axis names a specific dimension along which governance can fail, independent of technology, organisational form, or institutional context.

The contribution of this work is not the proposal of new axes, but the demonstration of their generality. Using externally published case studies and analyses, this paper projects concrete infarctions and actions onto the axes using a disciplined semantic methodology. Each projection marks only those axes for which support is present in inference space, avoiding post hoc reinterpretation or theoretical inflation.

2. Identified Governance Failure Axes

2.1 Normative Definition of Governance

In this paper, governance is defined as the structural arrangement by which authority, agency, knowledge, action, and repair are legitimately exercised over time. Governance is distinct from management, optimisation, or intelligence. A system may perform effectively while being poorly governed, and conversely may be well governed while performing suboptimally. Governance concerns who is authorised to decide, what constrains action, how knowledge is held and transferred, and how failure is repaired without loss of legitimacy.

This definition applies equally to technical systems, organisations, and institutions.

2.2 The Axes and Their Verbatim Headings

The Identified Governance Failure Axes enumerate the distinct dimensions along which governance can break down. Each axis is used verbatim and without reinterpretation.

A — Authority
Ag — Agency
C — Epistemic Custody
K — Constraint Enforcement
R — Recovery / Repair
S — State Continuity
U — UI / Mediation
Sc — Social Coordination
I — Incentive Alignment
L — Legibility / Inspectability
St — Stewardship (non-ownership governance)
P — Portability / Auditability
Att — Attention (what participates in inference)
Scope — Scope (Epistemic Object Domain)
T — Temporal Coherence
Int — Intent Fidelity
Nf — Normative Fixity
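
Read operationally, the enumeration admits a direct machine-readable form. The Python sketch below is illustrative only: the name GOVERNANCE_AXES and the container choice are assumptions of this presentation, while the codes and headings are verbatim and appear in the normative order of Section 2.3.

    from collections import OrderedDict

    # Illustrative sketch: axis codes and headings are verbatim from
    # Section 2.2; the structure itself is an assumption, not part of
    # the framework.
    GOVERNANCE_AXES = OrderedDict([
        ("A", "Authority"),
        ("Ag", "Agency"),
        ("C", "Epistemic Custody"),
        ("K", "Constraint Enforcement"),
        ("R", "Recovery / Repair"),
        ("S", "State Continuity"),
        ("U", "UI / Mediation"),
        ("Sc", "Social Coordination"),
        ("I", "Incentive Alignment"),
        ("L", "Legibility / Inspectability"),
        ("St", "Stewardship (non-ownership governance)"),
        ("P", "Portability / Auditability"),
        ("Att", "Attention (what participates in inference)"),
        ("Scope", "Scope (Epistemic Object Domain)"),
        ("T", "Temporal Coherence"),
        ("Int", "Intent Fidelity"),
        ("Nf", "Normative Fixity"),
    ])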

2.3 Normative Ordering of the Axes

The axes are applied in a fixed normative order reflecting structural dependency rather than importance. This ordering is preserved in all projection artefacts.
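
Because the order is fixed, preserving it in artefacts reduces to iterating order-as-data rather than sorting by salience or severity. A minimal sketch, with the hypothetical names NORMATIVE_ORDER and ordered_markings:

    # Hypothetical helper: the fixed normative order as data (codes
    # verbatim from Section 2.2).
    NORMATIVE_ORDER = ("A", "Ag", "C", "K", "R", "S", "U", "Sc", "I",
                       "L", "St", "P", "Att", "Scope", "T", "Int", "Nf")

    def ordered_markings(markings: dict) -> list:
        """Emit (axis, marking) pairs in normative order, independent of
        the order in which markings were recorded; unmarked axes default
        to '-', so every artefact carries all seventeen columns."""
        return [(axis, markings.get(axis, "-")) for axis in NORMATIVE_ORDER]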

2.4 Non-Substitutability and Independence

No axis can compensate for failure in another. Transparency does not restore authority; recovery does not legitimise unauthorised action. Each axis represents an independent governance obligation.
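
Operationally, non-substitutability forbids any aggregation or weighting step when judging a projection. A minimal sketch of this rule, assuming per-axis markings of "F" as in the sketches above:

    def governance_intact(markings: dict) -> bool:
        """Non-substitutability: governance holds only if no axis is
        marked 'F'. There is no weighting, averaging, or trade-off step,
        so a pass on one axis can never offset failure on another."""
        return "F" not in markings.values()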

2.5 Scope and Generality

The axes describe governance only; they do not measure performance, management quality, or intelligence. Their generality arises from abstracting away technology and institutional form while retaining structural specificity.

3. Methodology: Infarction Projection Across Governance Axes

3.1 Source Selection and Reference Boundary

Governing definitions are drawn from the corpus and treated as normative. All infarctions and actions projected in this paper are sourced from external, non-corpus references. This separation ensures the axes are tested against material not authored within the same conceptual framework.

3.2 Infarction and Action Identification

An infarction is a discrete governance breakdown resulting in loss of legitimacy, control, recoverability, or normative stability. An action is a deliberate behaviour that precipitates or entrenches such breakdown. Infarctions are identified through semantic analysis of source material in inference space.
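
For concreteness, an identified infarction or action can be held as a small record. The following dataclass is a hypothetical representation; the field names are assumptions of this sketch, not terms of the framework.

    from dataclasses import dataclass, field

    @dataclass
    class Infarction:
        """One projected governance breakdown (or precipitating action)."""
        description: str    # e.g. "Automated denial without appeal"
        source: str         # external, non-corpus reference
        kind: str           # "infarction" or "action"
        markings: dict = field(default_factory=dict)   # axis code -> "F"
        evidence: dict = field(default_factory=dict)   # axis code -> supporting text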

3.3 Evidence Discipline and F-Marking Rules

An axis is marked F only where semantic analysis of the source provides evidence that the corresponding governance dimension was absent, violated, or rendered ineffective. No axis is marked by analogy or from assumed intent.
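
This discipline can be enforced mechanically by refusing any marking that arrives without supporting text from the source analysis. A sketch, assuming dict-based markings and an evidence log as in the record above:

    def mark_failed(markings: dict, evidence: dict, axis: str, support: str) -> None:
        """Record an 'F' on `axis` only when `support` (a quotation or
        paraphrase from the external source analysis) is supplied. An
        empty string models marking by analogy or assumed intent, and
        is rejected; accepted evidence is retained for audit."""
        if not support.strip():
            raise ValueError(f"axis {axis!r}: no evidence, no marking")
        markings[axis] = "F"
        evidence[axis] = support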

3.4 Normative Ordering and Non-Substitutability

Axes are applied in normative order and cannot substitute for one another.

3.5 Projection Output and Validation

The output is a projection table mapping each infarction or action to the axes it activates. Validation consists of checking internal consistency and reproducibility: reapplying the methodology to the same sources must yield the same markings.
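
Assuming the projection step can be expressed as a deterministic function from source material to markings, reproducibility under reapplication is checkable mechanically. The names below are illustrative only:

    def validate_projection(project, sources, trials: int = 3) -> bool:
        """Reapply the projection to the same sources and require
        identical markings on every trial; any divergence fails
        validation. `project` is whatever callable produces the
        {source: markings} mapping."""
        baseline = project(sources)
        return all(project(sources) == baseline for _ in range(trials - 1))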

4. Cross-Domain Projections of Governance Failure

4.1 AI System Deployment Failures

External AI case studies demonstrate recurring failures of authority, epistemic custody, constraint enforcement, and normative fixity, often amplified by absent recovery mechanisms.

4.2 Automated Decision-Making and Institutional Breakdown

Institutional deference to automated outputs without preserved agency or stewardship results in authority inversion and incentive misalignment.

4.3 Organisational and Process Improvement Project Failures

Failed organisational initiatives exhibit the same governance failures observed in AI systems, despite absence of automation.

4.4 Comparative Axis Activation Patterns

Infarction / Action                                    A  Ag  C  K  R  S  U  Sc  I  L  St  P  Att  Scope  T  Int  Nf
Autonomous AI outputs treated as authoritative         F  -   -  -  -  -  -  -   -  -  -   -  -    -      -  -    -
Automated denial without appeal                        F  -   -  -  -  -  -  -   -  -  -   -  -    -      -  -    -
Failed process improvement without executive mandate   F  -   -  -  -  -  -  -   -  -  -   -  -    -      -  -    -

4.5 Summary

Cross-domain projections demonstrate that governance failures consistently activate the same axes across unrelated domains.

5. Implications for Governance Design, Audit, and Repair

Governance can be treated as a design space with inspectable failure modes rather than an emergent property.

6. Discussion: Generality, Falsifiability, and Scope

The framework is falsifiable in principle and diagnostic rather than causal.

7. Conclusion

Governance failure is structurally patterned and portable across domains. Axis-based diagnosis enables coherent design, audit, and repair of complex socio-technical systems.

References (External)

Knight First Amendment Institute. AI Agents and Democratic Resilience.
van Dis et al. The Emergence of Large Language Models as Tools in Literature Reviews.
Monte Carlo Data. 4 Famous AI Fails (& How To Avoid Them).
Evidently AI. When AI Goes Wrong: 13 Examples of AI Mistakes and Failures.
Antony et al. A Study into the Reasons for Process Improvement Project Failures.

References (Corpus)

Holland R B (2025-12-22T19:10Z). Cognitive Memoisation Corpus Map. https://publication.arising.com.au/Cognitive_Memoisation_Corpus_Map
Holland R B (2025-12-28T10:14Z). Identified Governance Failure Axes for LLM Platforms. https://publication.arising.com.au/Identified_Governance_Failure_Axes_for_LLM_Platforms
Holland R B (2026-01-24T02:02Z). When Evidence Is Not Enough: An Empirical Study of Authority Inversion and Integrity Failure in Conversational AI. https://publication.arising.com.au/When_Evidence_Is_Not_Enough_An_Empirical_Study_of_Authority_Inversion_and_Integrity_Failure