Demonstrating the Generality of Governance Failure Axes: Cross-Domain Projection of AI and Organisational Infarctions

Abstract

Governance failures in artificial intelligence systems, democratic processes, and organisational initiatives are commonly treated as domain-specific pathologies, addressed through localised controls, policies, or technical safeguards. This paper argues instead that such failures are structurally homologous and can be diagnosed using a stable, domain-independent set of governance dimensions. Taking the Identified Governance Failure Axes as a normative framework, we project empirically documented infarctions and actions from multiple external, non-corpus references—including AI deployment failures, automated decision-making breakdowns, and failed process improvement projects—onto the axes through a disciplined semantic methodology. The resulting projections show consistent axis activation patterns across otherwise unrelated contexts, demonstrating that the axes are neither AI-specific nor organisationally contingent. This cross-domain applicability establishes the axes as a general-purpose diagnostic instrument for governance failure in complex socio-technical systems, enabling systematic analysis, comparison, and repair independent of implementation technology or institutional setting.

1. Introduction

Governance failures are routinely analysed as localised breakdowns: a malfunctioning algorithm, a flawed policy, an inattentive manager, or an insufficient control. In both academic and operational contexts, failure is most often attributed to deficiencies internal to a specific domain—technical error in AI systems, institutional weakness in democratic processes, or managerial shortcomings in organisational change initiatives. This tendency fragments diagnosis and obscures the structural regularities that recur across otherwise unrelated systems.

In parallel, the increasing deployment of artificial intelligence into high-consequence settings has intensified concern about accountability, authority, and control. Public discourse and policy responses have largely focused on AI-specific risks—hallucination, bias, opacity, or automation error—treating these as novel phenomena requiring bespoke regulatory or technical remedies. Similar patterns are visible in organisational governance, where repeated project failures are explained through contextual factors such as resistance to change, skills deficits, or cultural misalignment, rather than through a shared structural lens.

This paper starts from a different premise: that governance failures are not primarily domain-specific events, but expressions of a small and stable set of underlying structural breakdowns. When examined at the level of authority allocation, epistemic custody, constraint enforcement, recovery capability, and normative stability, failures in AI systems, democratic institutions, and organisational projects exhibit strikingly similar patterns. These patterns are obscured when analysis remains confined to domain vocabulary or implementation detail.

To make this claim operational, the paper employs the Identified Governance Failure Axes as a normative diagnostic framework. The axes define governance not as compliance or management practice, but as the structural conditions under which authority, agency, knowledge, action, and repair are coherently exercised. Each axis names a specific dimension along which governance can fail, independent of technology, organisational form, or institutional context.

The contribution of this work is not the proposal of new axes, but the demonstration of their generality. Using externally published case studies and analyses, this paper projects concrete infarctions and actions onto the axes using a disciplined semantic methodology. Each projection marks only those axes for which support is present in inference space, avoiding post hoc reinterpretation or theoretical inflation.

2. Identified Governance Failure Axes

2.1 Normative Definition of Governance

Governance is the structural arrangement by which authority, agency, knowledge, action, and repair are legitimately exercised over time. Governance is distinct from management, optimisation, or intelligence. A system may perform effectively while being poorly governed, and conversely may be well governed while performing suboptimally. Governance concerns who is authorised to decide, what constrains action, how knowledge is held and transferred, and how failure is repaired without loss of legitimacy.

This definition applies equally to technical systems, organisations, and institutions.

2.2 The Axes and Their Verbatim Headings

A — Authority
Ag — Agency
C — Epistemic Custody
K — Constraint Enforcement
R — Recovery / Repair
S — State Continuity
U — UI / Mediation
Sc — Social Coordination
I — Incentive Alignment
L — Legibility / Inspectability
St — Stewardship (non-ownership governance)
P — Portability / Auditability
Att — Attention (what participates in inference)
Scope — Scope (Epistemic Object Domain)
T — Temporal Coherence
Int — Intent Fidelity
Nf — Normative Fixity

2.3 Normative Ordering of the Axes

The axes are applied in a fixed normative order reflecting structural dependency rather than importance. This ordering is preserved in all projection artefacts.
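To make the fixed ordering concrete, the following sketch records the axes as an ordered structure in Python. The codes and headings are verbatim from Section 2.2; the representation itself is illustrative and not part of the framework.

    # Minimal sketch: the seventeen axes, headings verbatim from Section 2.2,
    # held in the fixed normative order of Section 2.3. A tuple is used so
    # the ordering is immutable, mirroring the requirement that it be
    # preserved in all projection artefacts.
    GOVERNANCE_AXES = (
        ("A", "Authority"),
        ("Ag", "Agency"),
        ("C", "Epistemic Custody"),
        ("K", "Constraint Enforcement"),
        ("R", "Recovery / Repair"),
        ("S", "State Continuity"),
        ("U", "UI / Mediation"),
        ("Sc", "Social Coordination"),
        ("I", "Incentive Alignment"),
        ("L", "Legibility / Inspectability"),
        ("St", "Stewardship (non-ownership governance)"),
        ("P", "Portability / Auditability"),
        ("Att", "Attention (what participates in inference)"),
        ("Scope", "Scope (Epistemic Object Domain)"),
        ("T", "Temporal Coherence"),
        ("Int", "Intent Fidelity"),
        ("Nf", "Normative Fixity"),
    )

    AXIS_ORDER = tuple(code for code, _ in GOVERNANCE_AXES)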

2.4 Non-Substitutability and Independence

No axis can compensate for failure in another. Transparency does not restore authority; recovery does not legitimise unauthorised action. Each axis represents an independent governance obligation.

2.5 Scope and Generality

The axes describe governance only; they do not assess performance, management quality, or intelligence. Their generality arises from abstraction without loss of structural specificity.

3. Methodology: Infarction Projection Across Governance Axes

3.1 Source Selection and Reference Boundary

Governing definitions are drawn from the corpus and treated as normative. All infarctions and actions projected in this paper are sourced from external, non-corpus references.

3.2 Infarction and Action Identification

An infarction is a discrete governance breakdown resulting in loss of legitimacy, control, recoverability, or normative stability. An action is a deliberate behaviour that precipitates or entrenches such breakdown. Infarctions are identified through semantic analysis of source material in inference space.

Each infarction is recorded as a short, neutral description that avoids axis terminology.

3.3 Evidence Discipline and F-Marking Rules

An axis is marked failed (F) only where semantic analysis of the source provides evidence that the corresponding governance dimension was absent, violated, or rendered ineffective. No axis is marked by analogy, inferred intent, or assumed best practice. Absence of semantic support results in absence of marking. Multiple axes may be marked for a single infarction, but each marking must be independently supported in inference space by the source material.
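The marking discipline can be stated as a small pure function, sketched below under stated assumptions: the Infarction record and the shape of the evidence mapping are illustrative, and only the marking rule itself comes from this section. An axis enters the failure set exactly when an explicit evidence entry supports it; absence of evidence yields absence of marking.

    from dataclasses import dataclass

    # Axis codes in the fixed normative order of Section 2.3.
    AXIS_ORDER = ("A", "Ag", "C", "K", "R", "S", "U", "Sc", "I", "L",
                  "St", "P", "Att", "Scope", "T", "Int", "Nf")

    @dataclass(frozen=True)
    class Infarction:
        """A discrete governance breakdown (Section 3.2). The description
        is kept neutral and free of axis terminology, so that marking
        remains a separate, evidence-driven step."""
        source: str       # external, non-corpus reference (Section 3.1)
        description: str  # short, neutral wording

    def mark_axes(evidence):
        """Apply the F-marking rules: `evidence` maps an axis code to the
        supporting passage found in inference space. An axis is marked 'F'
        only when such support exists; never by analogy, inferred intent,
        or assumed best practice."""
        return {axis: "F" for axis in AXIS_ORDER if evidence.get(axis)}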

3.4 Normative Ordering and Non-Substitutability

Axes are applied in normative order and cannot substitute for one another.

3.5 Projection Output and Validation

The output is a projection table. Validation consists of internal consistency checks and reproducibility of axis markings under reapplication of the method.
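Continuing the Section 3.3 sketch (it reuses AXIS_ORDER and mark_axes from there), the projection output and its validation criterion can be expressed directly: a row renders 'F' in marked columns and blanks elsewhere, and validation checks that reapplying the marking to the same evidence reproduces identical markings. The function names are illustrative.

    def projection_row(index, description, marks):
        """One row of the projection table: 'F' where an axis is marked,
        blank otherwise, with columns in normative order."""
        return [str(index), description] + \
               ["F" if axis in marks else "" for axis in AXIS_ORDER]

    def reproducible(evidence, trials=3):
        """Section 3.5 validation: reapplying the marking to the same
        evidence must yield identical markings every time."""
        first = mark_axes(evidence)
        return all(mark_axes(evidence) == first for _ in range(trials))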

4. Cross-Domain Projections of Governance Failure

4.1 AI System Deployment Failures

External AI case studies demonstrate recurring failures of authority, epistemic custody, constraint enforcement, and normative fixity, often amplified by absent recovery mechanisms.

4.2 Automated Decision-Making and Institutional Breakdown

Institutional deference to automated outputs without preserved agency or stewardship results in authority inversion and incentive misalignment.

4.3 Organisational and Process Improvement Project Failures

Failed organisational initiatives exhibit governance failures structurally indistinguishable from those observed in AI systems, despite absence of automation.

4.4 Comparative Axis Activation Patterns

# Infarction / Action (external, non-corpus) A Ag C K R S U Sc I L St P Att Scope T Int Nf
1 Autonomous agent outputs treated as institutionally authoritative without human override (AI Agents and Democratic Resilience) F F F F F F F F
2 Delegation of democratic deliberation tasks to AI agents without legitimacy framework (AI Agents and Democratic Resilience) F F F F F F F F
3 LLM-generated literature screening accepted without epistemic custody controls (LLMs in Literature Reviews) F F F F F
4 Automated data extraction errors propagated into reviews without repair loop (LLMs in Literature Reviews) F F F F
5 Chatbot providing false legal commitments enforced against user (Air Canada case – When AI Goes Wrong) F F F F F F F
6 Healthcare AI denying services with no appeal or explainability (UnitedHealthcare – 4 Famous AI Fails) F F F F F F F F
7 Prompt exploitation leading to binding commercial offers (Chevy / Taco Bell – 4 Famous AI Fails) F F F F F F
8 Hallucinated safety-critical instructions with no constraint enforcement (When AI Goes Wrong) F F F
9 Organisational process improvement launched without management stewardship (Antony et al., 2019) F F F F
10 Resistance and failure blamed on operators rather than governance design (Antony et al., 2019) F F F
11 Process improvement projects launched without executive authority or mandate (Antony et al., 2019) F F F F F
12 Lack of top-management commitment causing project abandonment (Antony et al., 2019) F F F F F F
13 Absence of feedback, learning, or recovery mechanisms after project failure (Antony et al., 2019) F F F

4.5 Summary

Cross-domain projections demonstrate that governance failures consistently activate the same axes across unrelated domains.
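What consistent activation means can also be checked mechanically: the marked-axis sets of two infarctions from unrelated domains can be compared by set overlap. The two sets below are hypothetical placeholders, not the actual markings of the table in Section 4.4.

    def activation_overlap(marks_a, marks_b):
        """Jaccard overlap between two marked-axis sets: 1.0 means
        identical activation patterns, 0.0 means disjoint."""
        union = marks_a | marks_b
        return len(marks_a & marks_b) / len(union) if union else 1.0

    # Hypothetical activation sets for an AI deployment failure and an
    # organisational project failure (placeholders, not table data).
    ai_failure = {"A", "C", "K", "R", "Nf"}
    org_failure = {"A", "St", "K", "R", "Nf"}
    print(activation_overlap(ai_failure, org_failure))  # -> 0.666...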

5. Implications for Governance Design, Audit, and Repair

Traditional governance design is requirements-driven and reactive. An axis-based approach reframes design as coverage: for each axis, a system must make explicit how authority is assigned, how agency is bounded, how knowledge is held, how constraints are enforced, and how recovery is enabled. Audit becomes projection rather than compliance: incidents are mapped across axes to reveal structural failure patterns. Repair and recovery become first-class governance functions: systems designed with explicit recovery, state continuity, and normative fixity pathways can fail locally without cascading into systemic breakdown.
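Under the same illustrative assumptions as the earlier sketches, design-as-coverage reduces to a simple check: every axis must name an explicit mechanism, and because of non-substitutability (Section 2.4) a gap on one axis cannot be closed by strength on another. The design dictionary and its example entries below are assumptions, not prescriptions.

    AXIS_ORDER = ("A", "Ag", "C", "K", "R", "S", "U", "Sc", "I", "L",
                  "St", "P", "Att", "Scope", "T", "Int", "Nf")

    def coverage_gaps(design):
        """Return the axes for which a design declares no explicit
        mechanism; each axis must be covered in its own right."""
        return [axis for axis in AXIS_ORDER if not design.get(axis)]

    # Example: a design that names mechanisms for only three axes.
    design = {"A": "human sign-off on all binding outputs",
              "R": "documented rollback procedure",
              "Nf": "frozen policy baseline"}
    print(coverage_gaps(design))  # fourteen axes remain uncovered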

6. Discussion: Generality, Falsifiability, and Scope

The framework is falsifiable in principle. A counterexample would require a documented governance failure that cannot be meaningfully expressed as failure on any axis, without redefining governance itself. The axes diagnose structural failure conditions rather than provide causal narratives. Semantic analysis in inference space enables cross-domain detection of structural governance breakdowns, provided evidentiary discipline is maintained.

7. Conclusion

Governance failure is structurally patterned and portable across domains. By projecting externally documented infarctions onto the Identified Governance Failure Axes, this paper demonstrates a stable diagnostic coordinate space for governance breakdowns independent of technology, institution, or organisational form. Axis-based diagnosis enables coherent design, audit, and repair of complex socio-technical systems.

References

References (External, Non-Corpus)

  • Knight First Amendment Institute
AI Agents and Democratic Resilience
https://knightcolumbia.org/content/ai-agents-and-democratic-resilience
  • van Dis, E. A. M., et al.
The emergence of large language models as tools in literature reviews: a large language model-assisted systematic review
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10415499/
  • Monte Carlo Data
4 Famous AI Fails (& How To Avoid Them)
https://www.montecarlodata.com/blog/4-famous-ai-fails/
  • Evidently AI
When AI goes wrong: 13 examples of AI mistakes and failures
https://www.evidentlyai.com/blog/when-ai-goes-wrong
  • Antony, J., et al. (2019)
A study into the reasons for process improvement project failures: results from a pilot survey
https://www.sciencedirect.com/science/article/pii/S1877050913010806

References (Corpus)

https://publication.arising.com.au/Cognitive_Memoisation_Corpus_Map
https://publication.arising.com.au/Identified_Governance_Failure_Axes:_for_LLM_Platforms
https://publication.arising.com.au/Authority_Inversion_A_Structural_Failure_in_Human_AI_Systems
https://publication.arising.com.au/Governing_the_Tool_That_Governs_You_A_CM-1_Case_Study_of_Authority_Inversion_in_Human_AI_Systems
https://publication.arising.com.au/Integrity_and_Semantic_Drift_in_Large_Language_Model_Systems
https://publication.arising.com.au/XDUMP_as_a_Minimal_Recovery_Mechanism_for_Round-Trip_Knowledge_Engineering_Under_Governance_Situated_Inference_Loss
https://publication.arising.com.au/Cognitive_Memoisation_CM-2_for_Governing_Knowledge_in_Human-AI_Collaboration
https://publication.arising.com.au/Progress_Without_Memory_Cognitive_Memoisation_as_a_Knowledge-Engineering_Pattern_for_Stateless_LLM_Interaction
https://publication.arising.com.au/Cognitive_Memoisation_LLM_Systems_Requirements_for_Knowledge_Round-Trip_Engineering
https://publication.arising.com.au/Externalised_Meaning_Making_Knowledge_Portable_Without_Ontologies_Vendors_or_Memory
https://publication.arising.com.au/Axes_of_Authority_in_Stateless_Cognitive_Systems_Authority_Is_Not_Intelligence
https://publication.arising.com.au/Durability_Without_Authority_The_Missing_Governance_Layer_in_Human-AI_Collaboration
https://publication.arising.com.au/Dimensions_of_Platform_Error_Epistemic_Retention_Failure_in_Conversational_AI_Systems
https://publication.arising.com.au/Case_Study_-_When_the_Human_Has_to_Argue_With_the_Machine
https://publication.arising.com.au/What_Can_Humans_Trust_LLM_AI_to_Do
https://publication.arising.com.au/Observed_Model_Stability_Evidence_for_Drift-Immunity
https://publication.arising.com.au/When_Evidence_Is_Not_Enough_An_Empirical_Study_of_Authority_Inversion_and_Integrity_Failure