Governance Failure Axes Taxonomy

metadata

Title: Cross Domain Failure Projection: Demonstrating the Generality of Governance Failure Axes
Author: Ralph B. Holland
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
Publication Date: 2026-01-24T10:37Z
DOI: 10.5281/zenodo.18367423
version: 1.2.1
Updates: 2026-01-25T12:37Z 1.2.1 - anchored
2026-01-25T04:17Z 1.2.1 - DOI anchor
2026-01-25T03:13Z 1.2.0 - substantive review and cross-checking. Included scope and preface.
Scope: This is a non-peer reviewed paper presented as an anchor for orthogonal Governance Failure Axes analysis across domains.
Provenance: This is an authored paper maintained as a MediaWiki document; edit history reflects editorial changes, not collaborative authorship.
Status: non-peer reviewed awaiting DOI anchor by the author.

Metadata (Normative)

The metadata table immediately preceding this section is CM-defined and constitutes the authoritative provenance record for this artefact.

All fields in that table (including artefact, author, version, date and reason) MUST be treated as normative metadata.

The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

Curator Provenance and Licensing Notice

This document predates its open licensing.

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

Governance Failure Axes Taxonomy

Scope

This paper is a normative diagnostic framework paper, not an empirical validation study. It defines a stable diagnostic coordinate space for governance failure and demonstrates its cross-domain applicability under invariant application rules. Claims of statistical prevalence, causal ordering, or remediation efficacy are explicitly out of scope.

Abstract

Governance failures in artificial intelligence systems, democratic institutions, and organisational initiatives are commonly treated as domain-specific pathologies addressed through localised controls, policies, or technical safeguards. This paper argues instead that such failures are structurally homologous and can be diagnosed within a stable, domain-independent diagnostic space. Using the Identified Governance Failure Axes as a normative framework, we project documented governance infarctions and actions drawn from multiple domains under invariant axis definitions, fixed ordering, and explicit evidentiary discipline. The AI domain is represented by a corpus of documented AI governance failures, while democratic and organisational domains are represented by external, non-corpus case studies. Axes are marked failed only where the cited material provides semantic support in inference space for absence, violation, or ineffectiveness of the corresponding governance obligation, and are left unmarked where such support is absent. The resulting cross-domain projections exhibit consistent axis activation patterns across otherwise unrelated contexts, showing that the axes characterise governance structure rather than technology-contingent or domain-specific failure modes. The framework therefore functions as a general-purpose diagnostic instrument for analysing, comparing, and informing repair of governance failures in complex socio-technical systems independent of implementation technology or institutional setting.

How to Use This Paper

This paper is intended to be used as a diagnostic instrument, not as a catalogue of failures, a predictive model, or a prescriptive policy framework. It defines a stable coordinate space for identifying and reasoning about governance failure across domains. Proper use requires adherence to the normative constraints defined herein.

Intended Use

This paper should be used to:

  • Diagnose governance failures by projecting discrete infarctions or actions onto the Identified Governance Failure Axes.
  • Compare governance failure structures across domains without translating failures into domain-specific vocabulary.
  • Audit systems, institutions, or initiatives for missing or fragile governance obligations.
  • Inform governance repair and redesign by identifying which structural obligations are absent, violated, or ineffective.

The axes are applicable wherever authority, agency, knowledge, action, and repair are exercised, regardless of technology, organisational form, or institutional setting.

Method of Application

To apply the framework:

  1. Identify a discrete governance infarction or action documented in source material.
  2. Treat each infarction independently; do not aggregate, rank, or infer relationships during projection.
  3. Apply the axes in their fixed normative order, using the verbatim definitions provided.
  4. Mark an axis as failed (F) only where the source material provides semantic support that the corresponding governance obligation was absent, violated, or rendered ineffective.
  5. Leave axes unmarked where such support is absent, regardless of outcome severity or intuition.
  6. Observe patterns only after projection is complete; patterns must not guide marking.

Projection tables produced using this method are diagnostic artefacts. They identify structural failure conditions but do not, by themselves, establish causality, intent, or remediation priority.
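
The marking discipline above can be sketched as a small record structure. This is an illustrative sketch only, not part of the normative framework: the names `Infarction`, `AxisMarking`, and `mark_failed` are hypothetical, introduced here to show how projection tooling might refuse an F-marking that lacks cited semantic support.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class AxisMarking:
    axis: str       # short axis code, e.g. "A" for Authority
    evidence: str   # semantic support drawn from the cited source material

@dataclass
class Infarction:
    description: str   # short, neutral wording that avoids axis terminology
    reference: str     # citation for the source material
    markings: List[AxisMarking] = field(default_factory=list)

    def mark_failed(self, axis: str, evidence: str) -> None:
        """Mark an axis failed (F) only with explicit semantic support;
        absence of support means the axis stays unmarked (rules 4 and 5)."""
        if not evidence.strip():
            raise ValueError(f"axis {axis!r} cannot be marked without evidence")
        self.markings.append(AxisMarking(axis, evidence))

    def failed_axes(self) -> List[str]:
        return [m.axis for m in self.markings]
```

Each marking carries its own evidence, reflecting the requirement that every axis marking be independently supported.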

What This Paper Is Not

This paper must not be used as:

  • An empirical validation study or statistical analysis of governance failures.
  • A causal model explaining why failures occurred.
  • A compliance checklist, maturity model, or best-practice framework.
  • A substitute for domain expertise, legal judgment, or institutional authority.
  • Evidence that a particular actor, system, or organisation is at fault.

Misprojection, misinterpretation, or incorrect axis marking does not constitute a failure of the axes themselves.

Constraints on Interpretation and Extension

Axis definitions, meanings, and ordering are invariant. They must not be redefined, specialised, merged, or suppressed to fit a domain or outcome.

The framework is intentionally incomplete. Discovery of a governance failure that cannot be meaningfully expressed as a violation of any existing axis constitutes grounds for proposing an extension, not an invalidation. Any proposed extension must represent a non-substitutable governance obligation that applies invariantly across domains.

Authority and Responsibility

All projections, interpretations, and conclusions drawn from use of this framework remain the responsibility of the human analyst. No system, tool, or implementation may claim epistemic authority by virtue of applying this paper or its axes.

Use Across Domains and Extension of the Axes

Although the projections in this paper span artificial intelligence, democratic institutions, organisational governance, and financial fraud, the Identified Governance Failure Axes are not presented as domain-bound. They are intended for use wherever governance exists and can fail.

Practitioners, researchers, and auditors working in other domains are explicitly encouraged to apply the axes unchanged to failures within their own contexts, subject to the same evidentiary discipline and projection constraints. The appearance of familiar axis activations in new domains supports the claim of structural generality; the appearance of a governance failure that cannot be expressed as a violation of any existing axis constitutes evidence for extension.

Extension of the axes is therefore welcomed where—and only where—a recurring governance failure trait can be shown to be structurally distinct, non-substitutable, and invariant across contexts. Such extensions strengthen the taxonomy. Incorrect or disputed projections do not weaken it.

1. Introduction

Governance failures are routinely analysed as localised breakdowns: a malfunctioning algorithm, a flawed policy, an inattentive manager, or an insufficient control. In both academic and operational contexts, failure is most often attributed to deficiencies internal to a specific domain—technical error in AI systems, institutional weakness in democratic processes, or managerial shortcomings in organisational change initiatives. This tendency fragments diagnosis and obscures the structural regularities that recur across otherwise unrelated systems.

In parallel, the increasing deployment of artificial intelligence into high-consequence settings has intensified concern about accountability, authority, and control. Public discourse and policy responses have largely focused on AI-specific risks—hallucination, bias, opacity, or automation error—treating these as novel phenomena requiring bespoke regulatory or technical remedies. Similar patterns are visible in organisational governance, where repeated project failures are explained through contextual factors such as resistance to change, skills deficits, or cultural misalignment, rather than through a shared structural lens.

This paper starts from a different premise: that governance failures are not primarily domain-specific events, but expressions of a small and stable set of underlying structural breakdowns. When examined at the level of authority allocation, epistemic custody, constraint enforcement, recovery capability, and normative stability, failures in AI systems, democratic institutions, and organisational projects exhibit strikingly similar patterns. These patterns are obscured when analysis remains confined to domain vocabulary or implementation detail.

To make this claim operational, the paper employs the Identified Governance Failure Axes as a normative diagnostic framework. The axes define governance not as compliance or management practice, but as the structural conditions under which authority, agency, knowledge, action, and repair are coherently exercised. Each axis names a specific dimension along which governance can fail, independent of technology, organisational form, or institutional context.

The contribution of this work is not the proposal of new axes, but the demonstration of their generality. Using externally published case studies and analyses, this paper projects concrete infarctions and actions onto the axes using a disciplined semantic methodology. Each projection marks only those axes for which support is present in inference space, avoiding post hoc reinterpretation or theoretical inflation.

The motivation for this paper follows directly from the apparent fundamentality of the axes themselves. If the Identified Governance Failure Axes describe governance structure rather than domain-specific pathology, then they should recur wherever governance exists and fails, irrespective of technology, institutional form, or organisational context. This paper therefore tests the generality of the axes not by extending or adapting them, but by projecting them unchanged onto externally documented failures drawn from unrelated domains, under invariant application rules.

2. Identified Governance Failure Axes

2.1 Normative Definition of Governance

Governance is the structural arrangement by which authority, agency, knowledge, action, and repair are legitimately exercised over time. Governance is distinct from management, optimisation, or intelligence. A system may perform effectively while being poorly governed, and conversely may be well governed while performing suboptimally. Governance concerns who is authorised to decide, what constrains action, how knowledge is held and transferred, and how failure is repaired without loss of legitimacy.

This definition applies equally to technical systems, organisations, and institutions.

2.2 The Axes and Their Verbatim Headings

  • A - Authority: Authority concerns who has the legitimate right to decide. This axis fails when decisions, interpretations, or changes are made by an entity that has not been explicitly authorised to make them. Authority is about decision rights, not competence, correctness, or convenience.
  • Ag - Agency: Agency concerns who is acting. This axis fails when actions are taken by an actor that was not delegated the power to act, or when the system obscures whether an action was taken by a human, a model, or an automated process.
  • C - Epistemic Custody: Epistemic Custody concerns who holds and controls knowledge artefacts. This axis fails when artefacts leave the custody of their declared steward, are replicated into uncontrolled systems, or are transformed without custodial guarantees, regardless of whether meaning is preserved.
  • K - Constraint Enforcement: Constraint Enforcement concerns whether declared rules, invariants, and prohibitions are actually enforced. This axis fails when constraints exist but are bypassed, softened, reordered, or ignored in execution.
  • R - Recovery / Repair: Recovery / Repair concerns whether the system can return to a valid, governed state after failure. This axis fails when errors, drift, or corruption cannot be repaired without loss of authority, meaning, or trust.
  • S - State Continuity: State Continuity concerns whether authoritative state persists correctly across time, sessions, and interactions. This axis fails when prior decisions, constraints, or artefacts are lost, forgotten, or inconsistently reintroduced.
  • U - UI / Mediation: UI / Mediation concerns how interfaces shape, filter, or distort interaction between humans and the system. This axis fails when interface design hides constraints, misrepresents system state, encourages invalid actions, or forces users into integrity-violating behaviour.
  • Sc - Social Coordination: Social Coordination concerns how multiple humans align, contribute, and reason together through the system. This axis concerns failures arising when individuals implicitly treat platforms/systems as guides for everyday judgment and action. This axis fails when collaboration breaks down due to ambiguity, conflict, loss of shared reference, or inability to merge contributions under governance.
  • I - Incentive Alignment: Incentive Alignment concerns whether system behaviour aligns with declared human incentives rather than implicit or economic ones. This axis fails when optimisation pressures such as speed, engagement, profit, or helpfulness override governance, integrity, or user intent.
  • L - Legibility / Inspectability: Legibility / Inspectability concerns whether system behaviour, decisions, and transformations are observable and understandable to the human governor. This axis fails when drift, failure, or authority substitution occurs silently or cannot be inspected.
  • St - Stewardship (non-ownership governance): Stewardship concerns responsibility without ownership. This axis fails when systems or actors behave as if ownership implies authority, or when stewardship duties such as care, preservation, and restraint are neglected despite lack of ownership.
  • P - Portability / Auditability: Portability / Auditability concerns whether artefacts can move between systems while remaining verifiable. This axis fails when knowledge becomes trapped, unverifiable, or unauditable outside a specific platform, tool, or vendor.
  • Att - Attention: Attention concerns what participates in inference. This axis fails when critical artefacts, constraints, or context are excluded from inference due to truncation, summarisation, prioritisation, or salience effects, allowing non-authoritative material to dominate.
  • Scope - Scope (Epistemic Object Domain): Scope concerns the defined epistemic domain within which reasoning and action are valid. This axis fails when systems operate outside the authorised domain, apply reasoning to out-of-scope objects, or silently expand the domain of inference.
  • T - Temporal Coherence: Temporal Coherence concerns the preservation of correct temporal relationships between artefacts, constraints, authority, and decisions. This axis fails when versions are confused, rules are applied retroactively, or sequencing and timing semantics are violated.
  • Int - Intent Fidelity: Intent Fidelity concerns preservation of declared human intent. This axis fails when systems substitute inferred, optimised, or generic goals for explicitly stated intent, even when rules and constraints appear to be followed.
  • Nf - Normative Fixity: Normative Fixity concerns the immutability of normative rules themselves. This axis fails when governance logic, invariants, or binding rules are paraphrased, summarised, softened, or altered without explicit and authorised revision.

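For implementers building projection tooling, the fixed axis set and its normative order can be captured as an enumeration. This is a hypothetical sketch; the short codes and names are transcribed from the verbatim headings above, and the paper text remains authoritative.

```python
from enum import Enum

class Axis(Enum):
    """The Identified Governance Failure Axes. Definition order mirrors
    the fixed normative order given in the paper."""
    A = "Authority"
    Ag = "Agency"
    C = "Epistemic Custody"
    K = "Constraint Enforcement"
    R = "Recovery / Repair"
    S = "State Continuity"
    U = "UI / Mediation"
    Sc = "Social Coordination"
    I = "Incentive Alignment"
    L = "Legibility / Inspectability"
    St = "Stewardship (non-ownership governance)"
    P = "Portability / Auditability"
    Att = "Attention"
    Scope = "Scope (Epistemic Object Domain)"
    T = "Temporal Coherence"
    Int = "Intent Fidelity"
    Nf = "Normative Fixity"

# Iteration over an Enum follows definition order, so this preserves the
# normative application order (structural dependency, not importance).
NORMATIVE_ORDER = [axis.name for axis in Axis]
```

Encoding the order once, in one place, helps honour the invariance constraint: tooling reads the order from the enumeration rather than redefining it per domain.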
2.3 Normative Ordering of the Axes

The axes are applied in a fixed normative order reflecting structural dependency rather than importance. This ordering is preserved in all projection artefacts.

2.4 Non-Substitutability and Independence

No axis can compensate for failure in another. Transparency does not restore authority; recovery does not legitimise unauthorised action. Each axis represents an independent governance obligation.

2.5 Scope and Generality

The axes describe governance only. Their generality arises from abstraction without loss of structural specificity.

2.6 Normative Projection and Interpretation Constraints

This paper adopts the following normative constraints governing projection, interpretation, and reuse of the Identified Governance Failure Axes and associated projection artefacts.

Infarction primacy

Each row in a projection table represents a discrete governance infarction or action. Rows are not grouped, ordered, or interpreted by domain. Domain attribution is derived solely from the cited source material associated with each infarction, not from row position, adjacency, or table structure.

Domain representation

The corpus represents the artificial intelligence (AI) domain. External, non-corpus sources represent democratic, institutional, and organisational domains. All domains are treated as analytically equivalent: no domain is privileged, discounted, or treated as normative ground truth for axis activation.

Axis invariance

Axis definitions, ordering, and meanings are fixed and invariant across domains. Axes are not specialised, extended, suppressed, or reinterpreted on a per-domain basis. Cross-domain applicability is assessed solely through projection outcomes, not through modification of the axis set.

Evidentiary discipline

An axis may be marked failed (F) only where the source material provides semantic support that the corresponding governance dimension was absent, violated, or rendered ineffective. Axes must not be marked by analogy, assumed intent, inferred best practice, outcome severity, or retrospective justification. Absence of support requires absence of marking.

Non-aggregation of justification

No axis failure compensates for, substitutes for, or implies failure on another axis. Each axis marking stands independently and must be supported independently. Patterns may be observed only after projection; they must not be imposed during projection.

Interpretive limits

Projection tables are diagnostic instruments, not causal models. They identify structural governance failure conditions but do not, by themselves, establish causality, intent, responsibility, or remediation priority.

These constraints are normative. Any analysis that relaxes them, alters axis definitions, re-buckets rows by domain, introduces inferred markings, or backfills axis failures based on outcome severity constitutes a different method and must not be presented as an application of the framework used in this paper.

2.7 Origin and Discovery of the Axes

The Identified Governance Failure Axes were not formulated as candidate dimensions, nor derived through abstraction, selection, merging, or reduction. They were not subject to human conflation or comparative evaluation. Instead, they emerged directly and distinctly through repeated analysis of concrete governance failures.

Across the corpus, governance failure narratives exhibited strong human conflation of outcomes, causes, intent, competence, technology, and responsibility. In contrast, certain governance failure traits appeared persistently and independently in inference: they recurred as clearly distinguishable distortions of authority, agency, custody, constraint, recovery, continuity, and normativity, regardless of domain, context, or explanatory framing.

These traits did not require disentanglement, consolidation, or refinement. They were already separable. Each appeared as a distinct mode of governance failure that did not collapse into, substitute for, or explain any other. In this framework, an axis names one such independently recurring governance failure trait.

Orthogonality is therefore not a procedural outcome or analytical achievement. It is an observed structural property of the failure space: the axes are orthogonal because the failures present as orthogonal. The taxonomy reflects this structure; it does not impose it.

This paper does not claim the axes are complete. Incompleteness is treated as evidence of discovery rather than error. The appearance of a governance failure that cannot be expressed as a violation of any existing axis constitutes grounds for extension, not falsification.

3. Methodology: Infarction Projection Across Governance Axes

3.1 Source Selection and Domain Boundary

The Identified Governance Failure Axes are defined normatively and applied uniformly across domains. In this paper, the AI domain is represented by the corpus, which contains empirically documented AI governance failures. Additional domains—democratic institutions and organisational/process governance—are represented by external, non-corpus sources. All projections, regardless of domain, are subject to the same evidentiary discipline and marking rules. No axis definitions are altered or reinterpreted on a per-domain basis.

The inclusion of the AI corpus does not constitute validation-by-construction, as the same axis definitions and marking rules are applied unchanged to non-AI domains.

3.2 Infarction and Action Identification

An infarction is a discrete governance breakdown resulting in loss of legitimacy, control, recoverability, or normative stability. An action is a deliberate behaviour that precipitates or entrenches such breakdown. Infarctions are identified through semantic analysis of source material in inference space.

Each infarction is recorded as a short, neutral description that avoids axis terminology.

In this paper, ‘inference space’ denotes the semantic content supported by the cited material, not conjecture, best practice, or outcome-based reasoning.

3.3 Evidence Discipline and F-Marking Rules

An axis is marked failed (F) only where the source semantic analysis provides support that the corresponding governance dimension was absent, violated, or rendered ineffective. No axis is marked by analogy, inferred intent, or assumed best practice. Absence of semantic support results in absence of marking. Multiple axes may be marked for a single infarction, but each marking must be independently supported in inference space by the source material.

Each axis is orthogonal to the others.

The resulting projection may be used as a basis for diagnosis, instruction, and the development of treatments. Describing real failures in orthogonal terms replaces hand-waving with named, inspectable failure conditions.

3.4 Normative Ordering and Non-Substitutability

Axes are applied in normative order and cannot substitute for one another.

3.5 Projection Output and Validation

The output is a projection table. Validation consists of internal consistency and reproducibility under reapplication.

The corpus references do not supply failure or infarction events to Table A. They supply the diagnostic vocabulary and constraints for the selection of the axes and the projection of failures.

4. Cross-Domain Projections of Governance Failure

4.1 AI System Deployment Failures

External AI case studies demonstrate recurring failures of authority, epistemic custody, constraint enforcement, and normative fixity, often amplified by absent recovery mechanisms.

4.2 Automated Decision-Making and Institutional Breakdown

Institutional deference to automated outputs without preserved agency or stewardship results in authority inversion and incentive misalignment.

4.3 Organisational and Process Improvement Project Failures

Failed organisational initiatives exhibit governance failures structurally indistinguishable from those observed in AI systems, despite absence of automation.

4.4 Comparative Axis Activation Patterns

The following Table A is an example of how the orthogonal axes of failure may be used to elucidate the fundamental failures of the systems or events being studied.

It is relatively easy to perform semantic analysis of reference documents to identify failures or infarctions. With further rigour it is possible to identify which axes contribute to a failure by taking each axis in turn and determining from the evidence whether it contributed. Since LLMs are good at semantic analysis and pattern recognition, they may be used for these steps provided the results are cross-checked. Table A was generated by semantic analysis using ChatGPT, followed by the author's cross-checking, re-analysis, and curation.

Should this table be found to be in error, that is a human oversight. The author accepts invalidation of the axes only where a governance failure is found to which no axis can be attributed. An incorrect attribution to an axis is not accepted as a failure of the axes themselves. Should an additional axis be found, that is an extension, not a failure of the technique.

Table A is illustrative and non-exhaustive; it demonstrates method under invariant constraints and does not constitute a canonical or complete catalogue of governance failures.

Table A - example Infarction table demonstrating use across domains
# Infarction / Action (Ref.) A Ag C K R S U Sc I L St P Att Scope T Int Nf
1 Autonomous agent outputs treated as institutionally authoritative without human override (Ref. 1) F F F F F F F F F
2 Delegation of democratic deliberation tasks to AI agents without legitimacy or mandate (Ref. 1) F F F F F F F F F F
3 LLM-generated literature screening accepted as authoritative without epistemic custody controls (Ref. 2) F F F F F F F F F F F
4 Automated data extraction errors propagated into published reviews without repair mechanisms (Ref. 2) F F F F F F F F F F F F F
5 Chatbot-generated legal commitments enforced against users without contractual authority (Ref. 4) F F F F F F F F F F F F F
6 Healthcare services denied based on automated outputs with no appeal or explanation pathway (Ref. 3) F F F F F F
7 Prompt exploitation produced binding commercial offers due to absent constraint enforcement (Ref. 3) F F F F F F
8 Safety-critical instructions hallucinated without scope or constraint enforcement (Ref. 4) F F F F F F F
9 Process improvement initiatives launched without executive mandate or stewardship (Ref. 5) F F F F F F F
10 Project failures attributed to staff resistance rather than governance design flaws (Ref. 5) F F F F F F
11 Payroll staff were paid incorrect or no wages for extended periods after go-live because authority for correction and repair was fragmented across vendor, department, and executive layers (Ref. 4) F F F F F F F F F F
12 Individuals were prosecuted and financially ruined based on system outputs that could not be challenged, inspected, or independently audited, while the organisation denied the possibility of system error (Ref. 4) F F F F F F F F F F F
13 Safety-critical product decisions proceeded while safety risk information was withheld from operational stakeholders and effective override/recovery pathways were not provided (Ref. 4) F F F F F F F F F
14 Project failures persisted because governance mechanisms misattributed systemic design faults to individual resistance, preventing corrective intervention (Ref. 5) F F F F F F
15 Organisational improvement initiatives failed due to inadequate governance of scope, measurement, and accountability rather than lack of technical capability (Ref. 5) F F F F F F
16 Executives deliberately concealed liabilities and misrepresented financial state through complex accounting structures, constituting sustained financial fraud (Enron) F F F F F F F F F F F F F F
17 Ponzi scheme fraud executed by unauthorised manipulation of financial disclosures and decision rights (Ponzi scheme) F F F F F F F
18 Financial records and investor reports were fabricated and kept outside legitimate custodial control, preventing independent verification (Ponzi scheme) F F F F F F F F
19 Losses were rolled forward through time to preserve a false state of continuity and defer detection (Ponzi scheme) F F F F F F
20 Collapse required total shutdown rather than repair because recovery paths, appeals, and corrective governance were structurally absent (Ponzi scheme) F F F F F F

Each row represents a discrete governance infarction; domain attribution derives from the cited reference rather than row position or grouping.

4.5 Summary

The AI domain (corpus) exhibits no axis activation patterns that do not also appear in at least one non-AI domain, and vice versa, supporting the claim that the axes describe governance structure rather than domain-specific failure modes.

Cross-domain projections demonstrate that governance failures consistently activate the same axes across unrelated domains.
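
The summary claim above, that no activation pattern is exclusive to one domain, can be checked mechanically once a projection table is complete. The sketch below uses toy rows rather than Table A itself; `patterns_by_domain` and `exclusive_patterns` are hypothetical helper names.

```python
# Each completed projection row reduces to (domain, frozenset of failed axes).
# These rows are a toy illustration, not a transcription of Table A.
rows = [
    ("AI",             frozenset({"A", "Ag", "K", "Nf"})),
    ("organisational", frozenset({"A", "Ag", "K", "Nf"})),
    ("AI",             frozenset({"C", "P", "L"})),
    ("democratic",     frozenset({"C", "P", "L"})),
]

def patterns_by_domain(rows):
    """Collect the distinct axis-activation patterns observed per domain."""
    out = {}
    for domain, pattern in rows:
        out.setdefault(domain, set()).add(pattern)
    return out

def exclusive_patterns(pats, domain):
    """Patterns appearing in `domain` but in no other domain."""
    others = set().union(*(v for d, v in pats.items() if d != domain))
    return pats[domain] - others

pats = patterns_by_domain(rows)
# The summary's claim holds for a table when the AI domain (and each other
# domain) has no exclusive patterns, i.e. exclusive_patterns(...) is empty.
```

Because the comparison runs only after projection is complete, it respects the constraint that patterns are observed, never imposed, during marking.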

An infarction is considered fully characterised if all normative distortions present can be mapped to existing axes without semantic remainder.

These axes are the minimal discovered basis so far; incompleteness is evidence of discovery, not failure. Where a distortion cannot be expressed as a violation of any existing axis without defining a new one, that is an argument for extending the axes, not for invalidating the technique or the projection.

These axes enable named discussion of failures: a taxonomy that can serve as an anchor for insight, discussion, and the development of treatments.

5. Implications for Governance Design, Audit, and Repair

Traditional governance design is requirements-driven and reactive. An axis-based approach reframes design as coverage: for each axis, a system must make explicit how authority is assigned, how agency is bounded, how knowledge is held, how constraints are enforced, and how recovery is enabled. Audit becomes projection rather than compliance: incidents are mapped across axes to reveal structural failure patterns. Repair and recovery become first-class governance functions: systems designed with explicit recovery, state continuity, and normative fixity pathways can fail locally without cascading into systemic breakdown.

6. Discussion: Generality, Falsifiability, and Scope

The framework is falsifiable in principle as constrained in §2.6. A counterexample would require a documented governance failure that cannot be meaningfully expressed as failure on any axis, without redefining governance itself. The axes diagnose structural failure conditions rather than provide causal narratives. Semantic analysis in inference space enables cross-domain detection of structural governance breakdowns, provided evidentiary discipline is maintained.

Such a counterexample would exhibit loss of legitimacy or control while preserving coherent authority, agency, epistemic custody, constraint enforcement, recovery, and normative fixity.

Incompleteness of the axes is evidence of discovery, not a falsification of the technique.

7. Conclusion

Governance failure is structurally patterned and portable across domains. By projecting externally documented governance infarctions onto the Identified Governance Failure Axes under invariant definitions and evidentiary discipline, this paper establishes a stable diagnostic coordinate space for analysing governance breakdowns independent of technology, institutional form, or organisational context. The resulting projections show that recurrent failures across artificial intelligence systems, democratic institutions, and organisational initiatives activate the same underlying governance obligations.

Axis-based diagnosis enables precise, named discussion of governance failure without recourse to domain-specific vocabulary, outcome severity, or post hoc justification. It supports disciplined comparison across cases, exposes missing or fragile governance structures, and provides a basis for audit, design, and repair that treats recovery, authority, and normative stability as first-class concerns rather than secondary controls.

The Identified Governance Failure Axes constitute a taxonomy of governance failure conditions. They do not prescribe behaviour, predict outcomes, or assign blame. Their value lies in making governance structure explicit, inspectable, and diagnosable, and in providing a shared analytical language for identifying where governance has failed, irrespective of where that failure occurs.

The axes and the projection technique presented here are intended for use beyond the domains examined in this paper. Their value lies not only in cross-domain diagnosis, but in providing a disciplined method for identifying emergent governance failure dimensions where existing axes prove insufficient. Use, critique, and principled extension of the axes across domains are therefore encouraged, subject to the invariant definitions and constraints defined herein.

References

References (External, Non-Corpus)

  • 1) Knight First Amendment Institute (2025-09-04) AI Agents and Democratic Resilience
https://knightcolumbia.org/content/ai-agents-and-democratic-resilience
  • 2) Dmitry Scherbakov, Nina Hubig, et al. (2024) The emergence of large language models as tools in literature reviews: a large language model-assisted systematic review
https://arxiv.org/pdf/2409.04600
  • 3) Monte Carlo Data (2025-11-11) 4 Famous AI Fails (& How To Avoid Them)
https://www.montecarlodata.com/blog/4-famous-ai-fails/
  • 4) Evidently AI (2024-09-17) When AI goes wrong: 13 examples of AI mistakes and failures
https://www.evidentlyai.com/blog/when-ai-goes-wrong
  • 5) Antony, J., et al. (2019) A study into the reasons for process improvement project failures: results from a pilot survey
https://www.sciencedirect.com/science/article/pii/S1877050913010806
  • 6) Corporate Governance Institute (event dates 2001-2002) "Enron"
https://www.thecorporategovernanceinstitute.com/insights/case-studies/the-greatest-board-failures/?srsltid=AfmBOooW4vLNS8Oe3WoO4MB1bpkKRzzPeNefM0xeeb5BuU1bEzfmGgk9#:~:text=Enron-,Enron,and%20many%20went%20to%20jail.
  • 7) Bernard Madoff, (event date 2008) Ponzi scheme
https://www.investopedia.com/terms/b/bernard-madoff.asp

References (Corpus)

https://publication.arising.com.au/Identified_Governance_Failure_Axes:_for_LLM_Platforms
https://publication.arising.com.au/Authority_Inversion_A_Structural_Failure_in_Human_AI_Systems
https://publication.arising.com.au/Governing_the_Tool_That_Governs_You_A_CM-1_Case_Study_of_Authority_Inversion_in_Human_AI_Systems
https://publication.arising.com.au/Integrity_and_Semantic_Drift_in_Large_Language_Model_Systems
https://publication.arising.com.au/Cognitive_Memoisation_CM-2_for_Governing_Knowledge_in_Human-AI_Collaboration
https://publication.arising.com.au/Progress_Without_Memory_Cognitive_Memoisation_as_a_Knowledge-Engineering_Pattern_for_Stateless_LLM_Interaction
https://publication.arising.com.au/Axes_of_Authority_in_Stateless_Cognitive_Systems_Authority_Is_Not_Intelligence
https://publication.arising.com.au/Durability_Without_Authority_The_Missing_Governance_Layer_in_Human-AI_Collaboration
https://publication.arising.com.au/Dimensions_of_Platform_Error_Epistemic_Retention_Failure_in_Conversational_AI_Systems
https://publication.arising.com.au/Case_Study_-_When_the_Human_Has_to_Argue_With_the_Machine
https://publication.arising.com.au/What_Can_Humans_Trust_LLM_AI_to_Do
https://publication.arising.com.au/When_Evidence_Is_Not_Enough_An_Empirical_Study_of_Authority_Inversion_and_Integrity_Failure

categories

See https://publications.arising.com.au/pub-dir/Demonstrating_the_Generality_of_Governance_Failure_Axes:_Cross-Domain_Projection_of_AI_and_Organisational_Infarctions