Identified Governance Failure Axes for LLM platforms

== Literature Semantics vs CM Governance Axes ==
<!-- Normative MWDUMP -->
<!-- Purpose: expose axis conflation by mapping literature failure semantics to CM axes -->
<!-- Conventions: ✓ indicates the literature term describes failure on that axis -->
{| class="wikitable"
! Literature Failure Semantics
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| Loss of human control
|  || ✓ ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||
|-
| Delegation of decisions to AI
| ✓ || ✓ ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓ || ✓ ||
|-
| Over-reliance on AI advice
| ✓ || ✓ ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  || ✓ || ✓ ||
|-
| Erosion of agency
|  || ✓ ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||
|-
| Deskilling / cognitive offloading
|  || ✓ || ✓ ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓ ||
|-
| Loss of epistemic control
| ✓ || ✓ || ✓ ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||
|-
| Lack of meaningful oversight
|  ||  ||  || ✓ || ✓ ||  ||  ||  ||  || ✓ ||  ||  ||  ||  ||
|-
| Failure to recover from error
|  ||  ||  ||  || ✓ || ✓ ||  ||  ||  ||  ||  ||  || ✓ ||  ||
|-
| Inability to distinguish AI from humans
| ✓ ||  ||  ||  || ✓ ||  ||  || ✓ ||  || ✓ ||  ||  ||  ||  ||
|-
| Behavioural degradation (trust, fairness)
|  ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  ||  ||  ||
|-
| Engagement-driven dependence
|  ||  ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓ ||
|-
| Silent assumption shift
|  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||  ||  || ✓ || ✓ ||
|-
| Context loss framed as memory failure
|  ||  ||  ||  ||  || ✓ || ✓ ||  ||  || ✓ ||  ||  || ✓ ||  ||
|-
| Advice leakage into normative domains
| ✓ || ✓ ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||
|-
| Explainability without inspectability
|  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓
|}
<!-- Reading guide:
- Rows are literature semantics (authors' terms).
- Columns are CM governance axes.
- Multiple ✓ per row indicate conflation across orthogonal axes.
-->
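
The conflation claim in the reading guide can be made mechanical: each literature term maps to the set of axis codes it marks, and a term conflates axes exactly when that set has more than one member. A minimal sketch in Python, with two rows transcribed from the table above; the names <code>CROSSWALK</code> and <code>conflated_terms</code> are illustrative assumptions, not CM vocabulary.

<syntaxhighlight lang="python">
# Two rows transcribed from the crosswalk table; axis codes follow its header.
CROSSWALK = {
    "Loss of human control": {"Ag", "Scope"},
    "Over-reliance on AI advice": {"A", "Ag", "I", "Att", "Scope"},
}

def conflated_terms(crosswalk):
    """Terms whose failure semantics span more than one orthogonal axis
    (i.e. rows with multiple ✓)."""
    return {term: axes for term, axes in crosswalk.items() if len(axes) > 1}

# Both example rows collapse several orthogonal axes into a single term.
print(conflated_terms(CROSSWALK))
</syntaxhighlight>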


Cognitive Memoisation: Extended Governance Axes

Meta-Statement

This page records a first-principles projection of governance axes used to analyse failure in human–AI systems. Axes are treated as orthogonal unless explicitly stated otherwise. Words are treated as handles to concepts, not as definitions. Dangling cognates are preserved intentionally.

---

Core Postulates

  • Governance failure is multi-axis and non-reducible.
  • Orthogonal axes are routinely conflated in literature and practice.
  • Epistemic Objects (EO) do not act on governance axes directly.
  • Externalised Artefacts (EA) mediate EO participation in governance.
  • Thought capture is the creation of scope (Universe of Discourse) usable in inference.
  • Failure may occur without model error, inaccuracy, or hallucination.
  • Recording failures is a first-class epistemic act.

---

Ontological Distinction

{| class="wikitable"
! Term !! Role
|-
| EO (Epistemic Object) || Unit of meaning, belief, assumption, or concept
|-
| EA (Externalised Artefact) || Material or symbolic carrier enabling governance participation
|-
| UoD (Universe of Discourse) || Declared world over which inference is meaningful
|-
| Thought Bubble || Provisional, non-authoritative EA
|-
| Dangling Cognate || Unresolved concept preserved without forced resolution
|}
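
One illustrative reading of these handles as a type sketch. Python serves as notation only; every class and method name below is a provisional assumption, not CM vocabulary. The mediation postulate is the point: EOs never act on governance directly, and only EAs are admitted to a Universe of Discourse.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass(frozen=True)
class EO:
    """Epistemic Object: a unit of meaning; never acts on governance axes directly."""
    handle: str

@dataclass(frozen=True)
class EA:
    """Externalised Artefact: the carrier through which an EO participates."""
    carries: EO
    provisional: bool = True  # a Thought Bubble is a provisional, non-authoritative EA

class UoD:
    """Universe of Discourse: the declared world over which inference is meaningful."""

    def __init__(self) -> None:
        self._admitted: list[EA] = []

    def admit(self, artefact: EA) -> None:
        # Mediation: only artefacts are admitted, never bare EOs.
        self._admitted.append(artefact)

    def in_scope(self) -> list[EO]:
        # Thought capture: the EOs usable in inference are exactly those
        # carried by an admitted artefact.
        return [ea.carries for ea in self._admitted]

uod = UoD()
uod.admit(EA(carries=EO("assumption: axes are orthogonal")))
print([eo.handle for eo in uod.in_scope()])
</syntaxhighlight>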

---

Governance Axes (Extended)

{| class="wikitable"
! Axis Code !! Axis Name !! Handle / Conceptual Role
|-
| A || Authority || Who is treated as epistemically authoritative
|-
| Ag || Agency || Who performs action or decision
|-
| C || Epistemic Custody || Who retains ownership/control of knowledge
|-
| K || Constraint Enforcement || Whether declared invariants are applied
|-
| R || Recovery / Repair || Ability to recover after failure or loss
|-
| S || State Continuity || Persistence of state across interaction
|-
| U || UI / Mediation || Distortion introduced by interface or interaction
|-
| Sc || Social Coordination || Effects on trust, fairness, cooperation
|-
| I || Incentive Alignment || What behaviours the system economically rewards
|-
| L || Legibility / Inspectability || Ability to see what the system is doing now
|-
| St || Stewardship || Governance without ownership or enclosure
|-
| P || Portability / Auditability || Vendor-neutral durability and traceability
|-
| Att || Attention || What participates in inference at a given moment
|-
| Scope || Scope / Universe of Discourse || What world is assumed for reasoning
|-
| Art || Articulation || EA form without implied authority or commitment
|}
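
Read as a data structure, the table above is a coordinate system: a failure report is a subset of fifteen independent axes, and membership of one axis never entails membership of another. A minimal sketch; the enum and its member names are a hypothetical encoding keyed by the table's axis codes.

<syntaxhighlight lang="python">
from enum import Enum

class Axis(Enum):
    """The fifteen extended governance axes, keyed by their table codes."""
    A = "Authority"
    AG = "Agency"
    C = "Epistemic Custody"
    K = "Constraint Enforcement"
    R = "Recovery / Repair"
    S = "State Continuity"
    U = "UI / Mediation"
    SC = "Social Coordination"
    I = "Incentive Alignment"
    L = "Legibility / Inspectability"
    ST = "Stewardship"
    P = "Portability / Auditability"
    ATT = "Attention"
    SCOPE = "Scope / Universe of Discourse"
    ART = "Articulation"

# Orthogonality: a report is any subset of axes; no axis implies another.
report = {Axis.A, Axis.AG, Axis.SCOPE}
assert len(Axis) == 15
</syntaxhighlight>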

---

Failure Projection (F)

F = explicit demonstration of failure on a governance axis. The count records how many of the fifteen axes each document marks F.

{| class="wikitable"
! Document !! Axes marked F (of 15)
|-
| Authority Inversion || 4
|-
| Governing the Tool That Governs You || 6
|-
| From UI Failure to Logical Entrapment || 8
|-
| Post-Hoc CM Recovery Collapse (Negative Result) || 8
|-
| Looping the Loop with No End in Sight || 5
|-
| When Training Overrides Logic || 1
|-
| Dimensions of Platform Error || 6
|-
| Case Study – Argue With the Machine || 4
|-
| Episodic Failure: Tied-in-a-Knot Chess || 2
|-
| XDUMP (baseline failure motivation) || 9
|-
| CM-2 Self-Hosting Epistemic Capture || 5
|}
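
The projection is deliberately partial, as the notes below record: an F is a demonstrated failure, and the absence of F asserts nothing. A minimal sketch of that asymmetry; the function name and the example entry are placeholders, not corpus data.

<syntaxhighlight lang="python">
from typing import Optional

def status(projection: dict[str, set[str]], document: str, axis: str) -> Optional[bool]:
    """Three-valued reading: True = failure explicitly demonstrated (F);
    None = axis left uninstantiated.  There is no False branch, because
    absence of F is not evidence of success."""
    return True if axis in projection.get(document, set()) else None

# Placeholder entry for illustration only -- not a row from the corpus table.
example = {"Example Document": {"A", "Ag"}}
print(status(example, "Example Document", "A"))  # True: F recorded
print(status(example, "Example Document", "L"))  # None: uninstantiated, not success
</syntaxhighlight>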

---

Notes on Dangling Cognates

  • No axis implies another.
  • Failure on one axis does not entail failure on others.
  • Some documents intentionally leave axes uninstantiated.
  • Absence of F is not evidence of success.
  • Terminology remains provisional where concepts are not yet closed.

---

Closing Handle

CM is not a framework imposed on cognition. CM is cognition externalising itself under governance.

Cognitive Memoisation: Governance Axes, Failures, and External Corroboration

Meta-Statement

This page records an extended projection of governance axes used to analyse failure in human–AI systems. The projection integrates:

  • internal corpus documents (case studies and negative results), and
  • independent external literature and policy references.

External references are treated as corroborating signals, not sources of epistemic authority.

---

Core Postulates

  • Governance failure is multi-axis and non-reducible.
  • Orthogonal axes are routinely conflated in academic and public discourse.
  • Epistemic Objects (EO) do not act on governance axes directly.
  • Externalised Artefacts (EA) mediate EO participation in governance.
  • Thought capture is the creation of scope (Universe of Discourse) usable in inference.
  • Failure may occur without model error, inaccuracy, or hallucination.
  • Naming axes is a prerequisite for governing them.

---

Ontological Handles

{| class="wikitable"
! Handle !! Role
|-
| EO (Epistemic Object) || Unit of meaning, belief, assumption, or concept
|-
| EA (Externalised Artefact) || Carrier enabling EO participation in governance
|-
| UoD (Universe of Discourse) || Declared world over which inference is meaningful
|-
| Thought Bubble || Provisional, non-authoritative EA
|-
| Dangling Cognate || Preserved but unresolved conceptual handle
|}

---

Governance Axes (Extended)

{| class="wikitable"
! Axis Code !! Axis Name !! Conceptual Handle
|-
| A || Authority || Who is treated as epistemically authoritative
|-
| Ag || Agency || Who performs action or decision
|-
| C || Epistemic Custody || Who retains ownership/control of knowledge
|-
| K || Constraint Enforcement || Whether declared invariants are applied
|-
| R || Recovery / Repair || Ability to recover after failure or loss
|-
| S || State Continuity || Persistence of state across interaction
|-
| U || UI / Mediation || Distortion introduced by interface or interaction
|-
| Sc || Social Coordination || Effects on trust, fairness, cooperation
|-
| I || Incentive Alignment || What behaviours the system economically rewards
|-
| L || Legibility / Inspectability || Ability to see what the system is doing now
|-
| St || Stewardship || Governance without ownership or enclosure
|-
| P || Portability / Auditability || Vendor-neutral durability and traceability
|-
| Att || Attention || What participates in inference at a given moment
|-
| Scope || Scope / Universe of Discourse || What world is assumed for reasoning
|-
| Art || Articulation || EA form without implied authority or commitment
|}

---

Failure Projection: Corpus Documents (F)

F = explicit demonstration of failure on a governance axis. The count records how many of the fifteen axes each document marks F.

{| class="wikitable"
! Document !! Axes marked F (of 15)
|-
| Authority Inversion || 4
|-
| Governing the Tool That Governs You || 6
|-
| From UI Failure to Logical Entrapment || 8
|-
| Post-Hoc CM Recovery Collapse (Negative Result) || 8
|-
| Looping the Loop with No End in Sight || 5
|-
| Dimensions of Platform Error || 6
|-
| Case Study – When the Human Has to Argue With the Machine || 4
|-
| XDUMP (baseline failure motivation) || 9
|}

---

External References (Non-Authoritative Evidence)

  • Elon University, Imagining the Internet Center. The Future of Human Agency and AI (2035).
  • PNAS Nexus (Oxford Academic). Large Language Models as Decision-Makers and Human Social Behaviour.
  • Ada Lovelace Institute. The Dilemmas of Delegation: AI, Decision-Making, and Human Agency.
  • arXiv. Can You Trust an LLM With Life-Changing Decisions?
  • arXiv. Measuring Over-Reliance on Large Language Models.
  • Experts, Novices, and AI Delegation Decisions in Uncertain Environments.

---

Critical Crosswalk: Literature Concepts vs CM Governance Axes

{| class="wikitable"
! Author / Source Concept
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| "Loss of human control" (Elon Univ)
|  || ✓ ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||
|-
| "Delegation of decisions to AI" (PNAS Nexus)
| ✓ || ✓ ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓ || ✓ ||
|-
| "Over-reliance on AI advice" (arXiv)
| ✓ || ✓ ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  || ✓ || ✓ ||
|-
| "Erosion of agency" (Ada Lovelace Institute)
|  || ✓ ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || ✓ ||
|-
| "Deskilling / cognitive offloading"
|  || ✓ || ✓ ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓ ||
|-
| "Lack of meaningful oversight"
|  ||  ||  || ✓ || ✓ ||  ||  ||  ||  || ✓ ||  ||  ||  ||  ||
|-
| "Inability to distinguish AI from humans" (PNAS)
| ✓ ||  ||  ||  || ✓ ||  ||  || ✓ ||  || ✓ ||  ||  ||  ||  ||
|-
| "Behavioural degradation (trust, fairness)"
|  ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  ||  ||  ||
|-
| "Engagement-driven dependence"
|  ||  ||  ||  ||  ||  ||  ||  || ✓ ||  ||  ||  ||  || ✓ ||
|}
<!-- ✓ placements follow the Literature Semantics vs CM Governance Axes table above. -->

---

Synthesis Handle

Independent literature repeatedly identifies failures that map cleanly onto CM governance axes, but typically collapses multiple axes into single terms such as “over-reliance”, “loss of control”, or “alignment”.

CM makes these axes explicit, orthogonal, and governable.

---

Closing

CM is a record of cognition externalising itself under governance. References are signals. Axes are handles. Failure is data.
