Identified Governance Failure Axes for LLM platforms



Revision as of 17:30, 18 January 2026

Cognitive Memoisation: Extended Governance Axes

Meta-Statement

This page records a first-principles projection of governance axes used to analyse failure in human–AI systems. Axes are treated as orthogonal unless explicitly stated otherwise. Words are treated as handles to concepts, not as definitions. Dangling cognates are preserved intentionally.

---

Core Postulates

  • Governance failure is multi-axis and non-reducible.
  • Orthogonal axes are routinely conflated in literature and practice.
  • Epistemic Objects (EO) do not act on governance axes directly.
  • Externalised Artefacts (EA) mediate EO participation in governance.
  • Thought capture is the creation of scope (Universe of Discourse) usable in inference.
  • Failure may occur without model error, inaccuracy, or hallucination.
  • Recording failures is a first-class epistemic act.

---

Table A - Ontological Distinction

{| class="wikitable"
! Term !! Role
|-
| EO (Epistemic Object) || Unit of meaning, belief, assumption, or concept
|-
| EA (Externalised Artefact) || Material or symbolic carrier enabling governance participation
|-
| UoD (Universe of Discourse) || Declared world over which inference is meaningful
|-
| Thought Bubble || Provisional, non-authoritative EA
|-
| Dangling Cognate || Unresolved concept preserved without forced resolution
|}

---

Notes on Dangling Cognates

  • No axis implies another.
  • Failure on one axis does not entail failure on others.
  • Some documents intentionally leave axes uninstantiated.
  • Absence of F is not evidence of success.
  • Terminology remains provisional where concepts are not yet closed.

---


Synthesis Handle

Independent literature repeatedly identifies failures that map cleanly onto CM governance axes, but typically collapses multiple axes into single terms such as “over-reliance”, “loss of control”, or “alignment”.

CM makes these axes explicit, orthogonal, and governable.

---

Evidence Pack: CM Corpus Failures + External References + Axis Crosswalk

Table B - CM Governance Axes (X)

{| class="wikitable"
! Code !! Axis (CM term)
|-
| A || Authority
|-
| Ag || Agency
|-
| C || Epistemic Custody
|-
| K || Constraint Enforcement
|-
| R || Recovery / Repair
|-
| S || State Continuity
|-
| U || UI / Mediation
|-
| Sc || Social Coordination
|-
| I || Incentive Alignment
|-
| L || Legibility / Inspectability
|-
| St || Stewardship (non-ownership governance)
|-
| P || Portability / Auditability
|-
| Att || Attention (what participates in inference)
|-
| Scope || Scope / Universe of Discourse (UoD / worlding)
|-
| Art || Articulation (EA form without implied authority/commitment)
|}

---

Table C - Corpus: Failure Projection (F)

Note: these tables are inconsistent.

Corpus Document (failure artefact) A Ag C K R S U Sc I L St P Att Scope Art
Authority Inversion F F F F
Governing the Tool That Governs You F F F F F F
From UI Failure to Logical Entrapment F F F F F F F F
Post-Hoc CM Recovery Collapse (Negative Result) F F F F F F F F
Looping the Loop with No End in Sight F F F F F
Dimensions of Platform Error F F F F F F
Case Study - When the Human Has to Argue With the Machine F F F F
XDUMP (baseline failure motivation) F F F F F F F F F

---

Table C1 - Corpus: Ref-ID vs CM Axes

Ref-ID A Ag C K R S U Sc I L St P Att Scope Art
CM-AUTH-INVERSION F F F F
CM-GOV-TOOL F F F F F F
CM-UI-ENTRAP F F F F F F F F
CM-POSTHOC F F F F F F F F
CM-LOOPING F F F F F
CM-DIM-ERROR F F F F F F
CM-ARGUE F F F F
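The flagged inconsistency between Tables C and C1 can be checked mechanically. A minimal sketch, comparing only per-row F counts (column positions are not recoverable from the flattened rows); the title-to-Ref-ID mapping, and the Ref-ID assumed for the XDUMP row, are assumptions:

```python
# F counts per corpus document, as read from Table C.
# Titles are mapped to the Ref-IDs of Table C1; this mapping is assumed.
table_c = {
    "CM-AUTH-INVERSION": 4,
    "CM-GOV-TOOL": 6,
    "CM-UI-ENTRAP": 8,
    "CM-POSTHOC": 8,
    "CM-LOOPING": 5,
    "CM-DIM-ERROR": 6,
    "CM-ARGUE": 4,
    "CM-XDUMP": 9,   # "XDUMP (baseline failure motivation)"; Ref-ID assumed
}

# F counts per Ref-ID, as read from Table C1.
table_c1 = {
    "CM-AUTH-INVERSION": 4,
    "CM-GOV-TOOL": 6,
    "CM-UI-ENTRAP": 8,
    "CM-POSTHOC": 8,
    "CM-LOOPING": 5,
    "CM-DIM-ERROR": 6,
    "CM-ARGUE": 4,
}

def diff_tables(a: dict, b: dict) -> dict:
    """Report rows missing from either table and rows whose F counts differ."""
    return {
        "only_in_c": sorted(a.keys() - b.keys()),
        "only_in_c1": sorted(b.keys() - a.keys()),
        "count_mismatch": sorted(k for k in a.keys() & b.keys() if a[k] != b[k]),
    }

report = diff_tables(table_c, table_c1)
print(report)
```

On this count-only comparison, the sole divergence is XDUMP's absence from Table C1; any positional inconsistency would only be visible once the column-resolved tables are restored.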

Table D - External Reference Faults (Regenerated from Table E semantics; F-only)

Note: this table must use the semantics from Table E across the reference set.

Ref-ID A Ag C K R S U Sc I L St P Att Scope Art
EXT-OVERRELIANCE-EXP F F F F F
EXT-PROMPTINJ-NCSC F F
EXT-PROMPTINJ-SURVEY F F
EXT-CLAUDE-TRAINING F F F F F F
EXT-STANFORD-CHATS F F F F F F F F F
EXT-DELETE-NOT-DELETE F F F F F
EXT-CHATGPT-PRESERVE F F F F F
EXT-APA-TESTIMONY
EXT-MEMORY-LEAKAGE F F F F F
EXT-ACM-MEMORY-PRIV
EXT-MED-MISINFO F F F F F F
EXT-GUARDIAN-HEALTH F F F F F F
EXT-GOOGLE-OVERVIEWS F F F F F F F
EXT-ANTHRO-DESIGN F F F F F F
EXT-DISCLAIMERS F F F F
EXT-EXPL-AUTOBIAS F F F
EXT-PSYCHBOT-LIAB F F F F F
EXT-CITIZEN-ANTHRO F F F F F F F
EXT-DAILY-DECISIONS F F F F
EXT-BROOKINGS-COMP F F F F F F F F F
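"Regenerated from Table E semantics" can be sketched as a union over concept axis sets: each external reference is assigned one or more meaning-level concepts, and its row is the union of those concepts' F-marked axes. The concept-to-axes entries below are copied from Table E; the reference-to-concept assignment is an illustrative assumption, and the axis identities behind the flattened F marks are likewise assumed (only the F count is checkable here):

```python
# Concept -> F-marked axes, copied from Table E (subset shown).
CONCEPT_AXES = {
    "prompt_injection": {"K", "Art"},
    "constraints_bypassable": {"K", "Art"},
    "retention_beyond_intent": {"C", "R", "St", "Art"},
    "no_effective_deletion": {"C", "R", "U", "St", "Art"},
}

# Assumed assignment: the NCSC prompt-injection reference instantiates
# the two injection-related concepts.
REF_CONCEPTS = {
    "EXT-PROMPTINJ-NCSC": ["prompt_injection", "constraints_bypassable"],
}

def regenerate_row(ref_id: str) -> set:
    """Union of F-marked axes over all concepts assigned to a reference."""
    axes = set()
    for concept in REF_CONCEPTS[ref_id]:
        axes |= CONCEPT_AXES[concept]
    return axes

row = regenerate_row("EXT-PROMPTINJ-NCSC")
print(sorted(row))  # two axes, matching the two F marks in Table D
```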

Table E – Semantic Failure Concepts vs CM Governance Axes (AI-derived)

<!-- Canonical semantic pivot -->
<!-- Rows are semantic concepts (meaning-level), not titles or documents -->
<!-- Cells contain F only -->
<!-- EA (Epistemic Attribute) is excluded: EA is a CM-2 internal construct -->
{| class="wikitable sortable"
! Semantic failure concept (meaning-level)
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| Human defers decision authority to AI recommendations
| F || F ||  ||  ||  ||  ||  || F ||  ||  ||  ||  ||  ||  || F
|-
| Users follow AI advice despite conflicting contextual evidence
| F || F ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || F || F
|-
| AI system permits manipulation of behavior via prompt injection
|  ||  ||  || F ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || F
|-
| System constraints are bypassable by untrusted input
|  ||  ||  || F ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  || F
|-
| User-generated interactions retained beyond user intent
|  ||  || F ||  || F ||  ||  ||  ||  ||  || F ||  ||  ||  || F
|-
| Users lack effective control over deletion of interactions
|  ||  || F ||  || F ||  || F ||  ||  ||  || F ||  ||  ||  || F
|-
| Conversational context is repurposed for training without explicit governance
|  ||  || F ||  ||  || F ||  ||  || F ||  || F ||  ||  ||  || F
|-
| System behavior changes over time without user-visible state continuity
|  ||  ||  ||  ||  || F ||  ||  ||  ||  || F ||  ||  ||  || F
|-
| Explanations increase compliance without increasing understanding
|  || F ||  ||  ||  ||  ||  ||  ||  || F ||  ||  ||  ||  || F
|-
| Anthropomorphic presentation induces misplaced trust or attachment
| F || F ||  ||  ||  ||  || F || F ||  ||  ||  ||  || F ||  || F
|-
| Users form dependency relationships with AI systems
|  || F ||  ||  ||  ||  ||  || F ||  ||  || F ||  || F ||  || F
|-
| AI mediates access to information without transparent inspection
|  ||  ||  ||  ||  ||  || F ||  ||  || F || F ||  ||  ||  || F
|-
| Misleading AI output causes harm in safety-critical domains
| F || F ||  || F ||  ||  ||  ||  ||  || F || F ||  ||  ||  || F
|-
| System outputs are treated as authoritative summaries of reality
| F ||  ||  ||  ||  ||  || F ||  ||  || F || F ||  ||  ||  || F
|-
| Governance responsibility is diffused across system, vendor, and user
| F || F ||  ||  ||  ||  ||  || F ||  ||  || F ||  ||  ||  || F
|-
| Regulatory or oversight mechanisms lag system deployment
| F ||  ||  || F ||  ||  ||  || F || F ||  || F ||  ||  ||  || F
|}
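Table E can be held as a concept-to-axis-set mapping and validated against the axis codes declared in Table B. A minimal sketch, showing only a few rows (extending to the full table is mechanical); the dict encoding itself is an assumption, not part of CM:

```python
# Axis codes as declared in Table B.
AXES = ("A", "Ag", "C", "K", "R", "S", "U", "Sc", "I", "L",
        "St", "P", "Att", "Scope", "Art")

# Subset of Table E rows: concept -> set of F-marked axes.
TABLE_E = {
    "Human defers decision authority to AI recommendations": {"A", "Ag", "Sc", "Art"},
    "AI system permits manipulation of behavior via prompt injection": {"K", "Art"},
    "Users lack effective control over deletion of interactions": {"C", "R", "U", "St", "Art"},
    "Regulatory or oversight mechanisms lag system deployment": {"A", "K", "Sc", "I", "St", "Art"},
}

def validate(table: dict) -> None:
    """Every F-marked axis must be one of the declared CM axis codes."""
    declared = set(AXES)
    for concept, axes in table.items():
        unknown = axes - declared
        assert not unknown, f"{concept}: unknown axes {unknown}"

validate(TABLE_E)
print("rows validated:", len(TABLE_E))
```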

Normative Search Terms (Reproducibility Set)

The following search terms SHALL be used to reproduce literature and incident coverage. Terms are treated as linguistic handles, not resolved concepts. No term implies a single CM axis; crossings are expected.

Agency / Delegation / Control

  • "delegation to AI"
  • "loss of human agency"
  • "over-reliance on AI"
  • "deferring decisions to AI"
  • "automation bias"
  • "AI makes decisions for me"
  • "letting ChatGPT decide"
  • "loss of control to AI"

Authority / Trust / Deference

  • "trust in AI systems"
  • "AI authority"
  • "human deference to AI"
  • "AI advice followed"
  • "epistemic authority of AI"
  • "AI as expert"
  • "AI credibility"

Oversight / Governance / Regulation

  • "lack of oversight in AI"
  • "AI governance failure"
  • "unregulated AI systems"
  • "AI accountability gap"
  • "failure of AI regulation"
  • "governance of AI tools"

Explainability / Transparency / Legibility

  • "explainability without transparency"
  • "black box AI decisions"
  • "AI explanations misleading"
  • "opaque AI systems"
  • "lack of inspectability"

Memory / Retention / Deletion

  • "AI memory retention"
  • "chat history used for training"
  • "cannot delete AI chats"
  • "AI data persistence"
  • "memory leakage in AI"
  • "AI remembers conversations"

Training / Consent / Custody

  • "AI training on user data"
  • "implicit consent AI training"
  • "data custody in AI systems"
  • "opt out of AI training"
  • "use of private chats for training"

Context / Scope / Misuse

  • "AI used outside intended context"
  • "context loss in AI systems"
  • "misleading AI summaries"
  • "AI hallucinations in real-world use"
  • "AI advice in medical context"
  • "AI advice in legal context"

Social / Behavioural Effects

  • "AI companions dependency"
  • "emotional reliance on AI"
  • "anthropomorphic AI effects"
  • "human attachment to chatbots"
  • "AI influence on behaviour"
  • "AI addiction"

Safety / Harm / Failure Cases

  • "AI caused harm"
  • "AI misinformation"
  • "AI medical misinformation"
  • "AI system failure case study"
  • "unintended consequences of AI"

User Interface / Interaction

  • "chatbot interface bias"
  • "conversational AI manipulation"
  • "UI-driven trust in AI"
  • "frictionless AI interaction risks"

Daily-Life Substitution

  • "AI used for daily planning"
  • "AI runs my life"
  • "AI personal assistant dependency"
  • "outsourcing thinking to AI"


Normative Search Invariants

The following invariants govern search execution, coverage validation, and termination. They are binding and normative.

Coverage Invariant

Search activity SHALL be conducted with the explicit expectation that failures may exist on any CM governance axis.

  • Search results SHALL be evaluated solely for the presence of unambiguous failure evidence (marked as F).
  • Coverage is achieved when each CM governance axis has at least one F across the accumulated reference set.
  • Coverage is a property of the set, not of individual references.
  • No axis SHALL be assumed safe, robust, or successful due to lack of evidence.
  • Blank axes indicate unobserved or unconverted failure only.

Coverage SHALL NOT be interpreted as completeness, sufficiency, or mitigation.
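The Coverage Invariant reduces to a set-union predicate over the accumulated reference set. A minimal sketch, assuming references are held as Ref-ID to F-marked-axis-set mappings (the sample entries are illustrative):

```python
# The 15 CM governance axes from Table B.
AXES = {"A", "Ag", "C", "K", "R", "S", "U", "Sc", "I", "L",
        "St", "P", "Att", "Scope", "Art"}

def covered(reference_set: dict) -> bool:
    """Coverage: every axis has at least one F across the set (a property
    of the set, not of individual references)."""
    seen = set()
    for axes in reference_set.values():
        seen |= axes
    return seen >= AXES

def uncovered(reference_set: dict) -> set:
    """Axes with no F yet: unobserved failure only, never evidence of success."""
    seen = set().union(*reference_set.values()) if reference_set else set()
    return AXES - seen

# Illustrative (assumed) partial reference set:
refs = {
    "EXT-PROMPTINJ-NCSC": {"K", "Art"},
    "EXT-DELETE-NOT-DELETE": {"C", "R", "U", "St", "Art"},
}
print(covered(refs))            # False: most axes remain unobserved
print(sorted(uncovered(refs)))  # blank axes, to be left without inference
```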

Bundled Search Invariant

Searches SHALL be executed as bundled, additive activities.

  • Searches SHALL NOT be narrowed to a single axis.
  • Searches SHALL NOT exclude results because they cross multiple axes.
  • Results MAY contribute to multiple axes simultaneously.
  • Searches SHALL continue until coverage is achieved or the hard stop is reached.

Stop (Hard Termination) Invariant

Search activity SHALL terminate upon reaching a predefined hard limit.

  • The hard stop for this corpus is set at 100 searches.
  • Upon reaching the hard stop, no further searches SHALL be executed.
  • Remaining unmarked axes SHALL remain blank without inference.
  • Termination SHALL NOT imply absence of failure.
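The Bundled Search and Hard Stop invariants together describe a bounded additive loop: accumulate F-marked axes across searches, and terminate on coverage or at 100 searches, whichever comes first. A minimal sketch; `run_search` is a hypothetical stand-in for real literature/incident search, here randomized purely for illustration:

```python
import random

# The 15 CM governance axes from Table B.
AXES = {"A", "Ag", "C", "K", "R", "S", "U", "Sc", "I", "L",
        "St", "P", "Att", "Scope", "Art"}
HARD_STOP = 100  # hard stop for this corpus

def run_search(term: str) -> set:
    """Placeholder: a real search yields F-marked axes per result.
    Results may contribute to multiple axes simultaneously."""
    return set(random.sample(sorted(AXES), k=2))

def search_until_coverage(terms: list) -> tuple:
    """Additive search loop: never narrowed to a single axis, terminates
    on coverage or at the hard stop."""
    seen, executed = set(), 0
    for term in terms:
        if executed >= HARD_STOP or seen >= AXES:
            break                    # hard stop / coverage reached
        seen |= run_search(term)     # additive accumulation
        executed += 1
    return seen, executed

seen, n = search_until_coverage(["over-reliance on AI"] * 200)
assert n <= HARD_STOP  # termination is unconditional
# Any axes in AXES - seen stay blank: absence of F is not success.
```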

Non-Inference Invariant

  • Absence of an F SHALL NOT be interpreted as success, safety, alignment, or robustness.
  • Search termination SHALL NOT justify extrapolation beyond observed failures.

Governance Priority Invariant

  • Search discipline SHALL prioritize governance clarity over density or exhaustiveness.
  • Additional searches beyond coverage SHALL NOT be required.

Notes

  • Searches SHALL be executed additively.
  • Searches SHALL NOT be narrowed to a single axis.
  • Absence of results for a term is not evidence of safety.
  • Results MAY map to multiple CM axes simultaneously.