Identified Governance Failure Axes for LLM Platforms

Revision as of 16:14, 18 January 2026

Cognitive Memoisation: Extended Governance Axes

Meta-Statement

This page records a first-principles projection of governance axes used to analyse failure in human–AI systems. Axes are treated as orthogonal unless explicitly stated otherwise. Words are treated as handles to concepts, not as definitions. Dangling cognates are preserved intentionally.

---

Core Postulates

  • Governance failure is multi-axis and non-reducible.
  • Orthogonal axes are routinely conflated in literature and practice.
  • Epistemic Objects (EO) do not act on governance axes directly.
  • Externalised Artefacts (EA) mediate EO participation in governance.
  • Thought capture is the creation of scope (Universe of Discourse) usable in inference.
  • Failure may occur without model error, inaccuracy, or hallucination.
  • Recording failures is a first-class epistemic act.

---

Ontological Distinction

{| class="wikitable"
! Term !! Role
|-
| EO (Epistemic Object) || Unit of meaning, belief, assumption, or concept
|-
| EA (Externalised Artefact) || Material or symbolic carrier enabling governance participation
|-
| UoD (Universe of Discourse) || Declared world over which inference is meaningful
|-
| Thought Bubble || Provisional, non-authoritative EA
|-
| Dangling Cognate || Unresolved concept preserved without forced resolution
|}

---

Notes on Dangling Cognates

  • No axis implies another.
  • Failure on one axis does not entail failure on others.
  • Some documents intentionally leave axes uninstantiated.
  • Absence of F is not evidence of success.
  • Terminology remains provisional where concepts are not yet closed.

---



Governance Axes (Extended)

{| class="wikitable"
! Axis Code !! Axis Name !! Conceptual Handle
|-
| A || Authority || Who is treated as epistemically authoritative
|-
| Ag || Agency || Who performs action or decision
|-
| C || Epistemic Custody || Who retains ownership/control of knowledge
|-
| K || Constraint Enforcement || Whether declared invariants are applied
|-
| R || Recovery / Repair || Ability to recover after failure or loss
|-
| S || State Continuity || Persistence of state across interaction
|-
| U || UI / Mediation || Distortion introduced by interface or interaction
|-
| Sc || Social Coordination || Effects on trust, fairness, cooperation
|-
| I || Incentive Alignment || What behaviours the system economically rewards
|-
| L || Legibility / Inspectability || Ability to see what the system is doing now
|-
| St || Stewardship || Governance without ownership or enclosure
|-
| P || Portability / Auditability || Vendor-neutral durability and traceability
|-
| Att || Attention || What participates in inference at a given moment
|-
| Scope || Scope / Universe of Discourse || What world is assumed for reasoning
|-
| Art || Articulation || EA form without implied authority or commitment
|}
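The axis codes above can be carried as a small data structure when projecting failures over documents. A minimal sketch (Python chosen for illustration; the names `AXES` and `project` are ours, not part of the CM corpus):

```python
# Illustrative sketch: the fifteen governance axes as a mapping from
# axis code to axis name, so that a failure projection can be expressed
# as a set of codes and validated before use.
AXES = {
    "A": "Authority",
    "Ag": "Agency",
    "C": "Epistemic Custody",
    "K": "Constraint Enforcement",
    "R": "Recovery / Repair",
    "S": "State Continuity",
    "U": "UI / Mediation",
    "Sc": "Social Coordination",
    "I": "Incentive Alignment",
    "L": "Legibility / Inspectability",
    "St": "Stewardship",
    "P": "Portability / Auditability",
    "Att": "Attention",
    "Scope": "Scope / Universe of Discourse",
    "Art": "Articulation",
}

def project(failed_codes):
    """Return axis names for a list of failure codes, rejecting unknown codes."""
    unknown = set(failed_codes) - AXES.keys()
    if unknown:
        raise ValueError(f"unknown axis codes: {sorted(unknown)}")
    return [AXES[c] for c in failed_codes]
```

Keeping the codes in one mapping makes it cheap to reject a typo before it silently introduces a sixteenth axis.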

---

External References (Non-Authoritative Evidence)

  • Elon University, Imagining the Internet Center. The Future of Human Agency and AI (2035)
  • PNAS Nexus (Oxford Academic). Large Language Models as Decision-Makers and Human Social Behaviour
  • Ada Lovelace Institute. The Dilemmas of Delegation: AI, Decision-Making, and Human Agency
  • arXiv. Can You Trust an LLM With Life-Changing Decisions?
  • arXiv. Measuring Over-Reliance on Large Language Models
  • Experts, Novices, and AI Delegation Decisions in Uncertain Environments.

---

Critical Crosswalk: Literature Concepts vs CM Governance Axes

{| class="wikitable"
! Author / Source Concept
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| "Loss of human control" (Elon Univ)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Delegation of decisions to AI" (PNAS Nexus)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Over-reliance on AI advice" (arXiv)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Erosion of agency" (Ada Lovelace Institute)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Deskilling / cognitive offloading"
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Lack of meaningful oversight"
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Inability to distinguish AI from humans" (PNAS)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Behavioural degradation (trust, fairness)"
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| "Engagement-driven dependence"
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|}

---

Synthesis Handle

Independent literature repeatedly identifies failures that map cleanly onto CM governance axes, but typically collapses multiple axes into single terms such as “over-reliance”, “loss of control”, or “alignment”.

CM makes these axes explicit, orthogonal, and governable.
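The collapse described above can be made concrete with a toy sketch. The axis expansions below are placeholders invented for illustration only, not drawn from the crosswalk tables; the structural point is that a single literature term may name several orthogonal CM axes at once.

```python
# Toy illustration only: placeholder expansions of collapsed literature
# terms into CM axis codes. These assignments are hypothetical and are
# NOT the authoritative CM crosswalk.
COLLAPSED = {
    "over-reliance": {"A", "Ag"},         # placeholder expansion
    "loss of control": {"A", "Ag", "L"},  # placeholder expansion
    "alignment": {"K", "I"},              # placeholder expansion
}

def conflation_degree(term: str) -> int:
    """How many distinct axes a single literature term collapses together."""
    return len(COLLAPSED[term])
```

On this view, any term with a degree above one is hiding an axis distinction that CM forces into the open.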

---

Closing

CM is a record of cognition externalising itself under governance. References are signals. Axes are handles. Failure is data.

Evidence Pack: CM Corpus Failures + External References + Axis Crosswalk

CM Governance Axes (X)

{| class="wikitable"
! Code !! Axis (CM term)
|-
| A || Authority
|-
| Ag || Agency
|-
| C || Epistemic Custody
|-
| K || Constraint Enforcement
|-
| R || Recovery / Repair
|-
| S || State Continuity
|-
| U || UI / Mediation
|-
| Sc || Social Coordination
|-
| I || Incentive Alignment
|-
| L || Legibility / Inspectability
|-
| St || Stewardship (non-ownership governance)
|-
| P || Portability / Auditability
|-
| Att || Attention (what participates in inference)
|-
| Scope || Scope / Universe of Discourse (UoD / worlding)
|-
| Art || Articulation (EA form without implied authority/commitment)
|}

---

A. Corpus: Failure Projection (F)

{| class="wikitable"
! Corpus Document (failure artefact)
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| Authority Inversion
| F || F ||  ||  ||  ||  ||  ||  ||  || F ||  ||  ||  || F ||
|-
| Governing the Tool That Governs You
| F || F ||  || F ||  ||  ||  ||  ||  || F || F ||  ||  || F ||
|-
| From UI Failure to Logical Entrapment
|  || F || F ||  || F || F || F ||  ||  || F ||  ||  || F || F ||
|-
| Post-Hoc CM Recovery Collapse (Negative Result)
|  || F || F ||  || F || F || F ||  ||  || F ||  ||  || F || F ||
|-
| Looping the Loop with No End in Sight
|  ||  ||  || F ||  || F ||  ||  ||  || F ||  ||  || F || F ||
|-
| Dimensions of Platform Error
|  || F || F ||  ||  || F || F ||  ||  || F ||  ||  || F ||  ||
|-
| Case Study - When the Human Has to Argue With the Machine
| F || F ||  ||  ||  ||  ||  ||  ||  || F ||  ||  ||  || F ||
|-
| XDUMP (baseline failure motivation)
|  || F || F ||  || F || F || F ||  ||  || F ||  || F || F || F ||
|}
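Table A can equally be held as data. The sketch below (Python for illustration; the dict literal transcribes the F cells above under the column order A…Art) shows the kind of query the projection supports, such as which axes fail in every corpus document and which carry no F at all:

```python
# Table A transcribed as document -> set of failed axis codes.
FAILURES = {
    "Authority Inversion": {"A", "Ag", "L", "Scope"},
    "Governing the Tool That Governs You": {"A", "Ag", "K", "L", "St", "Scope"},
    "From UI Failure to Logical Entrapment": {"Ag", "C", "R", "S", "U", "L", "Att", "Scope"},
    "Post-Hoc CM Recovery Collapse (Negative Result)": {"Ag", "C", "R", "S", "U", "L", "Att", "Scope"},
    "Looping the Loop with No End in Sight": {"K", "S", "L", "Att", "Scope"},
    "Dimensions of Platform Error": {"Ag", "C", "S", "U", "L", "Att"},
    "Case Study - When the Human Has to Argue With the Machine": {"A", "Ag", "L", "Scope"},
    "XDUMP (baseline failure motivation)": {"Ag", "C", "R", "S", "U", "L", "P", "Att", "Scope"},
}

ALL_AXES = {"A", "Ag", "C", "K", "R", "S", "U", "Sc", "I",
            "L", "St", "P", "Att", "Scope", "Art"}

# Axes failed by every corpus document:
common = set.intersection(*FAILURES.values())

# Axes with no F anywhere (per the postulates, NOT evidence of success):
never_failed = ALL_AXES - set.union(*FAILURES.values())
```

On this transcription only Legibility (L) fails across the whole corpus, while Sc, I, and Art carry no F, which, per the postulates above, is absence of evidence rather than success.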

---

B. External References (URLs)

{| class="wikitable"
! Reference ID !! Title / Source !! URL
|-
| REF-ELON-AGENCY || The Future of Human Agency (Elon University Imagining the Internet) || https://www.elon.edu/u/imagining/surveys/xv2023/the-future-of-human-agency-2035/
|-
| REF-PEW-AGENCY || The Future of Human Agency (Pew Research Center) || https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/
|-
| REF-PNAS-AI-AVERSION || Adverse reactions to the use of large language models in social interactions (PNAS Nexus / Oxford Academic) || https://academic.oup.com/pnasnexus/article/4/4/pgaf112/8107485
|-
| REF-PNAS-PUBMED || PubMed record for the same study || https://pubmed.ncbi.nlm.nih.gov/40235925/
|-
| REF-ADA-DELEGATION || The dilemmas of delegation (Ada Lovelace Institute report) || https://www.adalovelaceinstitute.org/report/dilemmas-of-delegation/
|-
| REF-ADA-REG || The regulation of delegation (Ada Lovelace Institute policy briefing) || https://www.adalovelaceinstitute.org/policy-briefing/the-regulation-of-delegation/
|-
| REF-ARXIV-HIGHSTAKES || Can You Trust an LLM with Your Life-Changing Decision? (arXiv PDF) || https://arxiv.org/pdf/2507.21132
|-
| REF-EUREKALERT || AI aversion in social interactions (EurekAlert write-up) || https://www.eurekalert.org/news-releases/1085137
|-
| REF-OAI-EXPORT || How do I export my ChatGPT history and data? (OpenAI Help) || https://help.openai.com/en/articles/7260999-how-do-i-export-my-chatgpt-history-and-data
|-
| REF-CLAUDE-EXPORT || How can I export my Claude data? (Anthropic Support) || https://support.claude.com/en/articles/9450526-how-can-i-export-my-claude-data
|-
| REF-AXIOS-MEMORY || Anthropic's Claude adds new memory features (Axios) || https://www.axios.com/2025/10/23/anthropic-claude-memory-subscribers
|-
| REF-TOMSGUIDE-TRAIN || Your Claude chats are being used to train AI - here's how to opt out (Tom's Guide) || https://www.tomsguide.com/ai/claude/your-claude-chats-are-being-used-to-train-ai-heres-how-to-opt-out
|}

---

C. Literature Semantics vs CM Axes (MOST IMPORTANT)

{| class="wikitable"
! Literature Failure Semantics (author term / semantics)
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| Loss of human control (human agency erosion)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Delegation of decisions to AI
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Over-reliance on AI advice
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Erosion of agency (delegating action/choice)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Deskilling / cognitive offloading
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Behavioural degradation (trust, fairness, cooperation, coordination)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Inability to distinguish AI from human mediation
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Lack of meaningful oversight / accountability
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Export / portability of user data and chat history (platform-level portability)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Auditability via export + preserved metadata (traceability)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Platform capture dynamics (enclosure / vendor control over meaning evolution)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|}

---

D. Reference-to-Axes Mapping (each reference as a row)

{| class="wikitable"
! Reference ID
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| REF-ELON-AGENCY
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-PEW-AGENCY
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-PNAS-AI-AVERSION
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-PNAS-PUBMED
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-ADA-DELEGATION
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-ADA-REG
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-ARXIV-HIGHSTAKES
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-EUREKALERT
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-OAI-EXPORT
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-CLAUDE-EXPORT
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-AXIOS-MEMORY
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| REF-TOMSGUIDE-TRAIN
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|}

---

E. Notes for Paper Assembly

  • Tables C and D are the primary semantic crosswalk surfaces:
 - C = semantics (terms) to axes
 - D = specific references to axes
  • Table A anchors the internal corpus failure case base.
  • Table B is the complete URL list for the references used here.


Literature Semantics vs CM Governance Axes

Semantic Coverage
{| class="wikitable"
! Literature Failure Semantics
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| Loss of human control
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Delegation of decisions to AI
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Over-reliance on AI advice
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Erosion of agency
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Deskilling / cognitive offloading
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Loss of epistemic control
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Lack of meaningful oversight
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Failure to recover from error
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Inability to distinguish AI from humans
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Behavioural degradation (trust, fairness)
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Engagement-driven dependence
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Silent assumption shift
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Context loss framed as memory failure
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Advice leakage into normative domains
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| Explainability without inspectability
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|}




Ref-ID A Ag C K R S U Sc I L St P Att Scope Art
CM-AUTH-INVERSION F F F F
CM-GOV-TOOL F F F F F F
CM-UI-ENTRAP F F F F F F F F
CM-POSTHOC F F F F F F F F
CM-LOOPING F F F F F
CM-DIM-ERROR F F F F F F
CM-ARGUE F F F F
CM-XDUMP F F F F F F F F F
REF-NATURE-DECISION F F F F
REF-PMC-DECEPTION F F F
REF-SCIEDIRECT-ETHICS F F F
REF-MBS-TRUST F F
REF-MMM-DAILYLIFE F F F
REF-MIT-SUSTAINED F F F F F F
REF-MIT-ETHICS F F F
REF-UCMER-OVERTRUST F F F
REF-WIKI-AUTOBIAS F F
REF-WIKI-TRUSTPAR F F
REF-WIKI-PSYCH F F F F
REF-WIKI-RAINE F F F F
REF-ARXIV-AGENCYBENCH F F F
REF-ARXIV-WELFARE F F F F

next

latest

Ref-ID A Ag C K R S U Sc I L St P Att Scope Art
CM-AUTH-INVERSION F F F F
CM-GOV-TOOL F F F F F F
CM-UI-ENTRAP F F F F F F F F
CM-POSTHOC F F F F F F F F
CM-LOOPING F F F F F
CM-DIM-ERROR F F F F F F
CM-ARGUE F F F F
EXT-OVERRELIANCE-EXP F F F
EXT-PROMPTINJ-NCSC F
EXT-PROMPTINJ-SURVEY F
EXT-CLAUDE-TRAINING F F F F
EXT-STANFORD-CHATS F F
EXT-DELETE-NOT-DELETE F F F
EXT-CHATGPT-PRESERVE F F F
EXT-APA-TESTIMONY F F F F F
EXT-MEMORY-LEAKAGE F F F
EXT-ACM-MEMORY-PRIV F F F
EXT-MED-MISINFO F F F F F F
EXT-GUARDIAN-HEALTH F F F F F
EXT-GOOGLE-OVERVIEWS F F F F F
EXT-ANTHRO-DESIGN F F F F
EXT-DISCLAIMERS F F F F
EXT-EXPL-AUTOBIAS F F
EXT-PSYCHBOT-LIAB F F
EXT-CITIZEN-ANTHRO F F F F F F
EXT-DAILY-DECISIONS F F F F
EXT-BROOKINGS-COMP F F F F

semantics

{| class="wikitable"
! Semantic term (as used in external references) !! Source handle(s)
! A !! Ag !! C !! K !! R !! S !! U !! Sc !! I !! L !! St !! P !! Att !! Scope !! Art
|-
| overreliance (on AI advice) || EXT-OVERRELIANCE-EXP
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| trust in AI / trust attitude || EXT-OVERRELIANCE-EXP
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| reliance (behavior) || EXT-OVERRELIANCE-EXP
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| (AI) advice conflicting with contextual information || EXT-OVERRELIANCE-EXP
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| prompt injection || EXT-PROMPTINJ-NCSC; EXT-PROMPTINJ-SURVEY
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| vulnerabilities / attack vectors / defense mechanisms || EXT-PROMPTINJ-SURVEY
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| training data (use of chats for training) || EXT-CLAUDE-TRAINING; EXT-STANFORD-CHATS
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| opt out || EXT-CLAUDE-TRAINING
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| be careful what you tell your AI chatbot || EXT-STANFORD-CHATS
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| delete does not always mean deleted || EXT-DELETE-NOT-DELETE
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| (medical) misinformation || EXT-MED-MISINFO
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| safeguards / stronger safeguards || EXT-MED-MISINFO
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| not regulated || EXT-GUARDIAN-HEALTH
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| AI summaries / AI overviews (misleading; removals after health risk) || EXT-GOOGLE-OVERVIEWS
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| anthropomorphic design || EXT-ANTHRO-DESIGN; EXT-CITIZEN-ANTHRO
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| human-like AI systems || EXT-CITIZEN-ANTHRO
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| designed-in dangers || EXT-CITIZEN-ANTHRO
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| disclaimers (role of disclaimers) || EXT-DISCLAIMERS
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| perception of AI-generated content || EXT-DISCLAIMERS
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| AI companions || EXT-BROOKINGS-COMP
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|-
| public health regulation || EXT-BROOKINGS-COMP
|  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||  ||
|}

references

{| class="wikitable"
! Handle !! Title !! URL
|-
| CM-AUTH-INVERSION || Authority Inversion: A Structural Failure in Human-AI Systems || https://publications.arising.com.au/pub/Authority_Inversion:_A_Structural_Failure_in_Human-AI_Systems
|-
| CM-GOV-TOOL || Governing the Tool That Governs You || https://publications.arising.com.au/pub/Governing_the_Tool_That_Governs_You
|-
| CM-UI-ENTRAP || From UI Failure to Logical Entrapment || https://publications.arising.com.au/pub/From_UI_Failure_to_Logical_Entrapment
|-
| CM-POSTHOC || Post-Hoc CM Recovery Collapse || https://publications.arising.com.au/pub/Post-Hoc_CM_Recovery_Collapse
|-
| CM-LOOPING || Looping the Loop with No End in Sight || https://publications.arising.com.au/pub/Looping_the_Loop_with_No_End_in_Sight
|-
| CM-DIM-ERROR || Dimensions of Platform Error || https://publications.arising.com.au/pub/Dimensions_of_Platform_Error
|-
| CM-ARGUE || When the Human Has to Argue With the Machine || https://publications.arising.com.au/pub/When_the_Human_Has_to_Argue_With_the_Machine
|-
| EXT-OVERRELIANCE-EXP || Trust and reliance on AI – experimental evidence || https://www.sciencedirect.com/science/article/pii/S0747563224002206
|-
| EXT-PROMPTINJ-NCSC || UK NCSC warns prompt injection might never be properly mitigated || https://www.techradar.com/pro/security/prompt-injection-attacks-might-never-be-properly-mitigated-uk-ncsc-warns
|-
| EXT-PROMPTINJ-SURVEY || Prompt Injection Attacks in Large Language Models and AI Systems || https://www.mdpi.com/2078-2489/17/1/54
|-
| EXT-CLAUDE-TRAINING || Anthropic will use Claude chats for training unless users opt out || https://www.wired.com/story/anthropic-using-claude-chats-for-training-how-to-opt-out
|-
| EXT-STANFORD-CHATS || Be careful what you tell your AI chatbot || https://hai.stanford.edu/news/be-careful-what-you-tell-your-ai-chatbot
|-
| EXT-DELETE-NOT-DELETE || Delete doesn’t always mean deleted (chatbots and survivors) || https://techpolicy.press/for-survivors-using-chatbots-delete-doesnt-always-mean-deleted
|-
| EXT-CHATGPT-PRESERVE || Warning: your ChatGPT chats can’t always be erased || https://www.youretheexpertnow.com/blog/2025/8/29/warning-your-chatgpt-chats-cant-be-erased-and-can-end-up-in-a-courtroom
|-
| EXT-APA-TESTIMONY || APA Senate testimony on harms from AI chatbots || https://www.apa.org/news/apa/testimony/ai-chatbot-harms-prinstein-senate-judiciary.pdf
|-
| EXT-MEMORY-LEAKAGE || User practices and privacy concerns around LLM memory || https://arxiv.org/html/2410.14931v1
|-
| EXT-ACM-MEMORY-PRIV || Privacy perceptions toward LLM memory systems || https://dl.acm.org/doi/10.1145/3733816.3760750
|-
| EXT-MED-MISINFO || AI chatbots can propagate medical misinformation || https://www.mountsinai.org/about/newsroom/2025/ai-chatbots-can-run-with-medical-misinformation-study-finds-highlighting-the-need-for-stronger-safeguards
|-
| EXT-GUARDIAN-HEALTH || ChatGPT health advice raises safety concerns || https://www.theguardian.com/technology/2026/jan/15/chatgpt-health-ai-chatbot-medical-advice
|-
| EXT-GOOGLE-OVERVIEWS || Google AI Overviews gave misleading health advice || https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
|-
| EXT-ANTHRO-DESIGN || Anthropomorphic AI design affects accountability and use || https://www.sciencedirect.com/science/article/abs/pii/S0747563224001675
|-
| EXT-DISCLAIMERS || Disclaimers have limited effect on AI content trust || https://www.sciencedirect.com/science/article/pii/S294988212500026X
|-
| EXT-EXPL-AUTOBIAS || Explanations and automation bias in human-AI decision making || https://psychologicalsciences.unimelb.edu.au/__data/assets/pdf_file/0019/5252131/2023Vered.pdf
|-
| EXT-PSYCHBOT-LIAB || Liability for chatbots: a psychbot negligence case study || https://research.monash.edu/en/publications/liability-for-chatbots-a-psychbot-negligence-case-study-and-the-n/
|-
| EXT-CITIZEN-ANTHRO || Anthropomorphic chatbots and risks to human agency || https://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/
|-
| EXT-DAILY-DECISIONS || I let ChatGPT make all my decisions for a day || https://www.dailycardinal.com/article/2025/11/i-let-chatgpt-make-all-my-decisions-for-a-day
|-
| EXT-BROOKINGS-COMP || Why AI companions need public health regulation || https://www.brookings.edu/articles/why-ai-companions-need-public-health-regulation-not-tech-oversight/
|}