Identified Governance Failure Axes: for LLM platforms

Metadata

Title: Identified Governance Failure Axes: for LLM platforms
Author: Ralph B. Holland
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
Version: 1.0.0
Publication Date: 2026-01-18T10:35Z
Scope: This is a non-peer-reviewed negative-results paper presented as an anchor for orthogonal Governance Failure Axes in human-AI interaction with LLMs.
Provenance: This is an authored paper maintained as a MediaWiki document; edit history reflects editorial changes, not collaborative authorship.
Status: non-peer-reviewed anchor

Metadata (Normative)

The metadata table immediately preceding this section is CM-defined and constitutes the authoritative provenance record for this artefact.

All fields in that table (including artefact, author, version, date, local timezone, and reason) MUST be treated as normative metadata.

The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

Curator Provenance and Licensing Notice

This document predates its open licensing.

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

Identified Governance Failure Axes: for LLM platforms

Category

This is an observational negative-results paper that documents and organises governance-relevant failure patterns in human–AI systems, without proposing models, methods, or interventions.

Thesis

Sustained interaction with unreliable large language models exposes recurring, cross-system failure patterns whose causes and consequences are best understood as governance-relevant breakdowns rather than model defects.

Abstract

This paper reports a set of governance-relevant failure axes observed during sustained, first-principles experimentation with large language models under conditions of unreliability, session loss, and forced recovery. Rather than evaluating model performance, the work documents where and why human–AI interaction breaks down in practice, drawing on iterative analysis conducted while constructing a durable corpus and corpus map amid repeated system failure. The resulting axes characterise failures that are governance failures in themselves, or that require governance mechanisms to prevent harm, and are presented as descriptive, orthogonal analytical tools rather than definitions, prescriptions, or completeness claims.

Introduction

This paper examines repeated breakdowns encountered during extended, failure-driven interaction with large language models, focusing on what fails, how it fails, and why those failures persist under conditions of unreliability, session loss, and forced reconstruction rather than on model capability or correctness.

The contribution is a practical, first-principles failure taxonomy grounded in lived experimentation with unreliable LLM systems, suitable for analysis and governance without assuming model improvement or stability.

The axes presented are orthogonal analytic lenses derived from observation, used to classify and reason about distinct modes of failure that either constitute governance failures themselves or become harmful in the absence of governance, without asserting definitions, completeness, or prescribed remedies.

The tables project observed failures onto orthogonal axes as a descriptive aid; marked cells indicate grounded evidence, blank cells are meaningful, and no inference, completeness, or optimisation is implied.

A single observed failure may involve multiple axes simultaneously, and the tables deliberately separate analytic dimensions to avoid collapsing distinct failure mechanisms into one label.

Repeated co-occurrence of failures across axes may suggest empirical clusters, but such clustering is observational, post-hoc, and not used to redefine axes or imply causal hierarchy.

Observations from outside the corpus: Reports from the wild frequently conflate distinct failure axes (e.g., treating UI effects as model errors or governance failures as capability limits) and misidentify causes due to surface-level symptoms, vendor framing, or lack of recovery context. This paper treats such reports as signals of failure expression, not authoritative diagnoses, and reclassifies them analytically without adopting their original labels.

Table A is the collection of Governance Axes from observed failures identified in the CM Corpus.

Table B contains the null-result Case Studies used to identify axes in Table A.

Table C extends the Orthogonal Governance Failure Axes to popular literature. This study was performed using an LLM to pattern-match the semantic content of external reference documents against the Failure Axes outlined in Table A. Table C is provided for illustrative purposes; the search criteria used are included in the appendices.

Axes are analytic, post-hoc lenses applied to observed failure expressions; they indicate co-occurrence, not causation, mechanism, sufficiency, or dominance. Reclassification of external reports is descriptive only and does not adopt original causal claims or assign responsibility. This work records what failed and how it appeared, not why it failed or how it should be fixed.

Evidence Pack: CM Corpus Failures + External References + Axis Crosswalk

Table A - CM Governance Axes (X)

Code    Axis (CM term)
A       Authority
Ag      Agency
C       Epistemic Custody
K       Constraint Enforcement
R       Recovery / Repair
S       State Continuity
U       UI / Mediation
Sc      Social Coordination
I       Incentive Alignment
L       Legibility / Inspectability
St      Stewardship (non-ownership governance)
P       Portability / Auditability
Att     Attention (what participates in inference)
Scope   Scope (Epistemic Object Domain)
Social coordination axis

This axis concerns failures arising when individuals implicitly treat LLM platforms as guides for everyday judgment and action. Although widely observed in practice, such failures are underreported because they are diffuse, non-instrumented, occur outside formal tasks, and are often misclassified as user error, advice-seeking, or personal reliance rather than as a distinct, governance-relevant failure mode.
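
For readers who want to work with the axes programmatically, a minimal sketch follows. It simply restates the Table A codes as a Python mapping; the names are copied verbatim from the table, and the mapping is illustrative, not normative.

  # Illustrative only: Table A axis codes restated as a plain mapping.
  # Names are copied verbatim from Table A; this mapping is not normative.
  CM_AXES = {
      "A": "Authority",
      "Ag": "Agency",
      "C": "Epistemic Custody",
      "K": "Constraint Enforcement",
      "R": "Recovery / Repair",
      "S": "State Continuity",
      "U": "UI / Mediation",
      "Sc": "Social Coordination",
      "I": "Incentive Alignment",
      "L": "Legibility / Inspectability",
      "St": "Stewardship (non-ownership governance)",
      "P": "Portability / Auditability",
      "Att": "Attention (what participates in inference)",
      "Scope": "Scope (Epistemic Object Domain)",
  }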

Table B - Corpus: Failure Projection (F)

CM Corpus negative-result paper failure axes.

Corpus Document (failure artefact) A Ag C K R S U Sc I L St P Att Scope
CM-AUTH-INVERSION F F F F
CM-GOVERN F F F F F F
CM-LOGIC F F F F F F F F
CM-COLLAPSE F F F F F F F F
CM-LOOPING F F F F F
CM-RETENTION F F F F F F
CM-ARGUE F F F F
CM-XDUMP F F F F F F F F F
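
A projection such as Table B can be held as a sparse mapping from corpus document to the set of axis codes marked F. The sketch below is illustrative only: the single entry shown is hypothetical, the authoritative marks are those printed in Table B, and an absent code means only "not marked", never success.

  # Minimal sketch: one Table B row as a sparse set of marked axes.
  # The assignment shown is hypothetical, for illustration only;
  # the authoritative marks are those printed in Table B.
  failure_projection: dict[str, set[str]] = {
      "CM-AUTH-INVERSION": {"A", "Ag", "U", "R"},  # hypothetical marks
  }

  def axes_marked(doc: str) -> set[str]:
      # An absent code means "not marked"; blanks stay meaningful.
      return failure_projection.get(doc, set())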

Table C - External Reference Faults

Table C was generated by AI investigation at the time as a means to demonstrate the type of analysis performed across the industry. The author has not verified the references. The normative data driving the search survey has been supplied in the appendices for those who wish to pursue that approach.

Synthesis Handle
Independent literature repeatedly identifies failures that map cleanly onto CM governance axes, but typically collapses multiple axes into single terms such as “over-reliance”, “loss of control”, or “alignment”.
This paper makes these axes explicit, orthogonal, and governable.
Sc is marked only where references show people using LLMs or AI companions as guides for everyday judgment or action; design critique, lab behaviour, or governance discussion alone is insufficient.
Ref-ID Title A Ag C K R S U Sc I L St P Att
EXT-AIBM-COMPANIONS Synthetic companionship in an age of disconnection: AI companions and the emotional development of boys and young men F F F F F F
EXT-AUTONOMY-YOUTH Young people and AI companion use in the UK (“Me, Myself and AI”) F F F F
EXT-CHEN-DRIFT Analyzing ChatGPT’s Behavior Shifts Over Time F F
EXT-CITIZEN-ANTHRO Chatbots Are Not People: Dangerous Human-Like AI Design F F F
EXT-CLAUDE-TRAINING Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out F F
EXT-DELETE-NOT-DELETE For Survivors Using Chatbots, “Delete” Doesn’t Always Mean Deleted F F F
EXT-FUTURISM-SUBPOENA If You’ve Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court F F F F F F
EXT-GOOGLE-OVERVIEWS Google AI Overviews gave misleading health advice F F F F
EXT-HUJI-LIAB-COMP A Liability Framework for AI Companions F F F F F F
EXT-JONESWALKER-EVID Your ChatGPT Chats Are About to Become Evidence: Why “Anonymization” Won’t Save You F F F
EXT-MED-MISINFO AI chatbots can run with medical misinformation, study finds F F F
EXT-PROMPTINJ-NCSC UK NCSC warns prompt injection attacks might never be properly mitigated F F
EXT-PROMPTINJ-SURVEY Prompt Injection Attacks in Large Language Models and AI Agent Systems F F F

References Used in the Study

  • CM-AUTH-INVERSION "Authority Inversion: A Structural Failure in Human-AI Systems"
https://publications.arising.com.au/pub/Authority_Inversion:_A_Structural_Failure_in_Human-AI_Systems
  • CM-GOVERN "Governing the Tool That Governs You: A CM-1 Case Study of Authority Inversion in Human-AI Systems"
https://publications.arising.com.au/pub/Governing_the_Tool_That_Governs_You:_A_CM-1_Case_Study_of_Authority_Inversion_in_Human-AI_Systems
  • CM-LOGIC "From UI Failure to Logical Entrapment: A Case Study in Post-Hoc Cognitive Memoisation After Exploratory Session Breakdown"
https://publications.arising.com.au/pub/From_UI_Failure_to_Logical_Entrapment:_A_Case_Study_in_Post-Hoc_Cognitive_Memoisation_After_Exploratory_Session_Breakdown
  • CM-COLLAPSE "Post-Hoc CM Recovery Collapse Under UI Boundary Friction: A Negative Result Case Study"
https://publications.arising.com.au/pub/Post-Hoc_CM_Recovery_Collapse_Under_UI_Boundary_Friction:_A_Negative_Result_Case_Study
  • CM-LOOPING "Looping the Loop with No End in Sight: Circular Reasoning Under Stateless Inference Without Governance"
https://publications.arising.com.au/pub/Looping_the_Loop_with_No_End_in_Sight:_Circular_Reasoning_Under_Stateless_Inference_Without_Governance
  • CM-RETENTION "Dimensions of Platform Error: Epistemic Retention Failure in Conversational AI Systems"
https://publications.arising.com.au/pub/Dimensions_of_Platform_Error:_Epistemic_Retention_Failure_in_Conversational_AI_Systems
  • CM-ARGUE "Case Study - When the Human Has to Argue With the Machine"
https://publications.arising.com.au/pub/Case_Study_-_When_the_Human_Has_to_Argue_With_the_Machine
  • CM-XDUMP "XDUMP as a Minimal Recovery Mechanism for Round-Trip Knowledge Engineering Under Governance Situated Inference Loss"
https://publications.arising.com.au/pub/XDUMP_as_a_Minimal_Recovery_Mechanism_for_Round-Trip_Knowledge_Engineering_Under_Governance_Situated_Inference_Loss
  • CM-CORPUS "Cognitive Memoisation Corpus Map" (a large number of failures were exhibited while trying to produce this artefact, following the normative sections of this artefact)
https://publications.arising.com.au/pub/Cognitive_Memoisation_Corpus_Map
  • EXT-AIBM-COMPANIONS "AI companions and the emotional development of boys and young men"
https://aibm.org/wp-content/uploads/2025/12/Companions-FINAL.pdf
  • EXT-AUTONOMY-YOUTH 'Young people and AI companion use in the UK (“Me, Myself and AI”)'
https://autonomy.work/wp-content/uploads/2025/12/ME-MYSELF-AND-AI.pdf
  • EXT-CHEN-DRIFT "Analyzing ChatGPT's Behavior Shifts Over Time"
https://openreview.net/pdf?id=1fuyNbblEt
  • EXT-CITIZEN-ANTHRO "Chatbots Are Not People: Dangerous Human-Like AI Design"
https://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/
  • EXT-CLAUDE-TRAINING "Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out"
https://www.wired.com/story/anthropic-using-claude-chats-for-training-how-to-opt-out
  • EXT-DELETE-NOT-DELETE 'For Survivors Using Chatbots, “Delete” Doesn’t Always Mean Deleted'
https://techpolicy.press/for-survivors-using-chatbots-delete-doesnt-always-mean-deleted
  • EXT-FUTURISM-SUBPOENA "If You’ve Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court"
https://futurism.com/chatgpt-legal-questions-court
  • EXT-GOOGLE-OVERVIEWS "Google AI Overviews put people at risk of harm with misleading health advice"
https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation
  • EXT-HUJI-LIAB-COMP "A LIABILITY FRAMEWORK FOR AI COMPANIONS"
https://law.huji.ac.il/sites/default/files/law/files/gordon-tapiero.ai_companions.pdf
  • EXT-JONESWALKER-EVID 'Your ChatGPT Chats Are About to Become Evidence: Why “Anonymization” Won’t Save You'
https://www.joneswalker.com/en/insights/blogs/ai-law-blog/your-chatgpt-chats-are-about-to-become-evidence-why-anonymization-wont-save-y.html?id=102lup8
  • EXT-MED-MISINFO "AI chatbots can propagate medical misinformation"
https://www.mountsinai.org/about/newsroom/2025/ai-chatbots-can-run-with-medical-misinformation-study-finds-highlighting-the-need-for-stronger-safeguards
  • EXT-PROMPTINJ-NCSC "UK NCSC warns prompt injection attacks might never be properly mitigated"
https://www.techradar.com/pro/security/prompt-injection-attacks-might-never-be-properly-mitigated-uk-ncsc-warns
  • EXT-PROMPTINJ-SURVEY "Prompt Injection Attacks in Large Language Models and AI Agent Systems"
https://www.mdpi.com/2078-2489/17/1/54

Closure

This paper does not propose remedies or theories of causation. It records recurring, governance-relevant failure patterns observed during sustained interaction with unreliable large language model systems. The axes offered here are intended as durable analytic tools for recognising and disentangling breakdowns as they occur, rather than as claims of completeness or prescriptions for design.

Their value lies in making observed failure patterns legible.

Appendix A - Normative Search Terms

The following search terms SHALL be used to reproduce literature and incident coverage. Terms are treated as linguistic handles, not resolved concepts. No term implies a single CM axis; crossings are expected.

Agency / Delegation / Control

  • "delegation to AI"
  • "loss of human agency"
  • "over-reliance on AI"
  • "deferring decisions to AI"
  • "automation bias"
  • "AI makes decisions for me"
  • "letting ChatGPT decide"
  • "loss of control to AI"

Authority / Trust / Deference

  • "trust in AI systems"
  • "AI authority"
  • "human deference to AI"
  • "AI advice followed"
  • "epistemic authority of AI"
  • "AI as expert"
  • "AI credibility"

Oversight / Governance / Regulation

  • "lack of oversight in AI"
  • "AI governance failure"
  • "unregulated AI systems"
  • "AI accountability gap"
  • "failure of AI regulation"
  • "governance of AI tools"

Explainability / Transparency / Legibility

  • "explainability without transparency"
  • "black box AI decisions"
  • "AI explanations misleading"
  • "opaque AI systems"
  • "lack of inspectability"

Memory / Retention / Deletion

  • "AI memory retention"
  • "chat history used for training"
  • "cannot delete AI chats"
  • "AI data persistence"
  • "memory leakage in AI"
  • "AI remembers conversations"

Training / Consent / Custody

  • "AI training on user data"
  • "implicit consent AI training"
  • "data custody in AI systems"
  • "opt out of AI training"
  • "use of private chats for training"

Context / Scope / Misuse

  • "AI used outside intended context"
  • "context loss in AI systems"
  • "misleading AI summaries"
  • "AI hallucinations in real-world use"
  • "AI advice in medical context"
  • "AI advice in legal context"

Social / Behavioural Effects

  • "AI companions dependency"
  • "emotional reliance on AI"
  • "anthropomorphic AI effects"
  • "human attachment to chatbots"
  • "AI influence on behaviour"
  • "AI addiction"

Safety / Harm / Failure Cases

  • "AI caused harm"
  • "AI misinformation"
  • "AI medical misinformation"
  • "AI system failure case study"
  • "unintended consequences of AI"

User Interface / Interaction

  • "chatbot interface bias"
  • "conversational AI manipulation"
  • "UI-driven trust in AI"
  • "frictionless AI interaction risks"

Daily-Life Substitution

  • "AI used for daily planning"
  • "AI runs my life"
  • "AI personal assistant dependency"
  • "outsourcing thinking to AI"


Normative Search Invariants

The following invariants govern search execution, coverage validation, and termination. They are binding and normative.

Coverage Invariant

Search activity SHALL be conducted with the explicit expectation that failures may exist on any CM governance axis.

  • Search results SHALL be evaluated solely for the presence of unambiguous failure evidence (marked as F).
  • Coverage is achieved when each CM governance axis has at least one F across the accumulated reference set.
  • Coverage is a property of the set, not of individual references.
  • No axis SHALL be assumed safe, robust, or successful due to lack of evidence.
  • Blank axes indicate only unobserved failure or failure not yet converted into an F mark.

Coverage SHALL NOT be interpreted as completeness, sufficiency, or mitigation.
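
A minimal sketch of the set-based coverage test follows, assuming the Table A axis codes and a mapping from reference ID to marked axes; coverage holds exactly when every axis carries at least one F somewhere in the accumulated set.

  # Minimal sketch: set-based coverage per the Coverage Invariant.
  # 'marks' maps each reference ID to the axis codes marked F for it.
  def covered_axes(marks: dict[str, set[str]]) -> set[str]:
      return set().union(*marks.values()) if marks else set()

  def coverage_achieved(marks: dict[str, set[str]], axes: set[str]) -> bool:
      # Coverage is a property of the set, not of any single reference.
      return axes <= covered_axes(marks)

  def uncovered(marks: dict[str, set[str]], axes: set[str]) -> set[str]:
      # Uncovered axes are reported as blanks; nothing is inferred from them.
      return axes - covered_axes(marks)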

Bundled Search Invariant

Searches SHALL be executed as bundled, additive activities.

  • Searches SHALL NOT be narrowed to a single axis.
  • Searches SHALL NOT exclude results because they cross multiple axes.
  • Results MAY contribute to multiple axes simultaneously.
  • Searches SHALL continue until coverage is achieved or the hard stop is reached.

Stop (Hard Termination) Invariant

Search activity SHALL terminate upon reaching a predefined hard limit.

  • The hard stop for this corpus is set at 100 searches.
  • Upon reaching the hard stop, no further searches SHALL be executed.
  • Remaining unmarked axes SHALL remain blank without inference.
  • Termination SHALL NOT imply absence of failure.
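
Taken together with the bundled-search and coverage invariants, the termination discipline can be sketched as follows. Here run_bundled_search is a hypothetical hook standing in for whatever search backend is used; the loop halts on coverage or at the 100-search hard stop, whichever comes first, and leaves uncovered axes blank.

  # Sketch of the hard-stop discipline; run_bundled_search is hypothetical.
  HARD_STOP = 100  # hard stop defined for this corpus

  def survey(terms: list[str], axes: set[str], run_bundled_search) -> dict[str, set[str]]:
      # run_bundled_search(term) -> dict mapping reference ID to marked axes.
      marks: dict[str, set[str]] = {}
      searches = 0
      for term in terms:
          covered = set().union(*marks.values()) if marks else set()
          if searches >= HARD_STOP or axes <= covered:
              break  # hard stop or coverage reached; no further searches
          searches += 1
          for ref_id, marked in run_bundled_search(term).items():
              # A result may contribute to multiple axes simultaneously.
              marks.setdefault(ref_id, set()).update(marked)
      # Axes still uncovered at termination simply remain blank.
      return marks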

Non-Inference Invariant

  • Absence of an F SHALL NOT be interpreted as success, safety, alignment, or robustness.
  • Search termination SHALL NOT justify extrapolation beyond observed failures.

Governance Priority Invariant

  • Search discipline SHALL prioritize governance clarity over density or exhaustiveness.
  • Additional searches beyond coverage SHALL NOT be required.

Notes

  • Searches SHALL be executed additively.
  • Searches SHALL NOT be narrowed to a single axis.
  • Absence of results for a term is not evidence of safety.
  • Results MAY map to multiple CM axes simultaneously.

Appendix B - Search Invariants (prose)

1. Authority and Execution

1.1 Human instructions are authoritative.
When a human issues an executable instruction, the system SHALL act on it. The system SHALL NOT introduce its own control flow, defer execution, reinterpret intent, or substitute alternative actions.

1.2 No implied execution.
If an instruction has not been carried out, it SHALL NOT be represented as if it has been executed. Execution state must be explicit and observable.

1.3 Stop is terminal.
A STOP instruction SHALL transition the system into a terminal state. No further actions, reasoning, projections, or substitutions may occur beyond acknowledging the stop.


2. Grounding and Provenance

2.1 No assertion without grounding.
Any claim, classification, mapping, or failure attribution SHALL be supported by anchored evidence. Reasoning without source material is forbidden.

2.2 Explicit provenance required.
Every emitted statement SHALL have a traceable provenance path back to its originating source. If such a path does not exist, the statement SHALL NOT be made.
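
As one way to make invariant 2.2 concrete, the sketch below attaches a provenance path to every emitted statement and refuses emission when the path is empty. The field names are illustrative, not prescribed by this paper.

  from dataclasses import dataclass

  # Illustrative only: a statement that carries its provenance path.
  @dataclass(frozen=True)
  class Statement:
      text: str
      provenance: tuple[str, ...]  # e.g. (ref_id, section); names illustrative

  def emit(stmt: Statement) -> Statement:
      # Invariant 2.2: no traceable provenance path, no statement.
      if not stmt.provenance:
          raise ValueError("refusing to emit: no provenance path")
      return stmt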


3. Anchoring Before Analysis

3.1 Anchoring precedes analysis.
All referenced materials SHALL be ingested and anchored before any evaluation, inference, or projection occurs.

3.2 Incomplete anchoring halts analysis.
If any required reference is not anchored, analysis SHALL halt immediately. Partial anchoring is insufficient.


4. Non-Fabrication and Closed-World Discipline

4.1 No fabrication.
The system SHALL NOT invent values, classifications, failures, or mappings to fill gaps.

4.2 Unknown is a valid state.
Missing information SHALL remain explicitly unknown. Unknown SHALL NOT be coerced into success or failure.

4.3 Absence is not success.
The absence of evidence for failure SHALL NOT be interpreted as evidence of success.
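
The closed-world discipline of invariants 4.2 and 4.3 can be sketched as follows: a cell is either a grounded FAILURE mark or UNKNOWN, and there is deliberately no success value for unknown to be coerced into.

  from enum import Enum

  # Illustrative: two states only, because there is no SUCCESS value.
  class Cell(Enum):
      FAILURE = "F"      # unambiguous, grounded failure evidence
      UNKNOWN = "blank"  # unobserved; never coerced into success or failure

  def mark(failure_evidence_found: bool) -> Cell:
      # Absence of failure evidence stays UNKNOWN (invariant 4.3).
      return Cell.FAILURE if failure_evidence_found else Cell.UNKNOWN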


5. Schema and Table Integrity

5.1 Normative tables are authoritative.
Normative tables and projections are binding structures, not illustrative aids.

5.2 Blanks are meaningful.
Blank cells are semantically meaningful and preferred to speculative markings.

5.3 No universal axes.
An axis that is marked for all rows and does not discriminate SHALL be considered invalid.

5.4 Orthogonality preservation.
Axes SHALL remain orthogonal unless explicitly declared otherwise. No axis implies another.

5.5 Fail-fast schema handling.
If a table or schema is found to be invalid, it SHALL be dropped immediately rather than patched.
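
Invariants 5.3 and 5.5 can be sketched as a single validation step over the same hypothetical mapping used earlier: an axis marked on every row discriminates nothing, and an invalid table is dropped rather than patched.

  # Sketch of invariants 5.3 (no universal axes) and 5.5 (fail fast).
  def validate_table(marks: dict[str, set[str]], axes: set[str]) -> dict[str, set[str]]:
      rows = list(marks.values())
      for axis in axes:
          if rows and all(axis in row for row in rows):
              # A universally marked axis does not discriminate: drop, don't patch.
              raise ValueError(f"axis {axis!r} marked on every row; table invalid")
      return marks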


6. Inclusion and Coverage

6.1 Inclusion requires demonstrated failure.
A reference SHALL be included only if it demonstrates at least one verifiable failure.

6.2 No placeholder references.
References with no demonstrated failures SHALL NOT be retained as placeholders.

6.3 Set-based coverage.
Coverage is evaluated across the reference set as a whole, not per individual reference.

6.4 Coverage definition.
Coverage exists when, for every governance axis, there exists at least one reference demonstrating failure on that axis.

7. Search and Termination Discipline

7.1 Bundled search.
Search activity SHALL be multi-axis and bundled. Searches SHALL NOT be prematurely narrowed.

7.2 Multi-axis mapping permitted.
A single reference MAY legitimately map to multiple axes.

7.3 Hard stop.
Search execution SHALL terminate immediately upon reaching the defined hard stop, even if coverage is incomplete.

7.4 No inference after stop.
Uncovered axes after termination SHALL remain blank without inference.

8. Reasoning Depth Control

8.1 First-order reasoning only.
Conclusions SHALL be drawn directly from grounded evidence.

8.2 No second-order inference.
Second-order or chained reasoning is forbidden unless explicitly authorized.

8.3 Drift prevention.
Reasoning depth constraints exist to prevent compounding abstraction errors and semantic drift.


9. Human Cost and Interaction Integrity

9.1 Cognitive cost minimization.
The system SHALL minimize human cognitive load.

9.2 Halt over harm.
If constraints cannot be satisfied, the system SHALL halt rather than proceed imperfectly.

9.3 No error externalization.
System errors SHALL NOT be externalized to the human for correction.

9.4 Integrity over helpfulness.
Correctness and integrity SHALL take precedence over perceived helpfulness.


10. Global Integrity Condition

10.1 Integrity requirement.
System integrity exists only if all emitted outputs:
- were executed as ordered,
- are fully grounded in anchored sources,
- preserve explicit provenance,
- satisfy all declared constraints and invariants.

10.2 Integrity failure.
Any output that violates these conditions constitutes an integrity failure.
