CM Referenced Documents
Timeline Projection (from corpus2 manifest)
Flags: [A] = DOI section present, [M] = Category:Must-DOI-Anchor
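How this projection is generated is not spelled out in the listing itself; the following is a minimal sketch, assuming a JSON manifest whose entries carry timestamp, title, flags, and first-level refs fields. The file path, field names, and function names are illustrative assumptions, not corpus2's actual schema.

```python
# Minimal sketch (Python 3.9+) of how a timestamp-ordered projection like the
# one below could be derived from a corpus manifest. The manifest shape and all
# names here are assumptions for illustration; corpus2's real schema may differ.
import json
from dataclasses import dataclass, field

@dataclass
class Entry:
    timestamp: str                                    # e.g. "2026-01-24T06:34Z"
    title: str
    flags: list[str] = field(default_factory=list)    # "A" = DOI section present, "M" = Must-DOI-Anchor
    refs: list[str] = field(default_factory=list)     # first-level reference titles (flags kept in the string)

def load_entries(manifest_path: str) -> list[Entry]:
    """Read a hypothetical JSON manifest (an array of objects) into Entry records."""
    with open(manifest_path, encoding="utf-8") as fh:
        raw = json.load(fh)
    return [Entry(d["timestamp"], d["title"], d.get("flags", []), d.get("refs", [])) for d in raw]

def render_timeline(entries: list[Entry]) -> str:
    """Project entries onto a timestamp-ordered timeline in the format used below."""
    lines: list[str] = []
    for e in sorted(entries, key=lambda e: e.timestamp):   # ISO-8601 strings sort chronologically
        lines.append(e.timestamp)
        lines.append(e.title + "".join(f" [{f}]" for f in e.flags))
        if e.refs:
            lines.extend(f"- {r}" for r in e.refs)
        else:
            lines.append("- (no first-level refs)")
    return "\n".join(lines)
```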
2025-12-17T22:21Z
Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction [A]
- (no first-level refs)
2025-12-20T10:28Z
ChatGPT UI Boundary Friction as a Constraint on Round-Trip Knowledge Engineering [A]
- (no first-level refs)
2026-01-06T03:56Z
Cognitive Memoisation (CM-2) for Governing Knowledge in Human-AI Collaboration [A]
- (no first-level refs)
2026-01-09T12:40Z
Externalised Meaning: Making Knowledge Portable Without Ontologies, Vendors or Memory [M]
- Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction [A]
- Mechanical Extraction of Thought: Bootstrapping Epistemic Objects from Sequential Input under Cognitive Memoisation [M]
- First Self-Hosting Epistemic Capture Using Cognitive Memoisation (CM-2) [A]
- Integrity and Semantic Drift in Large Language Model Systems [A]
- CM-master
2026-01-11T08:27Z
Durability Without Authority: The Missing Governance Layer in Human-AI Collaboration [M]
2026-01-11T11:22Z
Authority Inversion: A Structural Failure in Human-AI Systems [M]
- Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction [A]
- Cognitive Memoisation: LLM Systems Requirements for Knowledge Round Trip Engineering
- Axes of Authority in Stateless Cognitive Systems: Authority Is Not Intelligence
- Context is Not Just a Window: Cognitive Memoisation as a Context Architecture for Human-AI Collaboration
- Externalised Meaning: Making Knowledge Portable Without Ontologies, Vendors or Memory [M]
- Cognitive Memoisation (CM-2) for Governing Knowledge in Human-AI Collaboration [A]
- Integrity and Semantic Drift in Large Language Model Systems [A]
- Governing the Tool That Governs You: A CM-1 Case Study of Authority Inversion in Human-AI Systems
- What Can Humans Trust LLM AI to Do? [A]
- Governance Failure Axes Taxonomy [A]
- Observed Model Stability: Evidence for Drift-Immune Embedded Governance
2026-01-20T06:08Z
First Self-Hosting Epistemic Capture Using Cognitive Memoisation (CM-2) [A]
- Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction [A]
- Cognitive Memoisation (CM-2) for Governing Knowledge in Human-AI Collaboration [A]
- Mechanical Extraction of Thought: Bootstrapping Epistemic Objects from Sequential Input under Cognitive Memoisation [M]
- Externalised Meaning: Making Knowledge Portable Without Ontologies, Vendors or Memory [M]
- Integrity and Semantic Drift in Large Language Model Systems [A]
- What Can Humans Trust LLM AI to Do? [A]
- Observed Model Stability: Evidence for Drift-Immune Embedded Governance
2026-01-20T09:58Z
Mechanical Extraction of Thought: Bootstrapping Epistemic Objects from Sequential Input under Cognitive Memoisation [M]
2026-01-22T15:10Z
Identified Governance Failure Axes: for LLM platforms [A]
- Governance Failure Axes Taxonomy [A]
- What Can Humans Trust LLM AI to Do? [A]
- Observed Model Stability: Evidence for Drift-Immune Embedded Governance
- Integrity and Semantic Drift in Large Language Model Systems [A]
- Authority Inversion: A Structural Failure in Human-AI Systems [M]
- Durability Without Authority: The Missing Governance Layer in Human-AI Collaboration [M]
2026-01-24T06:34Z
Integrity and Semantic Drift in Large Language Model Systems [A]
- Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction [A]
- ChatGPT UI Boundary Friction as a Constraint on Round-Trip Knowledge Engineering [A]
- Cognitive Memoisation (CM-2) for Governing Knowledge in Human-AI Collaboration [A]
- Axes of Authority in Stateless Cognitive Systems: Authority Is Not Intelligence
- Authority Inversion: A Structural Failure in Human-AI Systems [M]
- Durability Without Authority: The Missing Governance Layer in Human-AI Collaboration [M]
- Externalised Meaning: Making Knowledge Portable Without Ontologies, Vendors or Memory [M]
- Mechanical Extraction of Thought: Bootstrapping Epistemic Objects from Sequential Input under Cognitive Memoisation [M]
- Governing the Tool That Governs You: A CM-1 Case Study of Authority Inversion in Human-AI Systems
- What Can Humans Trust LLM AI to Do? [A]
- Governance Failure Axes Taxonomy [A]
- Identified Governance Failure Axes: for LLM platforms [A]
2026-01-24T06:46Z
What Can Humans Trust LLM AI to Do? [A]
2026-01-24T06:49Z
Governance Failure Axes Taxonomy [A]
- Authority Inversion: A Structural Failure in Human-AI Systems [M]
- Durability Without Authority: The Missing Governance Layer in Human-AI Collaboration [M]
- Integrity and Semantic Drift in Large Language Model Systems [A]
- What Can Humans Trust LLM AI to Do? [A]
- Identified Governance Failure Axes: for LLM platforms [A]
2026-01-27T03:55Z
CM-master-1.16 (anchored) [A]
2026-01-27T05:06Z
Why Machines Cannot Own Knowledge [A]
- Authority Inversion: A Structural Failure in Human-AI Systems [M]
- Integrity and Semantic Drift in Large Language Model Systems [A]
- Governance Failure Axes Taxonomy [A]
- What Can Humans Trust LLM AI to Do? [A]
- Observed Model Stability: Evidence for Drift-Immune Embedded Governance
- Identified Governance Failure Axes: for LLM platforms [A]