DOI Edition: Progress Without Memory
CM-1 DOI Edition
This page is a DOI-targeted publication wrapper for the CM-master artefact “Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction”.
It MUST NOT redefine, override, reinterpret, or restate any normative CM-1 content. Its sole function is to provide external scholarly framing for DOI publication.
All epistemic authority remains with the CM-master artefact and its normative metadata.
Normative Transclusion
This transclusion imports the CM-master artefact, including:
• Normative metadata
• Version history
• Provenance assertions
• Golden CM-1 content
The transcluded page remains the sole authoritative definition of Cognitive Memoisation.
Scholarly Positioning (Non-Normative)
This section situates Cognitive Memoisation within the existing technical and theoretical landscape for the purposes of peer review and academic indexing. It does NOT form part of the CM normative corpus.
Large language models (LLMs) are typically deployed as stateless inference systems whose epistemic context is bounded by a finite context window. Multiple techniques have been proposed to mitigate this limitation, including retrieval-augmented generation, vector-store memory, prompt persistence, fine-tuning, and conversational state tracking. However, these approaches treat memory as a technical augmentation rather than as a governed epistemic artefact.
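To make the statelessness concrete, the following non-normative sketch shows a client that must resend its entire conversation history on every call and truncate it to a fixed budget, silently discarding older context. The token counting and window size are hypothetical simplifications, not any vendor's API.

```python
# Non-normative sketch: a stateless LLM client. Nothing persists between
# calls, so the caller must resend all context and eventually truncate it.

MAX_CONTEXT_TOKENS = 4096  # hypothetical finite context window


def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())


def build_prompt(history: list[str], budget: int = MAX_CONTEXT_TOKENS) -> str:
    # Keep only the most recent turns that fit the window; older epistemic
    # context is silently lost, which is the limitation discussed above.
    kept: list[str] = []
    used = 0
    for turn in reversed(history):
        cost = rough_token_count(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return "\n".join(reversed(kept))
```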
Cognitive Memoisation introduces a distinct model: memory as a human-governed, revision-controlled, permissioned cognitive corpus that mediates all durable knowledge exchange between humans and AI systems.
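As a non-normative illustration of this model, the sketch below shows how such an artefact might be structured so that versions, provenance, and permissions are first-class fields rather than afterthoughts. All names (Revision, Artefact, the permission strings) are illustrative assumptions, not CM-1 vocabulary.

```python
# Non-normative sketch of a governed memory artefact: knowledge carries an
# append-only revision history, provenance, and permissions as data.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass(frozen=True)
class Revision:
    version: str        # e.g. "1.2.0"
    author: str         # the human who approved this revision
    timestamp: datetime
    provenance: str     # where the knowledge came from and why it changed
    content: str        # the governed knowledge itself


@dataclass
class Artefact:
    artefact_id: str
    steward: str                 # the human epistemic authority
    permissions: set[str] = field(default_factory=set)  # e.g. {"read", "reuse"}
    history: list[Revision] = field(default_factory=list)

    @property
    def current(self) -> Revision:
        # The latest human-approved revision is the authoritative content.
        return self.history[-1]
```

Keeping the history append-only means an artefact can always answer not only what it asserts, but when, by whom, and on what grounds that assertion was made.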
This work therefore belongs to the intersection of:
• knowledge engineering
• cognitive systems
• human-AI interaction
• epistemic governance
rather than to training-time model optimisation or data-centric AI.
Related Work (Boundary Definitions)
The concept of persistent memory in LLM systems has been approached through several technical paradigms:
• Retrieval-Augmented Generation (RAG) uses external vector stores to retrieve text fragments for prompt injection (see the non-normative sketch after this list).
• Fine-tuning encodes information into model weights but sacrifices transparency and revision control.
• Prompt engineering and session memory rely on ephemeral context windows and cannot preserve long-term epistemic state.
• Memory plugins and embeddings provide recall but not governed knowledge evolution.
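As a concrete, non-normative rendering of the RAG boundary, the sketch below ranks stored fragments by embedding similarity and injects the top matches into a prompt. Note what it lacks: nothing in it carries versions, authority, or permissions. The embed function is a toy stand-in for a real embedding model.

```python
# Non-normative sketch of the RAG baseline: nearest-neighbour retrieval
# followed by prompt injection. Note what is absent: no revision history,
# no provenance, no permission check on the retrieved fragments.
import math


def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: a normalised bag-of-letters
    # vector, sufficient to demonstrate the shape of retrieval.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def retrieve(query: str, store: list[str], k: int = 3) -> list[str]:
    # Rank fragments by similarity to the query and keep the top k.
    q = embed(query)
    scored = sorted(
        store,
        key=lambda doc: -sum(a * b for a, b in zip(q, embed(doc))),
    )
    return scored[:k]


def rag_prompt(query: str, store: list[str]) -> str:
    fragments = "\n".join(retrieve(query, store))
    return f"Context:\n{fragments}\n\nQuestion: {query}"
```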
None of these provides:
• revision control
• epistemic authority
• provenance
• permissioned reuse
Cognitive Memoisation addresses this gap by treating memory as a first-class, human-governed knowledge system.
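To make the gap concrete, the following non-normative sketch models evolution under human governance: an AI system may propose a change, but it enters the corpus only on the steward's approval. The GovernedStore class and its field names are illustrative assumptions, not CM-1 definitions.

```python
# Non-normative sketch of human-governed evolution: AI systems may propose
# changes, but nothing enters the corpus until the human steward approves.
from dataclasses import dataclass, field


@dataclass
class GovernedStore:
    steward: str
    accepted: dict[str, str] = field(default_factory=dict)  # key -> content
    pending: list[tuple[str, str, str]] = field(default_factory=list)

    def propose(self, key: str, content: str, provenance: str) -> None:
        # Any party, human or AI, may propose; the corpus does not change yet.
        self.pending.append((key, content, provenance))

    def approve(self, reviewer: str, index: int) -> None:
        # Epistemic authority remains with the human steward alone.
        if reviewer != self.steward:
            raise PermissionError(f"{reviewer} lacks authority to approve")
        key, content, _provenance = self.pending.pop(index)
        self.accepted[key] = content
```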
Scholarly References (Non-Normative)
Brown et al. (2020). Language Models are Few-Shot Learners.
Lewis et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.
Borgeaud et al. (2022). Improving Language Models by Retrieving from Trillions of Tokens.
Wu et al. (2022). Memorizing Transformers.
Yang et al. (2023). Large Language Models as Optimizers.
Newell & Simon (1972). Human Problem Solving.
Anderson (1983). The Architecture of Cognition.
Russell & Norvig (2021). Artificial Intelligence: A Modern Approach.
Gruber (1993). A Translation Approach to Portable Ontology Specifications.
Studer et al. (1998). Knowledge Engineering: Principles and Methods.
Floridi (2019). The Logic of Information.
These references are provided solely to locate CM-1 within the scholarly discourse. They do not define Cognitive Memoisation.
DOI Binding Declaration
This wrapper page is intended to be the DOI-targeted publication surface for the CM-master artefact.
The DOI MUST resolve to this wrapper page. The normative definition of Cognitive Memoisation remains in the transcluded CM-master page.
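As a non-normative maintenance aid, the binding can be checked mechanically: DOIs resolve through https://doi.org/ by HTTP redirect, so a script can follow the chain and confirm it ends at the wrapper page. Both identifiers below are hypothetical placeholders, not the registered DOI or URL.

```python
# Non-normative sketch: verify that a DOI resolves to the wrapper page.
# Both constants are hypothetical placeholders, not registered identifiers.
import urllib.request

DOI = "10.xxxx/cm-1-doi-edition"              # placeholder DOI
WRAPPER_URL = "https://example.org/cm-1/doi"  # placeholder wrapper URL


def resolved_url(doi: str) -> str:
    # doi.org answers with HTTP redirects; urlopen follows them, so the
    # final response URL is where the DOI actually lands.
    with urllib.request.urlopen(f"https://doi.org/{doi}") as response:
        return response.geturl()


if __name__ == "__main__":
    target = resolved_url(DOI)
    assert target == WRAPPER_URL, f"DOI resolves to {target}, not the wrapper"
```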
No DOI, publisher, or platform acquires epistemic authority over CM by virtue of hosting or indexing this wrapper.