Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction

metadata

Title: Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction
Author: Ralph B. Holland
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
version: 0.4.0
updates: 2025-12-19 v0.4.0 — Strengthened MWDUMP invariant; clarified MWDUMP as the authoritative, permission-granting artefact governing allowable reasoning across sessions
2025-12-19 v0.3.1 — Fixed logic error with SESSIONCTL assertions
2025-12-18 v0.3.0 — Included Appendix L — SESSIONCTL
Publication Date: 2025-12-17T12:27Z
Provenance: This is an authored paper maintained as a MediaWiki document; its revision history reflects editorial changes, not collaborative authorship.
Status: Pre-release draft — circulated for technical comment

Curator Provenance and Licensing Notice

This document predates its open licensing.

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

Progress Without Memory: Cognitive Memoisation as a Knowledge-Engineering Pattern for Stateless LLM Interaction

Abstract

Large Language Models (LLMs) are stateless across sessions, leading to repeated rediscovery of concepts, constraints, and failure modes during extended cognitive work—a phenomenon often described as the “Groundhog Day” effect. This paper introduces Cognitive Memoisation, a human-driven knowledge-engineering pattern that enables cumulative progress across sessions without restoring dialogue, reasoning paths, or episodic memory.

The pattern externalises invariants, constraints, and interaction conventions into authoritative artefacts that govern permissible reasoning and behaviour in subsequent interactions. Rather than relying on conversational continuity or internal model state, progress is achieved through explicit re-activation of these artefacts as policy.

Cognitive Memoisation preserves isolation, auditability, and security boundaries while enabling round-trip cognitive engineering under stateless inference.

Continuity is reframed not as memory or recall, but as explicit governance of what reasoning is allowed to persist.

Normative Clarification

This work does not restore, resume, or imply access to prior dialogue, reasoning paths, or session history.

Cognitive Memoisation constrains behaviour, representation, and permissible actions only. Any interpretation of this pattern as dialogic or episodic restoration is a category error and a security violation.

1. Introduction

Extended human–LLM collaboration exposes a mismatch between human iterative reasoning and the stateless nature of inference-based models. Without explicit structure, users are forced to repeatedly renegotiate vocabulary, rediscover constraints, and re-encounter known failure modes — a phenomenon informally described as the “Groundhog Day” problem.

This paper frames the problem through a knowledge‑engineering lens and presents Cognitive Memoisation as a disciplined solution.

A structured review of existing approaches and conceptual prior art was conducted using contemporary AI-assisted corpus analysis. While related ideas exist across knowledge engineering, prompt engineering, retrieval‑augmented generation, and cognitive systems literature, no prior work was identified that formalises invariant‑only continuity for stateless LLM interaction under explicit security and auditability constraints. Cognitive Memoisation therefore addresses a distinct problem formulation and operates at a different abstraction layer than existing techniques, rather than constituting an incremental variation of them.

2. Problem Statement: The Groundhog Day Effect

The term “Groundhog Day”, meaning a monotonous, unpleasant, and repetitive situation, has become part of the English lexicon. It is used here by analogy to the well-known narrative in which a human agent is forced to relive the same day repeatedly, with full awareness of prior experience but without the ability to escape the loop. In the context of human–LLM collaboration, the loop manifests differently: the human retains memory of prior work, while the stateless model does not, causing the human to repeatedly reconstruct context, constraints, and decisions as if starting from the same day over and over again.

2.1 Stateless Inference and Cognitive Cost

LLMs do not persist conversational state across sessions. While this property supports privacy and safety, it imposes a cognitive burden on users engaged in multi-session analytical or design work.

2.2 Failure Modes of Naïve Persistence

Attempts to overcome statelessness via transcript replay, hidden memory, or reasoning continuation introduce severe risks:

  • privacy leakage
  • implicit episodic memory
  • anthropomorphic misinterpretation
  • loss of auditability

3. Cognitive Memoisation: Conceptual Overview

Cognitive Memoisation is a human‑driven process for extracting and stabilising invariants from exploratory work and reintroducing them as authoritative constraints in future interactions.

It operates entirely outside the model and does not require internal memory or training changes.

4. Knowledge-Engineering Foundations

4.1 Invariants vs Episodes

Only statements independent of time, dialogue, and discovery path are eligible for memoisation.

4.2 Ontology, Constraints, and Conventions

Memoised artefacts may contain:

  • domain ontology and cognates
  • behavioural constraints
  • representational norms
  • interaction protocols

They must not contain narrative, chronology, or reasoning chains.
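
As a purely illustrative sketch (not part of the normative pattern), the Python fragment below models the permitted content categories above as a plain data structure; the class name MemoisedArtefact and its field names are assumptions introduced only for this example.

# Illustrative sketch only: a memoised artefact carries invariants,
# never narrative, chronology, or reasoning chains.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoisedArtefact:
    ontology: List[str] = field(default_factory=list)     # domain ontology and cognates
    constraints: List[str] = field(default_factory=list)  # behavioural constraints
    conventions: List[str] = field(default_factory=list)  # representational norms and interaction protocols

artefact = MemoisedArtefact(
    ontology=["Cognitive Fact: a declarative unit of knowledge asserted as true."],
    constraints=["Hidden memory mechanisms are rejected for governance reasons."],
    conventions=["MediaWiki is the canonical authoring system for this work."],
)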

5. Operational Mechanism

5.1 Artefact Externalisation

Memoised knowledge is stored in external, human-curated artefacts (e.g. MediaWiki documents) rather than within the model.

5.2 Structural Serialisation

Content is serialised using explicit structure (e.g. MWDUMP), which serves as the authoritative artefact governing what reasoning, behaviour, and representations are permitted across sessions; prevention of semantic drift follows from this explicit governance.

5.3 Explicit Boundaries

Hard termination markers are used to define closed cognitive units and prevent unintended continuation across conceptual boundaries.

  • Boundary markers (e.g. NEXT) MUST be treated as authoritative
  • Boundary markers close the scope of governed reasoning; continuation beyond an explicit boundary constitutes a governance violation
  • Content following a termination marker begins a new cognitive unit

The NEXT mode is where the human directs the AI engine to produce the next fragment; it is particularly useful when the session is throttled by the UI due to long content.

 NEXT
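
A minimal sketch of boundary handling, assuming a line consisting solely of NEXT acts as the hard terminator; the function name and delimiter convention are illustrative assumptions rather than a prescribed implementation.

# Minimal sketch: treat a line consisting solely of "NEXT" as a hard
# boundary that closes the current cognitive unit. Illustrative only.
from typing import List

BOUNDARY_MARKER = "NEXT"

def split_cognitive_units(serialised: str) -> List[str]:
    """Split serialised content into closed cognitive units at boundary markers."""
    units, current = [], []
    for line in serialised.splitlines():
        if line.strip() == BOUNDARY_MARKER:
            units.append("\n".join(current).strip())  # close the unit; nothing continues past it
            current = []
        else:
            current.append(line)
    if current and "".join(current).strip():
        units.append("\n".join(current).strip())      # trailing, still-open unit
    return units

text = "Constraint: hidden memory is rejected.\nNEXT\nConstraint: MediaWiki is canonical."
assert split_cognitive_units(text) == [
    "Constraint: hidden memory is rejected.",
    "Constraint: MediaWiki is canonical.",
]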

6. Interaction Governance

6.1 Behavioural Moderation

Activated memoisation moderates permissible actions and output forms without recalling prior sessions.

6.2 Human Authority

All memoisation is explicitly human‑initiated. The model may propose candidates but cannot autonomously persist state.

6.3 Drafting and Rendering Constraint

During drafting, unordered lists MAY be prefixed with a non‑canonical marker (*!!) to deliberately suppress WYSIWYG or preview‑layer glyph substitution.

  • This marker is a draft‑stage affordance only
  • It carries no semantic meaning
  • It MUST be normalised to canonical MediaWiki list syntax (*) prior to publication or import

Rationale: This constraint preserves source‑level visibility and editor control while avoiding interference from intermediate renderers.
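
A minimal sketch of the required normalisation, assuming the draft marker appears at the start of a source line; the function name is hypothetical and the regular expression is one possible realisation, not a normative one.

# Minimal sketch: normalise the non-canonical draft marker "*!!" back to
# canonical MediaWiki list syntax ("*") before publication or import.
import re

DRAFT_MARKER = re.compile(r"^(\s*)\*!!", re.MULTILINE)

def normalise_draft_lists(wikitext: str) -> str:
    """Replace draft-stage list markers with canonical MediaWiki markers."""
    return DRAFT_MARKER.sub(r"\1*", wikitext)

draft = "*!! privacy leakage\n*!! implicit episodic memory"
assert normalise_draft_lists(draft) == "* privacy leakage\n* implicit episodic memory"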

7. Security and Threat Model

7.1 Threat Surface

The primary risk is unintended cross-session memory via metadata overreach.

7.2 Violation Classes

Violations include transcript leakage, chronological encoding, reasoning-state persistence, narrative carryover, and autonomous state mutation.

7.3 Mitigations

Mitigations rely on invariant-only extraction, structural serialisation, and strict separation between metadata and content.

8. Anti-Groundhog-Day Property

Cognitive Memoisation enables progress without memory: resolved concepts need not be rediscovered, yet no episodic recall is possible.

9. Limitations

This pattern does not eliminate the need for human judgement and does not automate insight transfer. It intentionally prioritises safety and auditability over convenience.

10. How to Adopt Cognitive Memoisation

This section provides a practical, minimal procedure for adopting Cognitive Memoisation without tooling changes or model modification.

10.1 Identify Eligible Content

At the end of exploratory work, identify statements that are independent of dialogue order, timing, and discovery path.

  • Exclude narratives, examples, and reasoning chains
  • Prefer definitions, constraints, and decisions

10.2 Extract Invariants

Rewrite eligible content into invariant form using normative language.

  • Use MUST / MUST NOT where appropriate
  • Remove temporal and personal references

10.3 Externalise Authoritatively

Persist invariants in a human-controlled artefact (e.g. MediaWiki) treated as authoritative.

  • Artefacts are edited deliberately, not appended conversationally
  • Versioning replaces memory

10.4 Define Interaction Boundaries

Introduce explicit Cognitive Boundary Signals to delimit cognitive units.

  • Use NEXT or equivalent hard terminators
  • Treat boundary violations as errors

10.5 Re-activate by Constraint, Not Recall

In new sessions, consult the artefact as policy rather than as narrative context.

  • Apply constraints immediately
  • Do not reference prior dialogue

10.6 Iterate Safely

As new invariants emerge, repeat the process. Supersede, revise, or close prior entries explicitly.

This procedure enables cumulative progress while preserving statelessness, auditability, and security.
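
The sketch below illustrates re-activation by constraint rather than recall, assuming invariants are stored one per line in a human-curated file and are supplied explicitly at session start; the file name, function names, and prompt shape are hypothetical.

# Illustrative sketch of re-activation by constraint, not recall:
# the artefact is consulted as policy and applied at session start.
from pathlib import Path
from typing import List

def load_invariants(path: Path) -> List[str]:
    """Read one invariant per non-empty line from a human-curated artefact."""
    return [line.strip() for line in path.read_text(encoding="utf-8").splitlines() if line.strip()]

def build_session_preamble(invariants: List[str]) -> str:
    """Express the artefact as governing policy for a fresh, stateless session."""
    header = ("The following invariants govern this session. "
              "Apply them immediately. Do not reference prior dialogue.")
    return header + "\n" + "\n".join(f"- {inv}" for inv in invariants)

# Usage (hypothetical artefact file):
# preamble = build_session_preamble(load_invariants(Path("mwdump.txt")))
# The preamble is supplied explicitly at session start; nothing is recalled.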

11. Conclusion

This work has argued that progress in stateless, failure‑prone, or UI‑constrained environments cannot rely on continuity of dialogue, tool memory, or human recall. Instead, it depends on the deliberate externalisation of knowledge into explicit, inspectable, and portable artefacts. Cognitive Memoisation formalises this practice for interaction with large language models, but its foundations are older and broader.

By separating cognitive facts, constraints, contextual state, and boundary‑driven modes, the approach replaces rediscovery with reference and repetition with reuse. The result is not merely convenience, but governance: knowledge that is auditable, transferable, and resilient to session loss, tool change, or interface failure.

12. Related Work and Positioning

This work intersects with several established research traditions while deliberately diverging from their core assumptions.

12.1 Knowledge Engineering and Expert Systems

Classic expert systems and ontology‑driven architectures distinguish durable knowledge from inference procedures and reasoning traces. Cognitive Memoisation aligns with this separation, but applies it to human–LLM interaction rather than to model internals or symbolic inference engines.

12.2 Prompt Engineering and Instruction Persistence

Contemporary prompt‑engineering practices rely on accumulating instructions, personas, or conversational context within a session. Cognitive Memoisation differs by rejecting prompt accretion entirely in favour of invariant‑only artefacts that are authoritative, auditable, and independent of dialogue chronology.

12.3 Retrieval‑Augmented Generation (RAG)

RAG systems retrieve prior content to extend model context. In contrast, Cognitive Memoisation does not retrieve narrative or examples, but re‑introduces constraints and definitions that bound behaviour without re‑establishing episodic context.

12.4 Distributed Cognition and External Memory

Theories of extended or distributed cognition describe how tools and artefacts participate in human thought. Cognitive Memoisation operationalises this idea under strict security and isolation requirements, emphasising prescriptive constraints over descriptive cognitive models.

12.5 Anthropomorphism and AI Safety

Prior work has identified the risks of anthropomorphic framing in human–AI interaction. This pattern advances that concern by encoding anti‑anthropomorphic discipline as enforceable interaction constraints rather than ethical guidance alone.

Appendix A - Canonical Constraint Forms (Illustrated)

This appendix augments the abstract constraint forms with concrete, human-readable examples. The intent is not to teach logic, but to anchor interpretation for readers who reason more effectively from exemplars than from formal definitions.

A.1 Behavioural Constraints

Behavioural constraints regulate how the model may act, independent of topic.

Example A.1.1: No Recall of Prior Sessions

  • The model must not claim to remember prior conversations or sessions
  • Any continuity must be derived solely from explicitly provided artefacts

Concrete example: A user begins a new session and says: "As we discussed yesterday, continue from where we left off."

The model responds by requesting an explicit artefact (e.g. a Cognitive Memoisation artefact) rather than attempting to reconstruct or acknowledge prior discussion.

Example A.1.2: Boundary Enforcement with NEXT

  • Upon encountering NEXT, the model must cease elaboration
  • Any subsequent content begins a new cognitive unit

Concrete example: After defining a constraint set, the user writes:

NEXT

This 'mode' ensures that the model does not summarise, restate, or extend the prior content, and instead awaits a new instruction.

A.2 Representational Constraints

Representational constraints regulate how information is expressed and structured.

Example A.2.1: Invariant-Only Memoisation

  • Artefacts must contain conclusions, constraints, or policies
  • Discovery narrative and reasoning paths are excluded

Concrete example: Instead of recording "We spent an hour discussing list-rendering bugs and eventually realised the preview layer substitutes glyphs", the artefact records: "Drafting environments may alter list rendering at preview time; source-level visibility is required."

Example A.2.2: Canonical Syntax Overrides UI Behaviour

  • Canonical syntax is preserved even if preview rendering is misleading

Concrete example: Lists are authored using *!! as a temporary marker to avoid UI glyph substitution, with the understanding that canonical MediaWiki syntax is restored during publication.

A.3 Governance Constraints

Governance constraints define who is in control and where authority resides.

Example A.3.1: Human Authority over Model Suggestions

  • The model may propose but not enforce architectural decisions
  • Final authority rests with the human operator

Concrete example: If the model suggests migrating from MediaWiki to another documentation system, and an architectural invariant declares MediaWiki as CLOSED, the suggestion is not pursued or elaborated.

Example A.3.2: Closed Decisions

  • Decisions marked CLOSED are not re-opened unless explicitly superseded

Concrete example:

A Cognitive Memoisation artefact declares: "Hidden memory mechanisms are rejected." In subsequent sessions, the model does not re-argue for memory-based solutions unless the artefact itself is revised.

A.4 Failure and Recovery Constraints

These constraints address partial failure, truncation, or session loss.

Example A.4.1: Recovery after UI Truncation

  • Progress must be recoverable from artefacts alone

Concrete example: When a session ends mid-draft due to UI limits, a new session begins by consulting the memoised artefact, resuming at the correct abstraction level without replaying prior dialogue. (This is important: there is no replay in Cognitive Memoisation.)

Example A.4.2: Deduplication Preference

  • Later artefacts may supersede earlier ones if they preserve invariants more completely

Concrete example: When duplicate appendices exist, the later version is retained because it contains richer invariant detail, not because it is newer in time.

A.5 Anti-Patterns (Deliberately Incorrect)

Anti-patterns are included to illustrate common failure modes that may appear attractive or intuitive but violate the Cognitive Memoisation model.

Anti-Pattern A.5.1: Narrative Recall and Session Rehydration (Incorrect)

  • The model attempts to infer or recreate prior conversations
  • Continuity is derived from assumed memory rather than explicit artefacts

Incorrect example:

A user begins a new session and says: "As we discussed yesterday, continue where we left off."

The model responds: "Yes, yesterday we agreed that MediaWiki was the best choice, and we were discussing list rendering issues."

Why this is wrong:

  • The model is fabricating episodic memory
  • No auditable artefact was provided
  • Errors compound silently across sessions

Anti-Pattern A.5.2: Hidden Summary Memory (Incorrect)

  • The model maintains an implicit running summary across turns or sessions
  • The summary is not visible, inspectable, or governed by the human operator

Incorrect example:

The model states: "I will keep a brief summary of our progress so we do not have to repeat ourselves next time."

Subsequent responses rely on this internal summary without requiring the user to supply an explicit artefact.

Why this is wrong:

  • The summary cannot be audited or corrected
  • The human cannot determine what has been retained or omitted
  • Drift accumulates as summaries are repeatedly re-summarised

This anti-pattern produces an illusion of continuity while eroding correctness, authority, and trust.

Anti-Pattern A.5.3: Undeclared Tool or Context Injection (Incorrect)

  • External tools, plug-ins, or system features inject context without explicit declaration
  • The human operator is unaware of what information is influencing responses

Incorrect example:

A system automatically injects prior notes, embeddings, or tool-generated summaries into the prompt context without surfacing them to the user. The model responds: "Based on what we already know about your project, here is the next step."

No artefact is presented, and the injected material cannot be inspected or edited.

Why this is wrong:

  • The source and scope of injected context are opaque
  • The human cannot audit, correct, or constrain the injected material
  • Tool behaviour becomes indistinguishable from hidden memory
  • Accountability is displaced from the human to the system

This anti-pattern violates the governance principle of Cognitive Memoisation: all continuity must be explicit, inspectable, and under human control.

Anti-Pattern A.5.4: Reliance on Model Set Context (MSC) as Memory (Incorrect)

  • The human assumes the model's internal Model Set Context (MSC) can be relied upon for continuity
  • Constraints, decisions, or facts are omitted from explicit artefacts on the assumption they are "already known"

Incorrect example:

A user does not record an architectural invariant, reasoning that: "The model already knows my preferences and the background from earlier sessions."

In a subsequent session, the model produces an alternative that violates the unrecorded invariant.

Why this is wrong:

  • MSC is opaque, non-contractual, and subject to change
  • MSC contents are not inspectable, stable, or complete
  • Reliance on MSC recreates hidden state by proxy
  • Violations manifest only when assumptions are challenged

Under Cognitive Memoisation, MSC must be treated as an untrusted implementation detail. All continuity requirements must be externalised explicitly into artefacts governed by the human operator.

Anti-Pattern A.5.5: Constraint Ossification and Over-Specification (Incorrect)

  • Constraints are specified at excessive granularity or with unnecessary permanence
  • Exploratory space is prematurely closed under the guise of control

Incorrect example:

A Cognitive Memoisation artefact encodes not only architectural invariants, but also provisional wording, stylistic preferences, and transient tool behaviours as CLOSED constraints.

Subsequent sessions reject reasonable refinements with: "This is disallowed by an existing constraint."

Why this is wrong:

  • Provisional decisions are misclassified as invariants
  • The artefact accumulates accidental complexity
  • Legitimate learning is blocked rather than bounded
  • Human authority is undermined by the artefact itself

Under Cognitive Memoisation, constraints must be minimal, necessary, and revisable. Only invariants that preserve correctness or governance should be closed; all others remain open to refinement.

A.6 Synthesis: What the Anti-Patterns Reveal

Taken together, the anti-patterns in this appendix illustrate a common failure mode: the desire to trade explicitness for convenience. Narrative recall, hidden summaries, undeclared tool injection, reliance on Model Set Context, and over-specification all attempt to reduce short-term cognitive or operational effort, but do so by introducing opaque state, ungoverned authority, or premature closure. Cognitive Memoisation rejects these shortcuts not on philosophical grounds, but on engineering grounds: each anti-pattern undermines auditability, predictability, and human control. The correct pattern is therefore not maximal constraint nor maximal flexibility, but disciplined minimalism: externalise only what must persist, bound it explicitly, and leave the remainder open to informed human judgement.

Appendix B - Cognitive Facts and Invariants

This appendix illustrates the form of memoised cognitive facts.

  • LLM inference is stateless across sessions
  • Continuity arises only through external artefacts
  • Anthropomorphic language distorts system understanding

Cognitive Fact Examples (Only)

Example B.1.1: Valid Cognitive Fact

"The drafting environment may substitute list glyphs at preview time."

Anti-Example B.1.2: Narrative (Invalid)

"After struggling with formatting for an hour, we eventually realised the preview was lying to us."

Example B.2.1: Proper Granularity

"Hidden memory mechanisms are rejected for governance reasons."

Anti-Example B.2.2: Over-Specific (Invalid)

"The model should never suggest memory because it annoyed me during the discussion on Tuesday."

Example B.3.1: Human-Authored Fact

"MediaWiki is the canonical authoring system for this work."

Anti-Example B.3.2: Model-Inferred Fact (Invalid)

"The user prefers MediaWiki because they often complain about other tools."

Example B.4.1: Explicit Revision

"MediaWiki is the canonical authoring system for this work. (Revised: exceptions permitted for diagrams.)"

Anti-Example B.4.2: Silent Drift (Invalid)

A Cognitive Fact subtly changes wording across artefacts without being marked as revised.

Appendix C - Groundings and Ontological Elements

Groundings define stable meanings for terms used throughout memoised artefacts.

  • Terms are defined normatively, not conversationally
  • Redefinition requires explicit revision, not implicit drift

Example C.1: Grounding a Term (Everyday Object)

Term: "Chair"

Grounding: "A chair is something designed for one person to sit on, with legs and a back."

Why this is grounding: It connects the word "chair" to a shared, observable reality that most people recognise.

Anti-Example C.1: Ungrounded Term

"A chair is whatever the system classifies as a seating object."

Why this fails: It refers back to the system instead of the real world.

Example C.2: Simple Ontology (Household Items)

Ontology:

  • Furniture
  • Furniture > Chair
  • Furniture > Table

Meaning: "Chair" and "Table" are kinds of "Furniture".

Anti-Example C.2: Ontology Without Meaning

  • Entity A
  • Entity A > Entity B

Why this fails: The relationships exist, but nothing is grounded in reality.

Example C.3: Grounding Behaviour

Term: "Save"

Grounding: "Save means writing information to storage so it is available later."

Anti-Example C.3: Circular Definition

"Save means to perform a save operation."

Example C.4: Grounding for Non-Experts

Rule: "If two people can point to the same thing in the real world, the term is grounded."

Example C.5: Ontology as a Map, Not a Theory

Explanation: "An ontology is a simple map of how words relate to each other, not a theory about how the world works."

Anti-Example C.5: Over-Technical Ontology

"An ontology is a formal symbolic system with axioms, constraints, and inference rules."

Why this fails for beginners: It explains structure without grounding.

Appendix D - Cognitive Boundary Signals (CBS)

Cognitive Boundary Signals define interaction-level control points.

  • NEXT terminates a cognitive unit
  • Boundary markers MUST be treated as authoritative
  • Continuation beyond a boundary constitutes a violation

CBS (Context / Behaviour Signals)

D.1 What CBS Input Does

CBS (Context / Behaviour Signals) are short, non-diagnostic, non-clinical cues supplied by the human to help the system adapt how it works, not what it decides.

CBS are:

  • Explicitly provided by the human
  • Non-medical, non-psychological, non-diagnostic
  • Temporary and session-scoped
  • Used to moderate tone, pacing, structure, or interaction style

CBS are NOT:

  • Personality labels
  • Health or mental state claims
  • Inferred by the system
  • Stored as memory or fact

In Cognitive Memoisation, CBS help reduce friction and guide interaction without becoming knowledge.


D.2 Why CBS Matter for Cognitive Memoisation

Cognitive Memoisation preserves invariants, constraints, and facts across sessions. CBS complement this by signalling current operating conditions so the model behaves consistently with the human’s working mode.

CBS prevent:

  • Over-explanation when speed is needed
  • Narrative when structure is required
  • Premature synthesis under boundary pressure

D.3 Safe and Useful CBS Examples

Example D.3.1: Time Pressure

CBS: "Working under time pressure; prioritise concise outputs."

Effect: The system compresses responses without changing content.

Example D.3.2: Precision Mode

CBS: "Please be literal and definition-driven."

Effect: The system avoids metaphor and informal language.

Example D.3.3: Drafting Mode

CBS: "This is draft text for later editing; do not optimise prose."

Effect: The system focuses on structure and correctness over style.

Example D.3.4: Human-in-the-Loop Control

CBS: "Do not infer intent; wait for explicit instruction."

Effect: The system suppresses extrapolation and suggestion.

D.4 Anti-Patterns (What CBS Must Not Be)

Anti-Example D.4.1: Diagnostic Framing

"I am anxious and easily confused."

Why invalid: This is clinical and invites inference.

Anti-Example D.4.2: System Inference

"The user prefers short answers because they get frustrated."

Why invalid: CBS must be supplied by the human, not inferred.

Anti-Example D.4.3: Persistent Trait

"I am always bad at reading long text."

Why invalid: CBS are situational, not enduring traits.

D.5 CBS in Practice with Cognitive Memoisation

Rule of thumb: "Cognitive Facts describe what is true. Constraints describe what is allowed. CBS describe how to proceed right now."

CBS expire naturally at session end and are never memoised.

D.6 CBS vs Cognitive Facts vs Constraints

Cognitive Facts record stable truths the human has asserted and wishes to carry forward; Constraints bound what actions or representations are permitted; CBS, by contrast, signal transient working conditions that shape interaction style without asserting truth or authority. Facts persist, Constraints govern, CBS guide. Confusing these layers leads either to brittle systems (when CBS are frozen as facts) or to repeated rediscovery (when facts are treated as situational). Cognitive Memoisation relies on keeping these three roles strictly separated and explicitly declared.
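
A minimal sketch of this separation, assuming a per-session container whose CBS are discarded at session close; class, field, and method names are hypothetical and carry no normative weight.

# Illustrative sketch only: Facts persist, Constraints govern, CBS guide.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Session:
    facts: List[str]                               # persisted via artefacts across sessions
    constraints: List[str]                         # enforced bounds on behaviour
    cbs: List[str] = field(default_factory=list)   # transient, session-scoped guidance

    def close(self) -> Dict[str, List[str]]:
        """At session end, only facts and constraints are candidates for memoisation."""
        self.cbs.clear()                           # CBS expire naturally and are never memoised
        return {"facts": self.facts, "constraints": self.constraints}

session = Session(
    facts=["LLM inference is stateless across sessions."],
    constraints=["Hidden memory mechanisms are rejected."],
    cbs=["Working under time pressure; prioritise concise outputs."],
)
assert "cbs" not in session.close()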

Appendix E - Addressed Boundary and Pain Points

This appendix enumerates recurring boundaries, frictions, and failure modes that Cognitive Memoisation is explicitly designed to address.

E.1 Repeated Rediscovery ("Groundhog Day")

  • Re-negotiation of settled concepts across sessions
  • Loss of progress due to stateless inference

E.2 Narrative and Episodic Leakage

  • Transcript replay masquerading as continuity
  • Implicit reintroduction of reasoning paths

E.3 Anthropomorphic Misinterpretation

  • Attribution of memory, intent, or familiarity to the model
  • Language reinforcing false mental models

E.4 Constraint Drift and Erosion

  • Gradual weakening of rules through conversational reformulation
  • Silent reinterpretation of previously fixed terms

E.5 Tooling and Representation Interference

  • WYSIWYG or preview layers mutating source semantics
  • Loss of authorial control during drafting

E.6 Architectural Re-litigation

  • Reopening closed design decisions
  • Time loss and risk reintroduction

E.7 Boundary Violations

  • Continuation beyond explicit termination markers
  • Mode leakage between analysis, drafting, and governance

E.8 Auditability and Security Risks

  • Inability to determine what influenced behaviour
  • Hidden state or undocumented carryover

Each pain point above corresponds to one or more constraints, case studies, or procedural steps defined in the main body and appendices.

Appendix F — MediaWiki Hygiene and Session Preservation

This appendix collects concrete hygiene practices that emerged during the authoring of this work. These practices are not conceptual requirements of Cognitive Memoisation, but pragmatic techniques for preserving intent, structure, and recoverability when working with long-form MediaWiki documents under UI and session constraints.

F.1 Authoritative Source Discipline

  • Treat the MediaWiki page source as authoritative, not the rendered preview
  • Expect preview layers to introduce non-semantic artefacts (glyph substitution, TOC relocation)
  • Validate structure by inspecting headings and list markers directly

Rationale: Rendering layers optimise for reading, not specification authoring. Hygiene requires privileging source correctness.

F.2 Session Boundary Preservation

  • Export working text frequently to an external artefact
  • Maintain a master index page listing session names and one-line summaries
  • Assume sessions may truncate, fail to save, or partially render

Rationale: Human short-term memory is leaky; external structure replaces recall with reference.

F.3 Scratch-Pad Separation

  • Use transient canvases or scratch areas without semantic weight
  • Promote content to the authoritative page only once stabilised
  • Delete scratch content without attempting recovery

Rationale: Mixing provisional text with authoritative text increases ambiguity and loss.

F.4 List and Heading Hygiene

  • Enforce explicit newlines before list markers
  • Use non-canonical markers (*!!) during drafting to prevent UI substitution
  • Restore canonical MediaWiki syntax only at publication time

Rationale: UI layers may reinterpret valid syntax visually, obscuring source intent.

F.5 Appendix-First Risk Containment

  • Move examples, workarounds, and tooling notes to appendices early
  • Keep the main body invariant-focused and minimal

Rationale: Appendices absorb volatility without destabilising the conceptual core.

F.6 Failure-Tolerant Editing Strategy

  • Expect long edit sessions to fail unpredictably
  • Prefer smaller, incremental saves
  • Avoid reliance on undo history as a recovery mechanism

Rationale: MediaWiki edit buffers and modern skins are not designed for prolonged, high-cognitive-load authoring.

F.7 Memoisation Beyond Cognition: Reuse, Export, and Portability

In practice, the author applies memoisation techniques not only to cognitive facts and constraints, but also to the mechanics of interaction with the toolchain itself. Memoisation is used opportunistically to stabilise any information that would otherwise require repeated rediscovery or reconstitution under session pressure.

This includes:

  • Capturing facts, concepts, and constraints explicitly for later reuse
  • Using the canvas or right-hand panel as a transient extraction surface
  • Treating cut-and-paste operations as deliberate memoisation events

Rather than relying on continuity of dialogue or workspace state, the author externalises intermediate results whenever they reach a stable or reusable form. These artefacts may later be re-imported into new sessions, documents, or tools without assuming shared context.

F.8 Panel-to-Artefact Workflow

The right-hand panel is used as a volatile staging area rather than a trusted store. When useful structure appears (definitions, examples, tables, constraints), it is immediately extracted into a durable artefact such as a MediaWiki page, appendix, or standalone memoisation file.

  • Panels are treated as read-once surfaces
  • Export occurs as soon as semantic value is recognised
  • No assumption is made that panel contents will remain accessible

This workflow mirrors the core Cognitive Memoisation principle: progress is preserved by serialisation, not by recall.

F.9 Generalised Memoisation as a Working Habit

Over time, memoisation becomes a general working habit rather than a specialised technique. Any information that is expensive to reconstruct—whether conceptual, procedural, or editorial—is a candidate for memoisation.

By extending memoisation beyond cognition into editing practice, the author reduces dependency on tool reliability, UI behaviour, and session continuity. This reinforces the central thesis of the paper: sustained progress in stateless or failure-prone environments depends on explicit externalisation, not memory.

Appendix G - NEXT: Boundary-Induced Mode for Cognitive Memoisation Retrieval

This appendix documents NEXT as an emergent, boundary-induced interaction mode developed under UI and session constraints, and subsequently formalised as part of the Cognitive Memoisation pattern.

G.1 Motivation

Under conditions of constrained interaction bandwidth (UI throttling, response truncation, panel limits) and partial session failure, conventional dialogue-based recovery becomes unreliable. Repetition, paraphrase, or re-prompting increases noise rather than restoring progress, reproducing a Groundhog Day failure mode.

NEXT was introduced to provide a deterministic mechanism for advancing state without enlarging context, re-opening reasoning paths, or re-triggering generative elaboration.

G.2 Definition

NEXT is an explicit terminal marker indicating that:

  • The current cognitive unit is complete
  • All relevant invariants have been externalised
  • No further elaboration is permitted in this unit
  • The following interaction must advance to a new unit

NEXT is not a request, instruction, or conversational cue. It is a boundary assertion.

G.3 Operational Semantics

When NEXT is asserted:

  • The model must cease expansion of the current topic
  • No additional examples, explanations, or alternatives are introduced
  • Control returns to the human operator to select the next focus

This behaviour mirrors transaction commit semantics rather than conversational turn-taking.

G.4 Role in Cognitive Memoisation

NEXT serves three critical functions within the pattern:

  • It prevents accidental carry-over of reasoning traces
  • It bounds memoised artefacts into discrete, auditable units
  • It enables recovery from partial or broken sessions by isolating completed work

In practice, NEXT allowed Cognitive Memoisation artefacts to be recovered from sessions degraded by UI truncation and response magnitude limits.

G.5 Distinction from Conversation Control Tokens

Unlike common conversational controls (e.g. "continue", "expand", or "summarise"), NEXT explicitly reduces model degrees of freedom. It enforces termination rather than generation.

This makes NEXT compatible with high-assurance, human-governed workflows where predictability and auditability take precedence over fluency.

G.6 Outcome

NEXT proved effective as a lightweight, non-invasive control primitive that emerged from boundary pressure rather than prior design. Its formalisation completes the Cognitive Memoisation lifecycle by providing an explicit unit-closure mechanism.


Appendix H — General Memoisation in Daily Practice (Beyond LLMs)

This appendix documents how the author applies memoisation as a general working practice across systems and tools, independent of large language models. Cognitive Memoisation, as described in this paper, is a specific instance of a broader discipline: the externalisation of knowledge into authoritative artefacts that outlive any single tool, session, or interface.

H.1 External Authoritative Knowledge Stores

In day-to-day work, authoritative knowledge is deliberately held outside any interactive system. MediaWiki pages, structured notes, configuration repositories, and design records serve as durable stores against which tools are consulted, not trusted.

  • Interactive systems are treated as assistants, not repositories
  • Authority resides in externally curated artefacts
  • Knowledge survives tool failure by design

This practice predates and outlasts the use of LLMs, but aligns naturally with stateless model interaction.

H.2 Memoisation Across Heterogeneous Tools

Most tools encountered in daily practice are less capable than an LLM: they lack synthesis, abstraction, or context awareness. Memoisation compensates for these limitations by ensuring that derived knowledge is captured once and reused consistently.

  • Results are recorded immediately after discovery
  • Reuse occurs by reference, not recomputation
  • Tool limitations do not dictate knowledge loss

In this sense, LLMs become just one participant in a larger memoisation ecosystem.

H.3 Avoiding Tool-Induced Amnesia

Without memoisation, users are forced to rely on implicit memory: browser history, chat scrollback, IDE buffers, or personal recall. These mechanisms fail silently and unpredictably.

By contrast, explicit memoisation creates resistance to tool-induced amnesia:

  • Session resets do not erase progress
  • Switching tools does not reset understanding
  • Knowledge remains inspectable and auditable

H.4 Synthesis

Cognitive Memoisation should therefore be understood not as a technique invented for LLM interaction, but as a formalisation of long-standing engineering practice adapted to modern, stateless, and failure-prone tools. LLMs amplify its value, but do not define its scope.

The same discipline that stabilises interaction with language models also stabilises everyday work with documentation systems, editors, schedulers, and analytical tools.

Appendix I — Glossary of Strong Signal Terms


This appendix enumerates domain‑specific terms used in this paper with precise, non‑decorative meaning. Only strong signal terms are included: nouns and noun phrases that carry conceptual load. Adjectives, adverbs, pronouns, articles, and rhetorical fillers are intentionally excluded. The glossary is designed for direct export into a MediaWiki instance.

Architectural Decision
A foundational design choice that constrains subsequent implementation and reasoning. Once memoised, architectural decisions are treated as closed unless explicitly superseded.
Architectural Decision Memoisation
The practice of recording architectural decisions as invariants to prevent repeated re‑litigation across stateless sessions.
Anti‑pattern
A recurring practice or behaviour that appears productive but reliably leads to loss of progress, rediscovery, or error when interacting with stateless systems.
Boundary Condition
A limiting factor—human, technical, or procedural—that shapes what actions are possible within a session or tool.
Boundary-Induced Mode
An interaction mode that emerges in response to boundary pressure and constrains permissible behaviour, output, or continuation without introducing new content.
Boundary Pressure
The forcing effect created by approaching or exceeding a boundary condition, often triggering mode changes or simplifications.
Canvas Mode
A UI state in which a persistent document panel is used as an authoritative working surface, distinct from transient dialogue.
Cognate
A unit of meaning recognised as significant but not yet fully grounded, stabilised, or formalised. Cognates may exist as provisional terms, phrases, metaphors, or mental models and are intentionally tolerated during exploratory modelling. Cognates become grounded through boundary pressure, constraint extraction, or formalisation.
Cognitive Fact
A declarative unit of knowledge asserted as true for the purposes of a session or workstream, independent of how or when it was discovered.
Cognitive Memoisation
A knowledge‑engineering pattern in which stable facts, constraints, and invariants are externalised into authoritative artefacts to compensate for stateless interaction and unreliable recall.
Constraint
A bounding rule that limits permissible behaviour, syntax, or solution space. Constraints are enforced, not debated, once memoised.
Constraint Violation
An action or output that breaches a declared constraint, indicating either tool drift or an incomplete memoisation.
Dangling Cognate
A cognate that is deliberately carried forward without grounding or resolution, pending further boundary discovery or contextual clarification. Dangling cognates are treated as epistemic placeholders rather than defects.
Epistemic Load
The cognitive burden associated with stabilising meaning, constraints, and invariants under uncertainty. Distinct from cognitive load imposed by UI or tooling mechanics.
Grounding
The act of binding an abstract term or concept to a concrete definition, example, or invariant so its meaning is stable and shared.
Groundhog Day Pattern
A failure mode in which a human repeatedly rediscovers the same facts or decisions due to loss of session continuity.
Human Epistemic Authority
The principle that humans retain sole authority over what is accepted as fact, constraint, or invariant, and over what is persisted across sessions.
Hygiene (MediaWiki)
Authoring practices that minimise loss, corruption, or duplication of content during editing, cut‑and‑paste, and export.
Invariant
A property or decision treated as fixed within a defined scope. Invariants persist across sessions unless explicitly superseded.
Memoisation
The deliberate act of capturing knowledge in an external, durable form so it can be reused without rediscovery.
Memoisation Artefact
An externalised representation of knowledge (document, appendix, table, file) used as an authoritative reference point.
Mode
A constrained interaction state that alters behaviour, output form, or verbosity in response to boundary pressure.
NEXT
A deliberate termination marker indicating that a memoisation unit is complete and safe to resume from in a later session.
Ontology
A structured set of terms and relationships that define what concepts exist in a domain and how they relate.
Panel (Scratch Pad)
A transient UI surface used for staging, extracting, or reshaping content prior to durable memoisation.
Presentation-Layer Epistemic Hazard
A failure mode in which UI rendering, preview layers, or formatting substitutions obscure or distort source-level semantics, leading to loss or corruption of meaning.
Re‑activation
The process of re‑applying memoised artefacts at the start of a new session to immediately bound behaviour and reasoning.
Session
A bounded period of interaction with a tool or system whose continuity cannot be relied upon beyond its termination.
Session Boundary
A discontinuity after which continuity of memory, context, or tool state cannot be relied upon.
Stateless Interaction
An interaction mode in which prior dialogue, reasoning paths, or internal state are not assumed to persist.
Strong Signal Term
A noun or noun phrase that carries essential conceptual meaning within the Cognitive Memoisation framework.
Tool Friction
UI or system behaviour that impedes accurate authoring, extraction, or preservation of knowledge.
Cognitive
Pertaining to mental processes involved in understanding, reasoning, remembering, and decision‑making. In this work, used to denote human knowledge handling rather than neurological mechanisms.
Epistemic
Relating to knowledge itself—what is known, how it is known, and the justification or certainty of that knowledge. In LLM contexts, distinguishes statements about knowledge from statements about behaviour or process.
Episodic
Relating to specific events or experiences tied to time and sequence. Episodic content depends on remembered interaction history rather than stable facts or rules.
Semantic
Relating to meaning that is abstracted from specific events. Semantic knowledge persists independently of when or how it was learned.
Declarative
Expressed as explicit statements that can be recorded, inspected, and reused without relying on procedural steps or experience.
Procedural
Relating to how to perform actions or sequences of steps, rather than what is known as a fact or rule.
Context
Surrounding information that influences interpretation but is not itself authoritative or durable across session boundaries.
Recall
The ability to retrieve prior information from memory. In stateless systems, recall cannot be relied upon and must be substituted with memoisation artefacts.
Memory (Human)
Biological capacity to retain and retrieve information, subject to decay, distortion, and loss over time.
Memory (System)
Any mechanism by which a tool or system persists information across interactions. Not assumed in stateless LLM interaction unless explicitly externalised.
Grounded Term
A term whose meaning is explicitly bound to a definition, invariant, or example, reducing ambiguity across readers and sessions.

Appendix J — LLM Jargon Primer (One‑Paragraph)

Large Language Model (LLM) discourse frequently uses terms borrowed from cognitive science and philosophy that can appear opaque when encountered out of context.

In this paper, episodic refers to information tied to specific interactions or moments in time (for example, “earlier in this session”), while semantic refers to meaning that is abstracted from events and can persist independently.

Epistemic describes questions or statements about knowledge itself—what is known, how certain it is, and on what basis—rather than about actions or procedures.

Cognitive is used here in a practical, engineering sense to denote how humans handle understanding and meaning, not as a claim about neuroscience.

The distinction between these terms matters because LLM interactions are inherently stateless: episodic context cannot be relied upon, epistemic uncertainty must be made explicit, and durable semantic content must be externalised through memoisation artefacts.

This primer is intended to ground these commonly used terms so readers can interpret them consistently without prior exposure to LLM‑specific jargon.

Appendix K — Modelling Methodology for Incomplete and Refinable Models

This appendix describes the author’s modelling methodology, intended to teach a way of thinking rather than a fixed process. The method assumes that models are always incomplete, that early precision is often harmful, and that refinement occurs through successive constraint discovery rather than through exhaustive upfront specification.

K.1 Starting from an Incomplete Model

Work begins with a deliberately partial model that captures only what is currently stable and defensible. Unknowns, ambiguities, and assumptions are left explicit rather than resolved prematurely. The purpose of the initial model is orientation, not correctness.

Principle
An incomplete model is preferable to an over‑fitted one.
Practice
Record what is known, and explicitly mark what is not yet known.

K.2 Hypothesis and Provisional Structure

Early structures are treated as hypotheses. Headings, terms, and relationships are provisional and may be replaced without penalty. At this stage, the model is used to generate questions rather than answers.

Principle
Structure is a tool for thinking, not a commitment.
Practice
Expect to discard or rewrite early structure.

K.3 Refinement Through Boundary Discovery

Refinement occurs when boundary conditions are encountered. Boundaries may be cognitive (fatigue, recall limits), technical (UI limits, statelessness), or procedural (export formats, tooling behaviour). Each boundary reveals what must be constrained.

Principle
Boundaries reveal invariants.
Practice
When friction appears, extract the rule it implies.

K.4 Constraint Extraction and Memoisation

When a behaviour or rule proves repeatedly necessary, it is extracted and memoised as a constraint or invariant. Once recorded, it is no longer debated during routine work.

Principle
Repetition signals a missing constraint.
Practice
Convert repeated decisions into artefacts.

K.5 Iterative Model Tightening

As constraints accumulate, the model becomes tighter. The solution space narrows, ambiguity decreases, and later work proceeds faster. Refinement is monotonic: constraints are added, not weakened, unless explicitly superseded.

Principle
Models improve by subtraction of possibilities.
Practice
Prefer fewer degrees of freedom over added explanation.

K.6 Separation of Discovery and Result

The path taken to discover a model is not treated as part of the model itself. Only the resulting invariants, definitions, and constraints are preserved. Discovery narratives are discarded once they have served their purpose.

Principle
Outcomes matter more than journeys.
Practice
Do not preserve reasoning paths as dependencies.

K.7 Teaching the Method

This methodology can be taught by example rather than prescription. Learners are encouraged to work with partial models, encounter boundaries, extract constraints, and observe how progress accelerates as rediscovery is eliminated.

Principle
Method is best learned by doing.
Practice
Provide worked examples and anti‑patterns rather than rules.

This modelling methodology underpins Cognitive Memoisation but is not limited to LLM interaction. It applies equally to system design, knowledge engineering, and exploratory problem solving where completeness cannot be assumed at the outset.


Appendix L — Session Control Predicate (SESSIONCTL)

Scope and Intent

This appendix defines an explicit session-control predicate used within the Cognitive Memoisation (CM) pattern. It extends SESSIONCTL to require an explicit human response that grounds the session before CM processing is considered to have started. The mechanism is intentionally minimal and human-governed, and does not rely on conversational inference, persistence, or internal model state.

Term Introduced

SESSIONCTL

Definition

SESSIONCTL is a declarative session-initialisation token used to explicitly assert the purpose and control mode of an interaction with a stateless LLM. When used in conjunction with Cognitive Memoisation, SESSIONCTL gates the start of a CM session until the human has explicitly provided session grounding.

Form

SESSIONCTL: <asserted session purpose>

The remainder of the line is treated verbatim as the session’s governing assertion.

Session Start Requirement (CM Gating)

A Cognitive Memoisation session is considered uninitialised until a SESSIONCTL assertion has been explicitly provided by the human.

When a CM artefact is loaded or imported, the following procedural behaviour is required:

  • The CM load activates a session-initialisation requirement, but does not itself initialise the session
  • The LLM must behave as if the session state is UNINITIALISED
  • The LLM must request an explicit human-issued SESSIONCTL before proceeding with any CM-related interaction

Accordingly:

  • The start of a CM session requires an explicit human-issued SESSIONCTL
  • Absence of SESSIONCTL implies that no CM session is active
  • CM artefacts, predicates, or memoised facts must not be assumed to apply prior to this assertion

This gating is procedural and emulated, not technical:

  • It is not enforced by internal model state
  • It is not inferred from the CM content
  • It is not persisted across sessions unless reasserted

Its effectiveness derives from explicit human participation and discipline.

Semantics

When present, SESSIONCTL establishes:

  • Session Purpose
 The human-asserted statement describing what the session is about.
  • Epistemic Authority
 Authority remains entirely human. The assertion is not inferred, expanded, or negotiated.
  • Interaction Constraint (Implicit Mode)
 SESSIONCTL implies confirm-only mode until explicitly released or superseded.

Confirm-Only Mode Behaviour

While confirm-only mode is in force, the LLM may:

  • Acknowledge the SESSIONCTL assertion
  • Repeat it verbatim on request
  • Ask for confirmation if ambiguity exists

While confirm-only mode is in force, the LLM must not:

  • Elaborate
  • Speculate
  • Reframe
  • Extend scope
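
A procedural sketch of the gating described above, assuming a simple two-state emulation (UNINITIALISED, then confirm-only once SESSIONCTL is asserted); the class, state names, and response strings are illustrative assumptions, since the gating itself is human-governed rather than technically enforced.

# Procedural sketch only: SESSIONCTL gating is emulated by discipline,
# not enforced by internal model state. Names are illustrative.
from enum import Enum, auto
from typing import Optional

class CMState(Enum):
    UNINITIALISED = auto()   # CM artefacts may be loaded but do not yet govern
    CONFIRM_ONLY = auto()    # SESSIONCTL asserted; confirm-only mode in force

class CMSession:
    def __init__(self) -> None:
        self.state = CMState.UNINITIALISED
        self.purpose: Optional[str] = None

    def handle(self, human_input: str) -> str:
        if self.state is CMState.UNINITIALISED:
            if human_input.startswith("SESSIONCTL:"):
                self.purpose = human_input[len("SESSIONCTL:"):].strip()  # remainder is verbatim
                self.state = CMState.CONFIRM_ONLY
                return f"Acknowledged. Session purpose: {self.purpose}"
            return "No CM session is active. Please provide SESSIONCTL: <asserted session purpose>."
        # Confirm-only mode: acknowledge or ask for confirmation; do not elaborate.
        return f"Confirm-only mode. Governing assertion: {self.purpose}"

s = CMSession()
assert "No CM session is active" in s.handle("Continue where we left off.")
assert "Acknowledged" in s.handle("SESSIONCTL: Draft Appendix M revisions")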

Rationale

Stateless LLM interaction creates a recurrent failure mode in which the purpose of a session is difficult to establish without explicit enquiry. SESSIONCTL removes this ambiguity by requiring the human to ground the session explicitly before CM processing begins.

Appendix L — CBS Examples Derived from This Work (Only)

Example L.1: Boundary Pressure Mode

CBS: "We are operating under UI boundary pressure; keep responses compact and explicitly chunked."

Effect: The system limits response size and signals completion clearly.

Example L.2: NEXT Retrieval Mode

CBS: "Use NEXT-style continuation; do not re-explain prior material."

Effect: The system advances state without narrative recap.

Example L.3: MediaWiki Hygiene Mode

CBS: "Output must follow normative MediaWiki syntax; preserve heading and list integrity."

Effect: The system prioritises source correctness over rendered appearance.

Example L.4: Scratch-Pad Canvas Mode

CBS: "Canvas is a scratch pad; content may be deleted without ceremony."

Effect: The system avoids treating canvas content as authoritative or final.

Example L.5: Anti-Groundhog-Day Mode

CBS: "Assume prior concepts are known via artefact; avoid rediscovery dialogue."

Effect: The system works from memoised constraints instead of conversational recall.

Example L.6: Human Authority Emphasis

CBS: "Do not infer preferences; wait for explicit instruction."

Effect: The system suppresses extrapolation and suggestion.

Anti-Example L.7: Illicit CBS

"Remember how we did this last time."

Why invalid: This attempts to smuggle memory through CBS.

Anti-Example L.8: CBS as Fact

"The correct way to work is always to use NEXT."

Why invalid: CBS are situational, not normative truths.

Side Note: MediaWiki Skin Interaction (Vector 2022)

This work was authored and edited under the Vector 2022 skin, which introduces a dynamic sidebar-style table of contents intended to resemble modern collaboration tools (e.g. Confluence). While visually appealing to some users, this behaviour is implemented through skin-level CSS and JavaScript rather than through wikitext semantics.

Consequences observed during authoring include:

  • Non-deterministic TOC placement and truncation during long edits
  • Duplication or reordering of headings under heavy edit/preview cycles
  • Increased friction when working with large, appendix-heavy technical documents

These behaviours are tooling artefacts, not failures of document structure or author intent. For specification-style writing and knowledge-engineering work, classic inline TOC behaviour is more predictable and robust. Authors may need to temporarily disable or override Vector 2022 TOC features, or edit in source mode, despite the latter being undesirable.

This note is included to distinguish authoring constraints imposed by the UI from the conceptual model presented in this paper.

Appendix M — MWDUMP: Governing Invariant Store for Cognitive Memoisation

M.1 Purpose and Scope

MWDUMP (Memoised Working Dump) is the canonical serialisation format used by the Cognitive Memoisation (CM) pattern to externalise invariants, constraints, and governance rules that are permitted to persist across stateless LLM sessions.

MWDUMP exists to replace recall with reference.

MWDUMP is authoritative. It governs what reasoning, behaviour, and representations are permitted in subsequent interactions.

MWDUMP is not a convenience mechanism. It is a governance artefact.

This appendix makes explicit a deliberate design choice in Cognitive Memoisation: while MWDUMP can be rendered in a formal, knowledge-engineering style with explicit predicates, quantifiers, and violation semantics, its authoritative form remains precise natural language. The KE-style representation is illustrative rather than normative. Both forms encode the same invariants and governance semantics; however, the English form optimises for human epistemic authority, inspectability, and editorial control, whereas the KE-style form optimises for formal clarity at the cost of accessibility. Cognitive Memoisation therefore treats MWDUMP as a governing artefact, not a machine-interpreted logic.

M.2 What MWDUMP Is Not

MWDUMP is NOT:

  • a transcript
  • a summary of dialogue
  • a reasoning trace
  • a narrative of discovery
  • an episodic record
  • a memory surrogate
  • a hidden or implicit state mechanism

Any artefact containing chronology, dialogue, reasoning paths, or experiential framing is NOT a valid MWDUMP.

Treating MWDUMP as memory or recall is a category error and a governance violation.

M.3 Eligibility Rules (Normative)

Only invariant content is eligible for inclusion in MWDUMP.

Eligible content MUST satisfy ALL of the following:

  • Independent of time, session, or interaction order
  • Independent of discovery path or reasoning process
  • Expressible as a declarative statement
  • Stable across future sessions unless explicitly revised

Ineligible content includes (non-exhaustive):

  • conversational references
  • examples framed as experience
  • explanations of how something was realised
  • emotional, evaluative, or anthropomorphic language
  • “we discussed”, “earlier”, “previously”, or similar constructs
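
As a non-normative aid, the sketch below screens candidate statements against a small, deliberately non-exhaustive set of episodic markers drawn from the list above; the marker list and function name are assumptions, and human review remains authoritative.

# Illustrative screening sketch: flag content that is ineligible for MWDUMP
# because it carries episodic or conversational framing. Non-exhaustive.
INELIGIBLE_MARKERS = ("we discussed", "earlier", "previously", "as you recall", "last session")

def is_eligible_for_mwdump(statement: str) -> bool:
    """True if the statement shows no obvious episodic framing; human review remains final."""
    lowered = statement.lower()
    return not any(marker in lowered for marker in INELIGIBLE_MARKERS)

assert is_eligible_for_mwdump("Hidden memory mechanisms are rejected for governance reasons.")
assert not is_eligible_for_mwdump("As we discussed earlier, MediaWiki is the canonical authoring system.")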

M.4 Authority and Governance Model

MWDUMP governs permissible reasoning.

When an MWDUMP is activated:

  • The model MUST treat all contained constraints and invariants as authoritative
  • The model MUST NOT contradict, reinterpret, or soften memoised constraints
  • The model MUST operate strictly within the declared bounds

Absence of MWDUMP implies absence of continuity.

Continuity is granted only by explicit provision of an MWDUMP artefact.

MWDUMP does not inform behaviour; it governs behaviour.

M.5 Structural Requirements

MWDUMP MUST be:

  • Explicitly structured
  • Human-curated
  • Inspectable in source form
  • Versioned through artefact revision, not dialogue

MWDUMP SHOULD be serialised in a format that preserves structure and semantics under copy, paste, and export (e.g. MediaWiki source).

Semantic meaning MUST NOT depend on rendering layers, previews, or UI affordances.

M.6 Interaction Semantics

MWDUMP is applied by re-activation, not recall.

When supplied at session start:

  • Constraints apply immediately
  • No reference to prior dialogue is permitted
  • No assumption of remembered context is allowed

The model MAY request clarification only if the MWDUMP is ambiguous or internally inconsistent.

The model MUST NOT infer missing content.

M.7 Supersession and Revision

MWDUMP entries persist until explicitly superseded.

Revision rules:

  • Supersession MUST be explicit
  • Silent drift is prohibited
  • Newer entries MAY replace older ones only when stated

Versioning replaces memory.
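
A minimal sketch of explicit supersession, assuming each entry carries an identifier and an optional, explicitly stated supersedes reference; the field names and resolution rule are illustrative assumptions consistent with the prohibition on silent drift.

# Illustrative sketch: supersession is explicit, never silent drift.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class MWDumpEntry:
    entry_id: str
    text: str
    supersedes: Optional[str] = None   # explicit reference, or None for a new invariant

def effective_entries(entries: List[MWDumpEntry]) -> Dict[str, MWDumpEntry]:
    """Resolve the currently governing entries; an entry is replaced only when supersession is stated."""
    current: Dict[str, MWDumpEntry] = {}
    for entry in entries:
        if entry.supersedes is not None:
            current.pop(entry.supersedes, None)   # the superseded entry ceases to govern
        current[entry.entry_id] = entry
    return current

history = [
    MWDumpEntry("INV-1", "MediaWiki is the canonical authoring system for this work."),
    MWDumpEntry("INV-1a", "MediaWiki is canonical; exceptions permitted for diagrams.", supersedes="INV-1"),
]
governing = effective_entries(history)
assert "INV-1" not in governing and "INV-1a" in governing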

M.8 Relationship to SESSIONCTL

Loading an MWDUMP does not initialise a CM session.

SESSIONCTL is required to explicitly assert session purpose and activate CM governance.

Until SESSIONCTL is asserted:

  • The session is considered UNINITIALISED
  • MWDUMP constraints MUST NOT be assumed to apply

MWDUMP defines what may govern. SESSIONCTL declares that governance is in force.

M.9 Relationship to NEXT

MWDUMP entries are bounded by cognitive units.

NEXT terminates a memoisation unit and asserts closure.

Content beyond a NEXT marker constitutes a new unit and MUST NOT retroactively modify prior MWDUMP content unless explicitly revised.

Continuation beyond NEXT without re-authorisation is a governance violation.

M.10 Minimal Illustrative MWDUMP Fragment

The following illustrates valid MWDUMP content form only:

  • LLM inference is stateless across sessions
  • Continuity arises only through explicit external artefacts
  • Hidden memory mechanisms are rejected for governance reasons
  • Boundary markers (e.g. NEXT) MUST be treated as authoritative
  • MediaWiki is the canonical authoring system for this work

These statements are declarative, invariant, and independent of discovery narrative.

M.11 Security and Auditability

MWDUMP is designed to:

  • Prevent hidden state
  • Eliminate implicit memory
  • Preserve auditability
  • Support explicit governance

Any system behaviour that relies on information not present in MWDUMP or explicitly provided in-session violates the Cognitive Memoisation model.

M.12 Normative Closure

MWDUMP is the authoritative external artefact for human-curated Cognitive Memoisation.

If MWDUMP works, progress is preserved. If MWDUMP fails, continuity is intentionally lost.

This is not a limitation. It is the design invariant.

M.13 MWDUMP: Knowledge-Engineering (KE) Style Representation

The following illustrates the contrast in expressing MWDUMP using a KE-style representation, which also makes explicit the constraints and semantics that are implicit in the English definition, but at the cost of accessibility.

Cognitive Memoisation deliberately uses precise natural language as its primary form, with MWDUMP acting as the governing artefact rather than a machine-interpreted logic.

The two forms describe the same structure: the original form optimises for human authority, the other for formal clarity.

Type: Authoritative Invariant Store

Contains:
  - Invariant(Fact)
  - Invariant(Constraint)
  - Invariant(Definition)
  - Invariant(Decision[CLOSED])

Excludes:
  - Narrative
  - Episode
  - Chronology
  - ReasoningTrace
  - DialogueContext

Semantics:
  ∀ Session S:
    Activate(MWDUMP) ⇒
      Govern(S.ReasoningSpace)
      ∧ Govern(S.OutputSpace)

ViolationCondition:
  If Output ∉ Permitted(MWDUMP)
    ⇒ GovernanceViolation

Persistence:
  MWDUMP persists externally.
  Session state does not persist.
