= Cognitive Memoisation: A Framework for Human Cognition =

* '''Title:''' UI Boundary Friction as a Constraint on Round-Trip Knowledge Engineering
* '''Author:''' Ralph B. Holland
* '''Version:''' 1.0.0
* '''Publication Date:''' 2025-12-23
* '''Affiliation:''' Arising Technology Systems Pty Ltd
* '''Contact:''' ralph.b.holland [at] gmail.com
* '''Provenance:''' This is an authored paper maintained as a MediaWiki document as part of the category:Cognitive Memoisation corpus.
* '''Status:''' Final

== Introductory Position ==

This paper serves as the primary introduction and conceptual anchor for the Cognitive Memoisation (CM) corpus.

Cognitive Memoisation is a human-governed knowledge-engineering framework designed to preserve conceptual memory across interactions with stateless Large Language Models (LLMs). CM helps humans avoid repeated rediscovery (“Groundhog Day”) and carry forward both resolved knowledge and unresolved cognition (Dangling Cognates).

CM operates entirely outside model-internal memory, leveraging the power of LLMs to infer postulates and perform stochastic pattern matching, all under the curation of the human controlling the CM session.

The stateless nature of LLMs (such as ChatGPT) is an intentional design choice made for human safety and privacy. This design ensures that no personal or contextual information is retained across sessions, in line with OpenAI's commitment to data protection. The same safety mechanism prevents the model from performing introspection or acquiring agency, ensuring that it does not evolve autonomously or retain knowledge beyond its interactions.

Cognitive Memoisation (CM) bridges this lack of memory by enabling humans to externalise cognitive artefacts, preserving knowledge over time. This allows continuous human reasoning while keeping the interaction sandboxed on both sides: the human and the model each remain within their own security boundary. Through CM, humans can elaborate on unresolved cognition (Dangling Cognates) and carry forward insights and propositions, while the LLM remains within its functional boundaries, executing only permitted tasks and with no capacity to alter its inherent state or memory.

This document establishes the rationale, scope, and interpretive framework required to understand Cognitive Memoisation and its role in enabling human-centric knowledge workflows with stateless LLMs.

== Normative CM Paper References ==

The following documents constitute the authoritative CM corpus. Titles are normative MediaWiki page names and must not be paraphrased.

* [[ChatGPT UI Boundary Friction as a Constraint on Round-Trip Knowledge Engineering]]
* [[Reflexive Development of Cognitive Memoisation: A Round-Trip Cognitive Engineering Case Study]]
* [[Reflexive Development of Cognitive Memoisation: Dangling Cognates as a First-Class Cognitive Construct]]
* [[Cognitive Memoisation and LLMs: A Method for Exploratory Modelling Before Formalisation]] (Primary paper)

== Dimensions Addressed in This Paper ==

The following dimensions are key to understanding the problems that Cognitive Memoisation (CM) addresses, especially in the context of stateless Large Language Models (LLMs) and the human-managed preservation of cognitive state:

=== 1. Statelessness and Memory Management in LLMs ===

* '''Core Concept:''' Addressing the statelessness of LLMs and the challenge of managing conceptual memory externally.
* '''Dimension Addressed:''' How can cognitive memory be maintained outside the LLM model to overcome statelessness, and how does CM provide this functionality while respecting LLM safety constraints?
* '''Relevant Papers:'''
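
By way of illustration only (this is not part of the normative corpus), the sketch below shows one way a brand-new, stateless session could be re-primed from an externally held store. The file name <code>cm_memory.json</code> and the <code>call_llm</code> callable are hypothetical placeholders; CM does not prescribe any particular interface.

<syntaxhighlight lang="python">
# Illustrative sketch only: re-priming a cold, stateless session from a
# human-curated external store.  The file name and call_llm are hypothetical.
import json
from pathlib import Path

MEMORY_FILE = Path("cm_memory.json")  # hypothetical externalised store


def load_conceptual_memory() -> list:
    """Load previously externalised artefacts; the model itself stores nothing."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text(encoding="utf-8"))
    return []


def build_priming_preamble(artefacts: list) -> str:
    """Render curated artefacts into a preamble for a brand-new session."""
    lines = ["Previously established, human-curated context:"]
    for a in artefacts:
        marker = "RESOLVED" if a.get("status") == "resolved" else "DANGLING"
        lines.append(f"- [{marker}] {a['label']}: {a['summary']}")
    return "\n".join(lines)


def start_new_session(question: str, call_llm) -> str:
    """Every session starts cold; the human supplies the memory, not the model."""
    preamble = build_priming_preamble(load_conceptual_memory())
    return call_llm(preamble + "\n\nCurrent question: " + question)
</syntaxhighlight>

The point of the sketch is the division of labour: the store is read and written by human-governed tooling, never by the model.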

=== 2. Externalisation of Cognitive Artefacts ===

* '''Core Concept:''' The process of externalising concepts, facts, inferences, and unresolved cognition into structured, durable formats.
* '''Dimension Addressed:''' How can cognitive content be externalised and stored in a manner that ensures its continued use across sessions, without being lost due to session termination or model limitations?
* '''Relevant Papers:'''
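
A minimal sketch of externalisation follows. It assumes a plain JSON file as the durable format purely so the example is self-contained; the corpus itself defines the MWDUMP and TMLDUMP formats for this purpose, and the field names below are assumptions rather than the normative schema.

<syntaxhighlight lang="python">
# Illustrative sketch only: writing cognitive artefacts to a durable file so
# they outlive the LLM session.  JSON stands in for the corpus dump formats.
import json
from dataclasses import dataclass, asdict
from datetime import date
from pathlib import Path


@dataclass
class Artefact:
    label: str     # short human-readable name
    kind: str      # "concept", "fact", "inference", or "dangling-cognate"
    summary: str   # curated statement of the artefact
    status: str    # "resolved" or "dangling"
    recorded: str  # ISO date of externalisation


def externalise(artefacts, path: Path) -> None:
    """Serialise artefacts to disk under human control."""
    path.write_text(json.dumps([asdict(a) for a in artefacts], indent=2),
                    encoding="utf-8")


if __name__ == "__main__":
    externalise(
        [Artefact("Groundhog Day effect", "concept",
                  "Repeated rediscovery caused by model statelessness.",
                  "resolved", date.today().isoformat())],
        Path("cm_memory.json"))
</syntaxhighlight>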

=== 3. Round-Trip Knowledge Engineering (RTKE) ===

* '''Core Concept:''' The cyclical process of taking externalised cognitive artefacts, reintegrating them into reasoning processes, and ensuring that knowledge evolves without loss.
* '''Dimension Addressed:''' How can externalised knowledge be reused, refined, and preserved over time through iterative processes, and how does CM facilitate this while maintaining consistency?
* '''Relevant Papers:'''
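
The sketch below illustrates one round trip against the same hypothetical JSON store: an artefact is read back, refined by the human after a session, and re-exported with its earlier wording retained. The <code>refine</code> helper is illustrative, not a prescribed CM operation.

<syntaxhighlight lang="python">
# Illustrative sketch only: one round trip of externalise -> reintegrate -> refine.
import json
from pathlib import Path

STORE = Path("cm_memory.json")  # same hypothetical store as the earlier sketches


def refine(label: str, refined_summary: str) -> None:
    """Read an artefact back, refine its summary, and re-export it with history kept."""
    artefacts = json.loads(STORE.read_text(encoding="utf-8"))
    for a in artefacts:
        if a["label"] == label:
            # Keep the prior wording so the evolution of the idea stays visible.
            a.setdefault("history", []).append(a["summary"])
            a["summary"] = refined_summary
            break
    else:
        # Refining an unknown label is an error: nothing is silently invented.
        raise KeyError(f"No externalised artefact named {label!r}")
    STORE.write_text(json.dumps(artefacts, indent=2), encoding="utf-8")
</syntaxhighlight>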

=== 4. Dangling Cognates and Unresolved Cognition ===

* '''Core Concept:''' Managing cognitive elements that are under construction or incomplete, allowing them to participate in reasoning without forcing premature resolution.
* '''Dimension Addressed:''' How can unresolved cognitive elements (Dangling Cognates) be preserved, tracked, and used safely in ongoing reasoning, without prematurely solidifying them?
* '''Relevant Papers:'''
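
As a non-normative sketch, a Dangling Cognate might be represented as a record that can be elaborated and cited before it is resolved. The field names below are assumptions for illustration, not the corpus schema.

<syntaxhighlight lang="python">
# Illustrative sketch only: an unresolved cognitive element that participates in
# reasoning without being forced to a premature resolution.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DanglingCognate:
    label: str
    working_notes: List[str] = field(default_factory=list)  # partial elaborations
    resolved: bool = False
    resolution: Optional[str] = None

    def elaborate(self, note: str) -> None:
        """Add detail without forcing the cognate to resolve."""
        self.working_notes.append(note)

    def resolve(self, statement: str) -> None:
        """Only an explicit human decision turns the cognate into a resolved fact."""
        self.resolved = True
        self.resolution = statement


dc = DanglingCognate("boundary friction metric")
dc.elaborate("Perhaps measured as facts that must be re-established per session.")
# dc can already be referenced in reasoning even though dc.resolved is still False.
</syntaxhighlight>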

=== 5. Constraints and Knowledge Integrity ===

* '''Core Concept:''' Defining and applying constraints to preserve the integrity of cognitive memory and prevent “Groundhog Day” rediscovery.
* '''Dimension Addressed:''' How can constraints be implemented to ensure that knowledge persists across sessions without redundancy, and how can it be efficiently reused?
* '''Relevant Papers:'''
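
One possible integrity constraint is sketched below: consult the external store before re-deriving anything, so resolved knowledge is reused rather than rediscovered. The <code>answer_or_reuse</code> helper and its keying scheme are hypothetical.

<syntaxhighlight lang="python">
# Illustrative sketch only: a memoisation guard against "Groundhog Day" rediscovery.
import json
from pathlib import Path

STORE = Path("cm_memory.json")  # hypothetical externalised store


def answer_or_reuse(question_key: str, derive_again) -> str:
    """Reuse a memoised answer when one exists; derive and record it otherwise."""
    artefacts = json.loads(STORE.read_text(encoding="utf-8")) if STORE.exists() else []
    for a in artefacts:
        if a["label"] == question_key and a.get("status") == "resolved":
            return a["summary"]  # constraint: resolved knowledge is never re-derived
    summary = derive_again(question_key)  # e.g. a fresh, human-curated LLM session
    artefacts.append({"label": question_key, "status": "resolved", "summary": summary})
    STORE.write_text(json.dumps(artefacts, indent=2), encoding="utf-8")
    return summary
</syntaxhighlight>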

=== 6. Human Curated Knowledge vs. Model State ===

* '''Core Concept:''' Differentiating between human-curated knowledge and LLM model state, ensuring that cognitive memory and decision-making remain under human control.
* '''Dimension Addressed:''' How can the human maintain full authority over cognitive content while ensuring that the stateless nature of LLMs is respected?
* '''Relevant Papers:'''
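
Finally, a minimal sketch of the human-authority boundary: model output is staged as a proposal and enters the curated store only with explicit human approval. The <code>Proposal</code> type and <code>review</code> gate are illustrative assumptions, not a prescribed tool.

<syntaxhighlight lang="python">
# Illustrative sketch only: model suggestions are proposals; the curated store is
# written solely under explicit human sign-off, keeping authority and provenance
# with the human.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Proposal:
    text: str
    source: str = "LLM session"  # provenance of the suggestion


curated_store: list = []  # human-owned; the model never writes to it directly


def review(proposal: Proposal, approved_by: Optional[str]) -> bool:
    """Commit only if a named human signs off; otherwise the proposal is discarded."""
    if approved_by is None:
        return False
    curated_store.append({
        "summary": proposal.text,
        "provenance": f"{proposal.source}; approved by {approved_by}",
    })
    return True


review(Proposal("CM artefacts should record their revision history."), "the human curator")
</syntaxhighlight>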

= Case Study 1: [[ChatGPT UI Boundary Friction as a Constraint on Round-Trip Knowledge Engineering]] =

== Overview ==
This case study addresses the '''boundary friction''' that arises when using '''ChatGPT''' (or any stateless LLM) within the '''Cognitive Memoisation (CM)''' framework. The friction is caused by the inherent '''statelessness''' of the model, which requires '''human-curated externalisation''' of knowledge to overcome the limits of '''model memory'''.

== Problem Description ==
* '''Stateless LLMs''' do not retain memory across sessions, so knowledge is lost when a new interaction begins.
* '''Boundary friction''' emerges when the human must re-establish previously established facts or reasoning steps that the model does not store.

== CM Solution ==
* '''Cognitive Memoisation''' externalises cognitive artefacts, such as concepts, facts, and Dangling Cognates, into '''MWDUMP''' or '''TMLDUMP''' formats.
* These externalised artefacts can then be '''re-ingested''' into the reasoning process in future sessions, allowing '''continuity of thought''' and preventing the '''Groundhog Day effect'''.

== Impact and Learnings ==
* The case study illustrates how '''CM''' bridges the memory gap in stateless systems and how the '''externalisation''' and '''re-integration''' process reduces friction.
* It also demonstrates the importance of '''human curation''' in preserving '''conceptual memory''' across stateless model interactions.

----

= Case Study 2: [[Reflexive Development of Cognitive Memoisation: A Round-Trip Cognitive Engineering Case Study]] =

== Overview ==
This case study explores '''reflexive development''' within '''Cognitive Memoisation''': the '''iterative process''' by which '''cognitive artefacts''' are externalised, refined, and reintegrated into reasoning cycles through '''Round-Trip Knowledge Engineering (RTKE)'''.

== Problem Description ==
* As human knowledge evolves and new insights are gained, previously established concepts and reasoning steps need to be updated or expanded.
* The '''stateless nature''' of LLMs means this process often involves reintroducing cognitive elements, leading to redundancy and wasted effort.

== CM Solution ==
* '''Round-Trip Knowledge Engineering''' is facilitated by '''CM''' through the preservation of '''resolved facts''' and the ability to '''track and elaborate on Dangling Cognates'''.
* As humans develop new insights, they can '''update''' and '''refine''' their conceptual memory while the model's outputs remain within its '''stateless boundaries'''.

== Impact and Learnings ==
* The case study shows how '''CM''' supports '''knowledge evolution''' while respecting '''LLM safety boundaries''', allowing continuous human-driven '''knowledge development'''.
* It highlights the value of '''reflexive development''', where insights can be iterated upon without losing previously acquired knowledge.

----

= Case Study 3: [[Reflexive Development of Cognitive Memoisation: Dangling Cognates as a First-Class Cognitive Construct]] =

== Overview ==
This case study focuses on the treatment of '''Dangling Cognates''': partially defined, evolving cognitive elements that are preserved and tracked so they can participate in reasoning without being forced to premature resolution.

== Role of This Paper in the Corpus ==

This paper is authoritative for the following assertions:

* Cognitive Memoisation preserves conceptual-level memory, not dialogue or model state.
* CM exists specifically because LLMs are stateless by architecture.
* CM externalises cognition into durable artefacts to enable Round-Trip Knowledge Engineering.
* CM supports the deliberate carriage of unresolved cognition (Dangling Cognates).
* CM places authority, curation, and provenance entirely with the human.

All listed papers assume this framing and should be interpreted in relation to it.
