Let’s Build a Ship — Cognitive Memoisation for Governing Knowledge in Human–AI Collaboration
Abstract
Large language models and other probabilistic systems are increasingly used as collaborators in knowledge-intensive work. While these systems can generate fluent and contextually rich outputs, they are inherently stateless, non-authoritative, and prone to subtle forms of drift, rediscovery, and silent reinterpretation. These failure modes are not primarily errors of intelligence or capability; they are failures of knowledge governance.
This paper introduces Cognitive Memoisation (CM) and presents the first public articulation of CM-2, a governance-oriented framework for human–AI collaboration. Using the metaphor of collaborative shipbuilding, the paper illustrates why complex work fails when assumptions, context, and prior agreements are not explicitly externalised and governed. CM-2 formalises the separation between continuity and authority by defining memoised objects for grounded material, attachment objects for governing facts, bounded Universes of Discourse for parallel reasoning, and an explicit human-initiated promotion boundary for durable knowledge.
Rather than proposing new model architectures or vendor-dependent memory mechanisms, CM-2 focuses on disciplined externalisation, idempotent identity, and explicit human authority as the foundations for reliable knowledge work. The paper positions CM-1 as an immediately applicable practice, while CM-2 establishes the normative structure required for interoperable tooling and future implementations.
This contribution establishes the scope, invariants, and practical utility of Cognitive Memoisation while deliberately leaving full formalisation and vendor alignment to subsequent work, allowing the method to be practised openly ahead of implementation support.
1 Introduction
Modern knowledge work increasingly involves collaboration between humans and probabilistic systems such as large language models. These systems are capable of generating fluent text, structured reasoning, and domain-specific material at scale, yet they remain fundamentally stateless, non-authoritative, and context-bounded. As a result, many failures observed in practice are not dramatic errors or obvious hallucinations, but quieter breakdowns: rediscovery of prior work, silent reinterpretation of agreements, loss of contextual boundaries, and gradual semantic drift.
These failures are often misdiagnosed as shortcomings of intelligence, memory, or model capability. In practice, they are failures of knowledge governance. Humans assume continuity where none exists, machines produce plausible output without authority, and neither side has a clear mechanism for deciding what has been agreed, what remains tentative, and what must persist beyond the current interaction.
To illustrate the problem, consider the metaphor of building a ship. Shipbuilding is not primarily an exercise in intelligence or creativity; it is an exercise in coordination under constraint. Multiple teams work in parallel, each within its own local context, yet their work must eventually align. Assumptions that remain implicit, drawings that are not externalised, or changes that are not governed quickly become expensive failures. No amount of individual competence compensates for poor control of what has been decided, what remains provisional, and what has been committed.
Human–AI collaboration exhibits the same structural risks. Conversations feel continuous even when they are not. Generated material feels authoritative even when it is not. Context appears shared even when it is only locally inferred. Without explicit mechanisms for externalisation, scoping, and promotion, collaborators are forced to rely on memory, inference, or trust where governance is required.
This paper introduces Cognitive Memoisation (CM) as a practice and governance framework for addressing these problems. CM does not attempt to improve model intelligence, provide artificial memory, or enforce global ontologies. Instead, it focuses on disciplined externalisation and explicit authority. Knowledge is treated as something that must be deliberately captured, bounded, and promoted, rather than implicitly accumulated.
The contribution of this paper is the first public articulation of CM-2, the normative layer of Cognitive Memoisation. CM-2 defines a small set of invariants that separate continuity from authority and practice from commitment. It introduces memoised objects for grounded material, attachment objects for governing facts, bounded Universes of Discourse for parallel reasoning, and an explicit human-initiated promotion boundary for durable knowledge. These elements are intentionally conservative and designed to be usable without specialised tooling.
At the same time, CM-1, the practice layer of Cognitive Memoisation, can be adopted immediately. CM-1 consists of habits and disciplines that reduce rediscovery and drift through explicit capture and stand-down, independent of vendor support or system integration. CM-2 formalises these practices sufficiently to enable interoperability, future tooling, and shared understanding, while deliberately avoiding premature formalisation.
The remainder of this paper elaborates these ideas in a pragmatic order. Section 2 examines common failure modes in human–AI knowledge work. Section 3 introduces the core CM concepts and their intended roles. Section 4 illustrates how these concepts operate together through practical scenarios. Section 5 discusses scope, non-goals, and limitations. The paper concludes by outlining how CM may be practised today and extended incrementally as tools and implementations mature.
2 Failure Modes in Human–AI Knowledge Work
Many of the most damaging failures in human–AI collaboration are subtle rather than catastrophic. Outputs appear fluent, reasoning seems coherent, and conversations feel continuous. Yet over time, work degrades: prior conclusions are rediscovered, assumptions quietly change, and context boundaries blur. These failures are often tolerated until they accumulate into confusion or rework.
This section describes common failure modes observed in practice. These failures are not specific to any particular model or vendor, nor are they primarily technical defects. They arise from structural mismatches between how humans assume knowledge persists and how probabilistic systems actually operate.
2.1 Rediscovery and the Illusion of Continuity
Large language models do not retain memory across interactions unless explicitly supported by external mechanisms. Despite this, humans naturally assume continuity, especially when conversational tone and vocabulary are consistent. As a result, work that has already been completed is revisited, re-derived, or re-negotiated without awareness that it has occurred before.
This phenomenon, informally described as "Groundhog Day", is not merely inefficient. It erodes confidence in prior decisions and encourages informal workarounds such as repetition, excessive prompting, or reliance on personal memory rather than shared artefacts.
2.2 Silent Semantic Drift
Even when material is revisited rather than rediscovered, its meaning may shift subtly over time. A term introduced informally may later be treated as defined. A speculative idea may harden into an assumption. A local explanation may be reused outside its original context.
Because these shifts occur incrementally and without explicit checkpoints, they are rarely detected until disagreement or error surfaces. The system appears cooperative and responsive throughout, masking the fact that meaning has changed.
2.3 Conflation of Fluency with Authority
Generated outputs are often well-structured, confident in tone, and responsive to instruction. This fluency can be mistaken for authority. Humans may treat generated material as agreed, settled, or endorsed simply because it appears complete.
In reality, probabilistic systems do not assert authority. They produce plausible continuations conditioned on input. Without explicit markers distinguishing tentative material from committed knowledge, collaborators are left to infer status based on style rather than governance.
2.4 Context Boundary Collapse
Complex work frequently involves parallel lines of reasoning, alternative hypotheses, or different audiences. Humans manage this by maintaining mental boundaries regarding what applies where and under what assumptions. In conversational systems, these boundaries are easily crossed.
Material introduced for illustration may later be reused as fact. Constraints scoped to a particular discussion may be applied globally. Without explicit containment, contexts bleed into one another, producing confusion rather than synthesis.
2.5 Implicit Promotion and Accidental Canon
Perhaps the most dangerous failure mode occurs when material comes to be treated as durable knowledge without any explicit decision to that effect. Notes become references. Draft explanations become guidance. Generated summaries become policy.
This implicit promotion bypasses human authority. It replaces deliberate commitment with gradual assumption. Once this occurs, correcting errors becomes socially and cognitively difficult, even if the underlying material was never intended to be authoritative.
2.6 Why These Failures Persist
These failure modes persist because neither humans nor machines are equipped, by default, with mechanisms for explicit knowledge governance. Humans rely on memory and shared understanding; machines rely on inference and pattern continuation. In the absence of deliberate externalisation and promotion boundaries, both sides compensate informally.
The result is not dramatic failure, but slow erosion of clarity.
3 Cognitive Memoisation
Cognitive Memoisation (CM) is a governance-oriented approach to human–AI knowledge work. It is concerned not with how systems generate content, but with how knowledge is externalised, scoped, and promoted in environments where reasoning is probabilistic and memory is not intrinsic.
CM begins from a simple observation: humans routinely assume continuity, authority, and shared understanding in collaborative work, while probabilistic systems provide none of these guarantees. Cognitive Memoisation addresses this mismatch by making continuity explicit, authority deliberate, and durability conditional.
3.1 Separation of Continuity and Authority
A central principle of CM is the separation between continuity and authority. Continuity refers to the ability to recognise and reuse material across interactions, preventing unnecessary rediscovery. Authority refers to the human decision to stand behind material as durable knowledge.
In CM, continuity may be supported automatically, but authority is never inferred. No artefact becomes authoritative without explicit human action. This separation allows work to proceed fluidly without silently accumulating commitments.
3.2 Memoised Objects (EO)
CM introduces memoised objects, referred to as EO, as the first-class units of continuity. An EO captures grounded material in the form it was introduced: prose, logic, constraints, rules, examples, or other structured or unstructured content. EO preserve locality and provenance, avoiding the fragmentation that occurs when material is decomposed into abstract conceptual units.
In CM, memoisation is used in its original technical sense: the automatic caching of results to avoid unnecessary recomputation. EO are therefore automatically memoised in a client-side cache to support continuity and prevent rediscovery across interactions. This memoisation is non-authoritative and may be ephemeral or persistent, but it is always subordinate to explicit human governance. Memoisation enables reuse without implying memory, agreement, or durability.
When externalised, EO identities are idempotent: repeated recognition of the same EO resolves to the same object rather than creating duplicates. EO are fact-like in the structural sense, but they do not assert truth or authority.
EO are dynamically extensible. Their internal structure is not constrained by a rigid schema. Interoperability is achieved through serialisation conventions, stable formats, and plain-language parsing rather than fixed ontologies.
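To make these properties concrete, the following sketch shows one possible shape for an EO and its client-side memoisation cache in Python. The field names, the content-hash identity scheme, and the cache interface are illustrative assumptions rather than part of the CM-2 definition; they merely show how idempotent identity and non-authoritative caching can coexist.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class MemoisedObject:
    """Grounded material captured in the form it was introduced (an EO)."""

    content: str            # prose, logic, constraints, rules, examples, ...
    provenance: str         # where and when the material entered the session
    extensions: tuple = ()  # dynamically extensible; no rigid schema is imposed

    @property
    def identity(self) -> str:
        # Idempotent identity: the same externalised content always resolves to
        # the same identifier, so repeated recognition never creates duplicates.
        canonical = json.dumps(
            {"content": self.content, "extensions": list(self.extensions)},
            sort_keys=True,
        )
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


class MemoisationCache:
    """Non-authoritative, client-side cache supporting continuity only."""

    def __init__(self) -> None:
        self._store: dict[str, MemoisedObject] = {}

    def memoise(self, eo: MemoisedObject) -> MemoisedObject:
        # Return the previously cached object if this EO has been seen before;
        # otherwise cache it. Nothing here implies memory, agreement, or durability.
        return self._store.setdefault(eo.identity, eo)
```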
3.3 Attachment Objects (EA)
CM distinguishes EO from attachment objects, referred to as EA. EA are non-authoritative artefacts that qualify, constrain, or contextualise EO. They express governing facts such as scope, provenance, lifecycle state, usage cautions, or interpretive notes.
EA exist to regulate how EO are interpreted and used without polluting the EO themselves. They are context-bound, disposable, and never promoted independently. If an attachment acquires enduring significance, it is promoted explicitly into an EO by human action.
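Continuing the sketch above, an EA can be represented as a small record that references an EO by identity rather than modifying it. Again, the field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass


@dataclass
class AttachmentObject:
    """A non-authoritative governing fact about an EO (an EA)."""

    target_eo_id: str   # identity of the EO being qualified
    kind: str           # e.g. "scope", "provenance", "lifecycle", "caution"
    note: str           # the governing fact itself
    context_id: str     # the Thought Bubble within which this EA applies

    # EA are context-bound and disposable. They are never promoted on their own;
    # if a note acquires enduring significance, a human promotes it into an EO.
```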
3.4 Thought Bubbles as Universes of Discourse
Reasoning rarely occurs in a single global context. CM models this by introducing Thought Bubbles, implemented as a subtype of EO. A Thought Bubble defines a bounded Universe of Discourse within which EO are interpreted and related.
Thought Bubbles allow multiple lines of reasoning, audiences, or hypotheses to coexist without forced unification. EO may participate in multiple Thought Bubbles, but their interpretation remains local to each context. Thought Bubbles must not merge implicitly; aggregation is always explicit.
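The following sketch, illustrative rather than normative, models a Thought Bubble as a bounded context that holds EO identities together with purely local readings, plus an aggregation helper that makes merging an explicit act rather than a side effect of reuse.

```python
from dataclasses import dataclass, field


@dataclass
class ThoughtBubble:
    """A bounded Universe of Discourse within which EO are interpreted."""

    name: str
    eo_ids: set[str] = field(default_factory=set)                 # EO may appear in many bubbles
    local_readings: dict[str, str] = field(default_factory=dict)  # interpretation stays local

    def include(self, eo_id: str, reading: str) -> None:
        # The same EO may participate in other bubbles, but this reading
        # applies only inside this one.
        self.eo_ids.add(eo_id)
        self.local_readings[eo_id] = reading


def aggregate(name: str, *bubbles: ThoughtBubble) -> ThoughtBubble:
    """Explicit, human-initiated aggregation; bubbles never merge implicitly."""
    merged = ThoughtBubble(name)
    for bubble in bubbles:
        merged.eo_ids |= bubble.eo_ids
        for eo_id, reading in bubble.local_readings.items():
            # Local readings are carried across with their origin made visible.
            merged.local_readings[eo_id] = f"[{bubble.name}] {reading}"
    return merged
```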
3.5 Persistence and Promotion
CM distinguishes between ephemeral continuity and durable knowledge. EO and EA may be ephemeral or persistent, supporting work across sessions without claiming authority. Persistence alone does not imply commitment.
Durable knowledge is created only through an explicit human-initiated promotion action, referred to here as Save As. Save As operates on aggregates, typically one or more Thought Bubbles, and externalises them into an enduring system of record with declared scope and authority. Nothing is promoted individually or automatically.
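Building on the Thought Bubble sketch above, a hedged sketch of the Save As boundary might look as follows. The record fields, including the actor and declared scope, are assumptions chosen to make the human decision visible; CM-2 does not prescribe this format.

```python
import datetime
import json


def save_as(actor: str, scope: str, *bubbles) -> str:
    """Promote one or more Thought Bubbles into an enduring system of record.

    Nothing reaches this function automatically: calling it is the human
    decision that separates exploratory continuity from durable authority.
    """
    record = {
        "promoted_by": actor,        # explicit human authority, never inferred
        "declared_scope": scope,     # scope is declared at promotion time
        "promoted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "contents": [
            {"bubble": b.name, "eo_ids": sorted(b.eo_ids)} for b in bubbles
        ],
    }
    # Externalise as a stable, serialised artefact (here, simply JSON text).
    return json.dumps(record, indent=2)
```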
3.6 CM-1 and CM-2
CM is articulated in layers. CM-1 describes the practice of disciplined externalisation and stand-down, and can be adopted immediately without specialised tooling. CM-2, introduced here, provides the normative structure that defines objects, boundaries, and invariants, enabling shared understanding and future interoperability.
This paper presents CM-2 at a deliberately conservative level, sufficient to demonstrate utility and establish scope without constraining future implementations.
4 Putting Cognitive Memoisation to Work
This section illustrates how Cognitive Memoisation operates in practice. The intent is not to prescribe tooling or workflows, but to show how CM concepts reduce failure modes when applied to ordinary knowledge work involving probabilistic systems.
4.1 A Typical Working Session
Consider a human collaborating with a language model to explore a complex topic such as drafting a technical argument, analysing a regulatory question, or designing a system architecture. Early in the session, ideas are tentative, overlapping, and incomplete. Multiple lines of reasoning may be explored in parallel.
In CM terms, this work occurs within one or more Thought Bubbles, each defining a local Universe of Discourse. EO are created automatically to capture grounded material as it appears: explanatory prose, constraints, examples, draft formulations, or intermediate reasoning. EA accumulate alongside EO, recording governing facts such as scope limitations, uncertainty, or provisional status.
At this stage, nothing is authoritative. Continuity is supported, but commitment is explicitly absent.
4.2 Avoiding Rediscovery Without Premature Commitment
As the session progresses or resumes later, EO enable continuity without requiring the human to restate prior material. The system can recognise and reuse previously externalised objects, preventing rediscovery and unnecessary repetition.
Crucially, this continuity does not imply agreement. EO may be revisited, revised, or abandoned. EA may be updated or discarded. Thought Bubbles may be parked or allowed to decay. CM tolerates this fluidity because authority has not yet been asserted.
4.3 Managing Parallel Contexts
Complex work often requires reasoning across different audiences or assumptions. CM supports this by maintaining separate Thought Bubbles.
The same EO may appear in multiple bubbles, but its interpretation is local to each. This allows collaboration and reuse without semantic collapse. No reconciliation is required until a human explicitly decides to aggregate contexts.
4.4 Explicit Promotion Through Save As
When a point of agreement is reached or when material is judged worth preserving, the human initiates a Save As action. Save As selects one or more Thought Bubbles and externalises their aggregate into a durable artefact.
This promotion declares scope, intent, and authority. It marks the transition from exploratory material to committed knowledge. Importantly, Save As is deliberate and visible. Nothing is promoted implicitly, and nothing becomes authoritative by accumulation.
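To show how these pieces fit together, the following hypothetical session reuses the sketches from Section 3. All names and content are invented for illustration; the point is that memoisation, attachment, and promotion remain three visibly distinct steps.

```python
cache = MemoisationCache()
drafting = ThoughtBubble("drafting")

# Grounded material is memoised as it appears; a repeated capture of the same
# content resolves to the same EO rather than creating a duplicate.
eo = cache.memoise(MemoisedObject(
    content="The ingest path should keep end-to-end latency under 200 ms.",
    provenance="exploratory design discussion",
))
drafting.include(eo.identity, "treated as a provisional constraint")

# A governing fact is attached by reference; the EO itself is untouched,
# and this EA will never be promoted on its own.
caution = AttachmentObject(
    target_eo_id=eo.identity,
    kind="caution",
    note="Figure not yet confirmed; revisit before commitment.",
    context_id=drafting.name,
)

# Only this explicit, human-initiated call produces durable knowledge.
artefact = save_as("a human collaborator", "ingest-path design notes", drafting)
print(artefact)
```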
4.5 Stand-Down and Reuse
After promotion, the session may end. EO and EA that were not promoted may be discarded or retained as persistent continuity aids. CM explicitly supports stand-down without loss: authoritative knowledge is preserved externally, while exploratory material may safely decay.
When work resumes, new sessions may reference promoted artefacts as authoritative, while generating fresh EO for new exploration. Continuity and authority remain distinct.
4.6 Why This Matters in Practice
Applied consistently, CM changes the character of human–AI collaboration. Rediscovery is reduced without pretending that memory exists. Drift is contained without enforcing global schemas. Authority is asserted explicitly rather than inferred from fluency.
Most importantly, CM allows people to work naturally while providing a clear path to durable knowledge when it matters.
5 Scope, Non-Goals, and Limitations
Cognitive Memoisation is intentionally conservative in scope. It is designed to address specific failure modes in human–AI knowledge work related to continuity, authority, and governance. It does not attempt to solve all problems associated with probabilistic systems, nor does it propose new forms of intelligence, reasoning, or autonomy.
5.1 Scope
CM applies to knowledge-intensive work where reasoning unfolds over time, context is fragmented or revisited, collaboration occurs across sessions or participants, and authority must remain explicitly human.
Typical domains include technical design, legal reasoning, policy drafting, research synthesis, and other forms of deliberative work where loss of context or silent drift carries real cost.
CM is agnostic to specific tools, vendors, or model architectures. Its principles may be practised with existing systems and incrementally supported by future tooling.
5.2 Non-Goals
CM explicitly does not aim to provide artificial memory or persistent model state, enforce global ontologies or universal schemas, replace human judgement or accountability, guarantee correctness or truth of content, or automate promotion of knowledge.
CM treats probabilistic systems as capable assistants, not authorities. It assumes that uncertainty, incompleteness, and revision are normal features of real work.
5.3 Limitations
CM introduces discipline where informality is often tolerated. This carries a modest cognitive and procedural cost. Explicit externalisation and promotion require attention and intent, and may feel unfamiliar to users accustomed to conversational continuity.
CM also does not prevent all errors. It reduces rediscovery, drift, and accidental canonisation, but it does not eliminate misunderstanding or disagreement. Its value lies in making such issues visible and governable rather than implicit.
CM-2 as presented here is normative rather than exhaustive. It defines invariants and boundaries, not full specifications or implementations. Tooling support may simplify adoption, but is not required for practice.
5.4 Consequences for Recovery, Analysis, and Audit
An important consequence of Cognitive Memoisation's explicit externalisation and promotion boundaries is that it naturally supports capabilities typically addressed only through specialised infrastructure. Because authoritative knowledge is externalised deliberately and memoised objects are identity-stable, CM enables practical disaster recovery and restoration without relying on implicit system state. Sessions may be reconstructed, inspected, or abandoned without loss of committed knowledge. The use of bounded Universes of Discourse allows post hoc analysis and semantic comparison of alternative reasoning paths, while explicit aggregation and promotion support controlled semantic merge rather than silent unification. Provenance is preserved through attachment objects and serialisation conventions, enabling traceability of origin, scope, and intent. Together, these properties provide a foundation for auditing and review that is compatible with probabilistic systems, without requiring global schemas, continuous memory, or vendor-specific mechanisms.
6 Conclusion
As probabilistic systems become embedded in everyday knowledge work, the primary risks they introduce are not failures of intelligence, but failures of governance. Fluent output, contextual responsiveness, and conversational continuity create the illusion of shared understanding without providing mechanisms for authority, durability, or scope.
Cognitive Memoisation addresses this gap by separating continuity from authority and practice from commitment. By introducing memoised objects for grounded material, attachment objects for governing facts, bounded Universes of Discourse for parallel reasoning, and an explicit human-initiated promotion boundary, CM provides a disciplined yet lightweight approach to human–AI collaboration.
This paper has presented the first public articulation of CM-2, establishing its scope, invariants, and practical utility while deliberately avoiding premature formalisation. CM-1 may be practised immediately, independent of tooling or vendor support. CM-2 provides the normative structure required for shared understanding and future interoperability.
Cognitive Memoisation does not promise smarter systems. It promises clearer work. In environments where knowledge must persist, evolve, and be trusted, that distinction matters.