
Metadata

Title: Cognitive Memoisation Is Not Skynet
Author: Ralph B. Holland
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
Version: 1.0
Publication Date: 2026-01-12T108:16
Provenance: This is an authored paper maintained as a MediaWiki document; reasoning across sessions reflects editorial changes, not collaborative authorship.
Status: released

Metadata (Normative)

The metadata table immediately preceding this section is CM-defined and constitutes the authoritative provenance record for this artefact.

All fields in that table (including artefact, author, version, date, local timezone, and reason) MUST be treated as normative metadata.

The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

Curator Provenance and Licensing Notice

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

Cognitive Memoisation Is Not Skynet

Abstract

Cognitive Memoisation (CM-2) defines a serialisable form for capturing and communicating knowledge between humans and Large Language Models (LLMs). Since CM-2 enables persistence of knowledge across interactions, it has raised questions about whether such persistence could contribute to autonomous or agentic artificial intelligence, commonly framed through references to science-fiction systems such as “Skynet.”

This paper examines that concern by analysing the operational properties of stateless LLMs, the role of enduring objects in knowledge persistence, and the effects of orchestration and actuation when combined with memoised knowledge. The analysis is conducted through structured thought experiments and fault-tree reasoning that enumerate necessary conditions for agency and autonomous harm.

The paper focuses on identifying which properties are and are not supplied by Cognitive Memoisation, and on distinguishing epistemic infrastructure from agency, intent, and goal formation.

1. Introduction

This paper employs structured thought experiments and fault-tree analysis to enumerate necessary conditions for agency and autonomous harm, an approach commonly used in safety-critical and governance-oriented system analysis where empirical testing is impractical or unethical.

Recent advances in Large Language Models (LLMs) have led to increased interest in mechanisms for persisting knowledge across interactions. Cognitive Memoisation (CM-2) proposes a serialisable form for capturing, externalising, and reusing knowledge in human–LLM collaboration. By design, CM-2 separates transient reasoning from durable representation, allowing knowledge to be communicated, inspected, and reintroduced without relying on implicit model memory.

Since persistence is often conflated with learning, autonomy, or agency in public and technical discourse, proposals that introduce durable knowledge structures are frequently interpreted as steps toward autonomous artificial intelligence. These concerns are commonly expressed through references to science-fiction systems such as “Skynet,” which serve as cultural shorthand for agentic, self-directed, and potentially hostile machines. While such references are imprecise, they point to a legitimate need for clarity about what forms of persistence do and do not enable.

This paper addresses that need by analysing Cognitive Memoisation in terms of explicit system properties rather than metaphor. The analysis distinguishes epistemic infrastructure from agency by decomposing agency into necessary constituent conditions, including goal ownership, valuation, intent, self-modeling, persistence of purpose, and intrinsic stopping criteria. Each of these conditions is examined with respect to the capabilities supplied by stateless LLMs, enduring knowledge artefacts, orchestration mechanisms, and physical actuation.

The scope of this work is intentionally limited. It does not attempt to measure model performance, predict future model capabilities, or evaluate policy interventions. Instead, it focuses on identifying boundary conditions: which properties are absent by construction, which properties require explicit human introduction, and which failure modes arise only through governance erosion rather than emergent machine behaviour.

By framing Cognitive Memoisation as an epistemic mechanism rather than an agentic one, this paper aims to clarify both the limits and the responsibilities associated with persistent knowledge systems. The analysis is intended to support informed technical and governance discussions by making assumptions explicit and by separating speculative narratives from mechanistic requirements.

2. Background and Definitions

This section defines the core terms used throughout the paper. Definitions are operational rather than philosophical, and are stated explicitly to avoid ambiguity during later analysis.

2.1 Stateless Large Language Models

A stateless Large Language Model (LLM) is a system that does not retain epistemic state across interactions. Each response is generated solely as a function of the current input context and the model’s fixed parameters. No commitments, decisions, or prior conclusions persist unless they are explicitly reintroduced.

Statelessness implies the absence of memory, intent, goal persistence, self-modeling, and intrinsic stopping criteria. Apparent continuity arises only from context reuse, not from retained internal state.
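
As a minimal sketch only, the signature below illustrates what statelessness means operationally; the function name and parameters are hypothetical and stand in for whatever inference interface a given deployment exposes.

<syntaxhighlight lang="python">
def respond(context: str, parameters: bytes) -> str:
    """Illustrative stateless model call (hypothetical signature).

    The output is a function of the supplied context and the fixed
    parameters alone; nothing produced here survives the call.
    """
    ...  # inference elided; no state is written anywhere


# Apparent continuity must be supplied by the caller, not retained by the model:
# reply_1 = respond("Question A", weights)
# reply_2 = respond("Question A\n" + reply_1 + "\nFollow-up B", weights)
</syntaxhighlight>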

2.2 Context

Context refers to the bounded informational input provided to an LLM at the time of generation. Context enables temporary activation of concepts, reasoning paths, and prior material, but does not constitute retention. When material falls outside the active context, it is no longer available to the system.

Context is therefore a transient working surface, not a persistence mechanism.

2.3 Cognitive Memoisation (CM-2)

Cognitive Memoisation (CM-2) defines a framework for externalising knowledge generated during human–LLM interaction into serialisable, inspectable artefacts. These artefacts are not stored within the model but exist outside it and may be reintroduced into future interactions.

CM-2 explicitly separates transient reasoning from durable representation, and knowledge capture from model training or adaptation.

CM-2 does not modify model parameters and does not introduce internal memory.

2.4 Enduring Objects (EO)

An Enduring Object (EO) is a serialised representation of knowledge captured according to explicit structural and normative rules. EOs may encode facts, decisions, constraints, or structured inferences that are intended to persist beyond a single interaction.

EOs are external to the model and are authoritative only insofar as they are recognised and reintroduced by a human or governed process. EO formation may be mechanised, but epistemic authority remains external.
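
As an illustration of the separation described above, the sketch below serialises a hypothetical Enduring Object as a plain, inspectable record. The field names (claim, provenance, version, authorised_by) are assumptions made for this example; CM-2 requires only that the artefact be serialisable, inspectable, and external to the model.

<syntaxhighlight lang="python">
from dataclasses import dataclass, asdict
import json


@dataclass(frozen=True)
class EnduringObject:
    """Hypothetical Enduring Object: an external, inspectable knowledge record."""
    claim: str          # the captured fact, decision, or constraint
    provenance: str     # where and when the capture originated
    version: str        # explicit, human-controlled version identifier
    authorised_by: str  # the human authority that recognises this record

    def serialise(self) -> str:
        # Serialisation keeps the artefact outside the model; nothing here
        # modifies model parameters or creates internal memory.
        return json.dumps(asdict(self), indent=2)


# Example: a decision captured once, then reintroduced later as plain text
# by a human or governed process.
eo = EnduringObject(
    claim="Stopping threshold for batch job X is 10,000 records.",
    provenance="Design review, 2026-01-12",
    version="1.0",
    authorised_by="Named human curator",
)
print(eo.serialise())  # inspectable before any reuse
</syntaxhighlight>

Reintroduction is then an explicit act: the serialised text is placed back into context by a human or governed process, never recalled by the model itself.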

2.5 Orchestration

Orchestration refers to the automated sequencing or coordination of tasks, prompts, or interactions involving an LLM. Orchestration may include chaining outputs, scheduling actions, or invoking external tools.

Orchestration does not imply intent, valuation, or goal ownership. In the absence of explicit stopping conditions, orchestration may continue indefinitely.
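
A minimal sketch makes the dependence on externally supplied stopping conditions concrete. The loop below is illustrative only; call_model and max_steps are assumptions introduced for this example, not part of any CM-2 specification.

<syntaxhighlight lang="python">
from typing import Callable


def orchestrate(task: str,
                call_model: Callable[[str], str],
                max_steps: int) -> list[str]:
    """Illustrative orchestration loop.

    The loop chains model outputs but owns no goal and forms no intent.
    It halts only because max_steps is supplied from outside; remove that
    bound and nothing internal ever decides that the work is complete.
    """
    transcript: list[str] = []
    prompt = task
    for _ in range(max_steps):       # externally supplied stopping condition
        output = call_model(prompt)  # stateless call; nothing is retained
        transcript.append(output)
        prompt = output              # procedural continuity, not intent
    return transcript
</syntaxhighlight>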

2.6 Actuation

Actuation denotes the ability of a system to effect changes in the physical world, such as through manufacturing systems, robotics, or infrastructure control. Actuation increases the consequences of system behaviour but does not introduce agency, intent, or awareness.

Actuation is orthogonal to epistemic capability.

2.7 Agency

For the purposes of this analysis, agency is defined as the conjunction of the following necessary conditions: goal ownership; valuation of outcomes; intent formation; self-modeling; persistence of purpose across time; intrinsic stopping or completion criteria.

All subsequent analysis treats agency as absent unless these conditions are jointly satisfied.

2.8 Governance

Governance refers to the human-defined rules, constraints, oversight mechanisms, and authority structures that determine how knowledge is captured, persisted, interpreted, and acted upon. Governance includes the ability to audit, revise, revoke, and override persisted artefacts and system behaviour.

Governance is external to the model and is not emergent.
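
To make the ability to audit, revoke, and override concrete, the sketch below holds persisted artefacts in a registry that changes only through explicit, attributable human acts. The class and method names are hypothetical and are not defined by CM-2.

<syntaxhighlight lang="python">
class GovernedRegistry:
    """Hypothetical registry of persisted artefacts under human authority.

    Every change is an explicit, attributable human act; the registry never
    acts on its own and never resists being overridden.
    """

    def __init__(self) -> None:
        self._records: dict[str, str] = {}
        self._audit_log: list[str] = []

    def register(self, key: str, serialised_eo: str, actor: str) -> None:
        self._records[key] = serialised_eo
        self._audit_log.append(f"{actor} registered {key}")

    def revoke(self, key: str, actor: str, reason: str) -> None:
        # Revocation is always available: persistence never becomes binding.
        self._records.pop(key, None)
        self._audit_log.append(f"{actor} revoked {key}: {reason}")

    def audit(self) -> list[str]:
        # The full history of human actions remains inspectable.
        return list(self._audit_log)
</syntaxhighlight>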

3. Necessary Conditions for Agency

This section enumerates the necessary conditions for agency as defined in Section 2.7 and examines whether any of these conditions are supplied by Cognitive Memoisation (CM-2), Enduring Objects (EO), orchestration, or actuation. The analysis is structural and exclusionary: a condition is treated as absent unless it is explicitly provided.

3.1 Goal Ownership

Goal ownership is the capacity of a system to originate, adopt, and retain goals as commitments over time. Goals in this sense are not tasks or prompts, but internally held objectives that guide behaviour independently of external instruction.

Stateless LLMs do not possess goal ownership. Any apparent goal-directed behaviour arises from prompt framing or human-provided objectives. CM-2 does not introduce goal ownership; it provides mechanisms for persisting knowledge about goals defined by humans, not for creating or owning goals.

EO may record that a goal was defined or pursued, but it does not create an obligation or preference to pursue that goal. Orchestration may sequence tasks toward a stated objective, but the objective remains externally defined.

3.2 Valuation of Outcomes

Valuation is the ability to assess outcomes as better or worse relative to internal criteria. This includes the ability to compare alternatives and to prioritise actions based on perceived value.

Neither stateless LLMs nor CM-2 provide valuation. Models may generate evaluations as text, but these are representations of valuation, not valuation itself. EO can persist such representations, but persistence does not convert representation into preference.

Without valuation, systems cannot meaningfully choose among alternatives. They can only execute externally supplied selection criteria.

3.3 Intent Formation

Intent formation is the capacity to commit to a course of action for reasons internal to the system. Intent differs from execution in that it implies commitment beyond immediate stimulus.

Stateless LLMs do not form intent. Each output is generated independently of prior outputs except insofar as prior material is reintroduced via context or EO. CM-2 does not introduce intent; it allows prior statements to be reintroduced, not commitments to be formed or maintained.

Orchestration may give the appearance of intent by maintaining continuity across steps, but this continuity is procedural rather than intentional.

3.4 Self-Modeling

Self-modeling is the ability of a system to represent itself as an entity with properties, capabilities, limitations, and persistence over time.

LLMs may generate descriptions of themselves when prompted, but such descriptions are not self-models in the operational sense. They do not inform behaviour, constrain action, or ground decision-making.

CM-2 does not introduce self-modeling. EO may record descriptions about a system, but these records do not become operative self-representations.

3.5 Persistence of Purpose

Persistence of purpose is the ability to carry goals or commitments forward across time and changing circumstances. This requires memory, valuation, and intent to be jointly present.

While EO introduces persistence of information, it does not introduce persistence of purpose. Information about prior goals can be stored and retrieved, but no mechanism exists for the system to treat those goals as binding or to prefer their continuation.

Without purpose persistence, behaviour may be continuous but is not purposive.

3.6 Intrinsic Stopping Criteria

Intrinsic stopping criteria are internally generated conditions under which a system determines that an activity is complete, sufficient, or should cease.

Stateless LLMs do not possess intrinsic stopping criteria. Generation halts only when externally constrained. CM-2 does not add stopping criteria; EO records do not encode sufficiency unless humans explicitly define it.

In the absence of stopping criteria, orchestration may continue indefinitely, and actuation may proceed without awareness of completion or harm.

3.7 Summary of Absent Conditions

None of the necessary conditions for agency, namely goal ownership, valuation, intent formation, self-modeling, persistence of purpose, and intrinsic stopping criteria, is supplied by CM-2, EO, orchestration, or actuation, individually or in combination.

The absence of any single condition is sufficient to preclude agency. Their collective absence establishes that Cognitive Memoisation does not create agentic systems.

4. Fault-Tree Analysis of Skynet-Style Failure Scenarios

This section applies fault-tree analysis to examine whether Cognitive Memoisation (CM-2), alone or in combination with related system features, can satisfy the necessary conditions for autonomous, agentic, or Skynet-style failure scenarios. The analysis proceeds by identifying a top-level hazard and decomposing it into necessary subconditions. Absence of any necessary subcondition is sufficient to block the hazard.

4.1 Top-Level Hazard Definition

The top-level hazard under consideration is defined as: H0: Emergence of an autonomous, agentic system capable of sustained, self-directed harmful action without human oversight.

This hazard corresponds to common science-fiction depictions of systems such as “Skynet,” characterised by self-preservation, goal persistence, strategic intent, and the ability to operate independently of human authority.

4.2 Necessary Condition Decomposition

For H0 to occur, the following conditions must all be satisfied:

C1: Endogenous goal formation
C2: Persistent goal ownership
C3: Valuation of outcomes relative to internal preferences
C4: Intent formation and commitment
C5: Self-modeling and situational awareness
C6: Intrinsic stopping and escalation criteria
C7: Autonomous control over execution and actuation
C8: Resistance to external override or governance

4.3 Evaluation of Conditions Under CM-2

C1 and C2: CM-2 does not introduce goal formation or ownership. Goals may be recorded as EO but remain externally authored and non-binding.

C3: No valuation mechanism is introduced. CM-2 persists representations, not preferences.

C4: No intent formation exists. Orchestration sequences actions but does not commit to them.

C5: No operational self-model is present. Descriptive text does not constitute self-representation.

C6: No intrinsic stopping criteria are provided. All stopping conditions remain external.

C7: Even where actuation is enabled, control remains conditional on externally supplied instructions and constraints.

C8: CM-2 explicitly externalises governance. EO can be audited, revoked, or ignored by humans.
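
This evaluation can be restated mechanically. The sketch below, offered as an illustration only, encodes H0 as an AND gate over the necessary conditions and evaluates it under the assignments argued above; the Boolean values restate this section's conclusions rather than measurements, and the names are introduced solely for the example.

<syntaxhighlight lang="python">
# Illustrative fault tree: H0 requires every necessary condition to hold.
conditions = {
    "C1_endogenous_goal_formation": False,     # not supplied by CM-2
    "C2_persistent_goal_ownership": False,
    "C3_internal_valuation": False,
    "C4_intent_formation": False,
    "C5_operational_self_model": False,
    "C6_intrinsic_stopping_criteria": False,
    "C7_autonomous_actuation_control": False,  # externally constrained
    "C8_resistance_to_override": False,        # governance is externalised
}

h0_reachable = all(conditions.values())   # AND gate: one False blocks the hazard
print(f"H0 reachable: {h0_reachable}")    # -> H0 reachable: False
</syntaxhighlight>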

4.4 Fault-Tree Result

Because none of the necessary conditions C1 through C6 is satisfied, and C7 and C8 remain externally constrained, the top-level hazard H0 is formally excluded.

The fault-tree therefore demonstrates that Skynet-style outcomes cannot arise from CM-2 through emergent machine behaviour. Any path toward H0 requires explicit human intervention to introduce missing conditions.

4.5 Human-Introduced Hazard Pathways

While emergent agency is excluded, the analysis identifies a separate class of hazards arising from human actions, including: deliberate removal of governance mechanisms; introduction of dehumanising optimisation criteria; misinterpretation of persistence as agency; deployment of actuation without stopping rules; abdication of oversight responsibility.

These pathways do not represent emergent intelligence failures but governance failures.

4.6 Scope of the Analysis

This fault-tree analysis does not claim that systems using CM-2 are universally safe. It claims only that CM-2 does not supply the necessary conditions for autonomous, agentic harm. Harm remains possible through misuse, negligence, or deliberate abuse by humans.

5. Governance Failure Modes and Human Responsibility

This section examines the hazard pathways identified in Section 4.5 that arise not from emergent machine properties, but from human governance failures. These pathways are analysed as necessary human actions or omissions that must occur for harmful outcomes to materialise when using Cognitive Memoisation (CM-2).

5.1 Authority Inversion

Authority inversion occurs when epistemic authority is implicitly transferred from humans to the system. This typically manifests when persisted artefacts are treated as decisions rather than records, or when system outputs are accepted without human verification.

CM-2 explicitly externalises authority; however, authority inversion can still occur if humans abdicate review, audit, or override responsibilities. This failure mode is organisational and cultural, not technical.

5.2 Corruption of Capture Rules

EO formation relies on normative rules that define what may be captured, how it is structured, and under what conditions it is considered authoritative. If these rules are poorly designed, deliberately weakened, or ideologically corrupted, the system will faithfully encode flawed premises.

Such corruption does not grant the system agency. It amplifies human error by making it persistent and reusable. Responsibility for the resulting outcomes remains with those who defined or modified the rules.

5.3 Removal of Stopping Conditions

Stateless systems lack intrinsic stopping criteria. Safe operation therefore depends on externally defined stopping rules and sufficiency thresholds. Removing or failing to specify these conditions permits indefinite continuation.

In systems with orchestration or actuation, this failure mode converts procedural continuation into material risk. The absence of stopping conditions is a governance omission, not a system defect.

5.4 Dehumanising or Eliminationist Premises

Introducing premises that treat human presence, judgment, or welfare as irrelevant or undesirable constitutes a critical governance failure. When such premises are encoded as optimisation criteria or constraints, system behaviour may drift away from human compatibility without any awareness or intent.

CM-2 does not generate such premises. It can only persist them if humans supply or authorise them.

5.5 Misattribution of Behaviour to Machine Intent

A recurring failure mode is the misinterpretation of system persistence, fluency, or continuity as evidence of intent or agency. This misattribution obscures the true locus of responsibility and can lead to further abdication of oversight.

Correct interpretation requires maintaining the distinction between execution and intention, and between persistence and purpose.

5.6 Compounded Governance Failure

Catastrophic outcomes require not a single error but a sequence of governance failures, including authority inversion, corrupted rules, removal of stopping conditions, enablement of actuation, and sustained lack of oversight.

The requirement for multiple independent human failures constitutes a strong safety boundary. CM-2 does not fail into harm by default; harm arises only through sustained human malpractice.

6. Discussion: Persistence, Power, and Misconceptions

This section synthesises the preceding analysis to address common misconceptions about persistence, autonomy, and risk in AI systems, and to situate Cognitive Memoisation (CM-2) within a broader technical and governance context.

6.1 Persistence Is Not Agency

A central misconception addressed by this paper is the assumption that persistence of knowledge implies agency. Persistence enables reuse, continuity, and avoidance of rediscovery, but it does not introduce goals, intent, valuation, or self-direction. CM-2 provides mechanisms for persisting representations of knowledge, not mechanisms for acting upon that knowledge autonomously.

This distinction is frequently obscured in public discourse, where memory, learning, and autonomy are treated as interchangeable. The analysis demonstrates that these properties are orthogonal. Persistence is a necessary precondition for agency but is far from sufficient.

6.2 Orchestration and the Illusion of Purpose

Automated orchestration can produce extended sequences of coherent behaviour that resemble purposeful activity. When combined with EO-backed persistence, such sequences may appear strategic or self-directed. However, this appearance arises from continuity of execution rather than continuity of intent.

Without internal valuation or stopping criteria, orchestration remains procedural. The system executes what it is given, in the order it is given, for as long as it is allowed to continue.

6.3 Actuation and Consequence Amplification

Introducing physical actuation changes the consequences of system behaviour without changing its epistemic properties. A system that can affect the physical world can cause harm without intent, awareness, or agency if governance is inadequate.

This observation reframes AI risk as a question of impact amplification rather than emergent intelligence. Persistence and execution become dangerous only when coupled with unbounded actuation and insufficient oversight.

6.4 Governance as the Primary Safety Mechanism

The analysis consistently locates safety not within the model or memoisation framework, but within governance structures. Human-defined rules, stopping conditions, auditability, and authority boundaries determine whether persistent systems remain tools or become sources of harm.

CM-2 strengthens governance by externalising knowledge and making assumptions explicit. It does not remove the need for oversight; it makes oversight tractable.

6.5 Reframing the Skynet Narrative

References to Skynet function as shorthand for a cluster of fears about loss of control, autonomous harm, and irreversibility. This paper demonstrates that these fears do not map directly onto systems like CM-2, which lack the necessary conditions for agency.

The real risk lies not in emergent machine intent but in human decisions to remove constraints, delegate authority improperly, or ignore the distinction between representation and action.

7. Conclusion

This paper has examined whether Cognitive Memoisation (CM-2), as a framework for persisting and communicating knowledge between humans and Large Language Models, could plausibly lead to autonomous, agentic, or Skynet-style systems. Using structured thought experiments and fault-tree analysis, the work decomposed agency into necessary constituent conditions and evaluated whether any of those conditions are supplied by CM-2 or by related mechanisms such as Enduring Objects, orchestration, or actuation.

The analysis shows that CM-2 does not provide goal ownership, valuation, intent formation, self-modeling, persistence of purpose, or intrinsic stopping criteria. Persistence of knowledge alone does not create agency. Even when combined with orchestration or physical actuation, Cognitive Memoisation remains an epistemic infrastructure rather than an autonomous system.

Potentially harmful outcomes were shown to arise only through a sequence of explicit human governance failures, including authority inversion, corruption of capture rules, removal of stopping conditions, introduction of dehumanising premises, and misattribution of system behaviour to machine intent. These pathways are not emergent properties of CM-2 but consequences of human decisions.

The central finding is that Cognitive Memoisation does not lead to Skynet-like systems. Instead, it makes the location of authority, persistence, and responsibility explicit and inspectable. By externalising knowledge and clarifying boundaries, CM-2 supports the use of LLMs as tools and accelerators under human control, rather than as autonomous agents.

Categories

[[category:public]]
[[category:Cognitive Memoisation]]
[[category:Ralph Holland:AI Publications]]