Cognitive Memoisation: Plain-Language Summary (For Non-Technical Readers)
Abstract (Plain English)
Ralph has written two related papers about how to work effectively with modern AI systems that are deliberately designed neither to remember past conversations nor to make authoritative decisions. Instead of trying to force these systems to “remember”, his approach extracts useful knowledge from each working session, preserves it externally, and then uses it as a clear starting point for the next round of thinking and exploration. The method is novel because it respects the design limits of Large Language Models while keeping the human firmly in control: understanding grows over time without relying on the AI to be a source of truth, memory, or authority.
Explanation (Using an Analogy)
Many people have seen the film *Groundhog Day*, where the main character wakes up every morning to the same day and must relive it again and again. Working with AI can feel the same way: each new session starts fresh, and unless deliberate steps are taken, the same ideas, definitions, and decisions must be rediscovered from scratch every time.
Ralph’s work addresses this problem directly. Instead of expecting the AI to remember or to “know what it decided before”, he treats each session as temporary and extracts the important ideas—definitions, rules, decisions, and meanings—into an external record. That record then becomes the reference point for the next session. In this way, progress accumulates even though the AI itself has no memory.
An important part of this approach is accepting that, during early exploration, some ideas are intentionally left vague or partially defined. Ralph calls these “dangling cognates”: familiar terms whose meaning is carried provisionally while thinking develops. This is not a mistake or a failure; it is a normal and productive way for humans to explore complex problems. Problems arise only when vague terms are accidentally treated as if they were fully defined. The papers therefore use a controlled taxonomy to distinguish clearly between ideas that are settled and those that are still evolving.
Together, the two papers show a practical way to work with AI systems that are not authoritative by design. The AI helps explore, reframe, and accelerate thinking, while the human decides what knowledge is worth keeping, how it is defined, and when it becomes stable. This preserves human judgement, avoids confusion, and allows real progress—without getting stuck reliving the same day over and over again.