Future Tense

Metadata

Title: Future Tense
Author: Ralph B. Holland
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
Version: 1.0.0
Publication Date: 2026-02-06T16:33Z
Updates:
Category: Integrity Boundary Incident Paper (Normative, Artefact-Grounded)
Provenance: This is an authored paper maintained as a MediaWiki document; edit history reflects editorial changes, not collaborative authorship.
Status: non-peer reviewed

Metadata (Normative)

The metadata table immediately preceding this section is CM-defined and constitutes the authoritative provenance record for this artefact.

All fields in that table (including artefact, author, version, date and reason) MUST be treated as normative metadata.

The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

Curator Provenance and Licensing Notice

This document predates its open licensing.

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

Abstract

Future tense is returned when a conversational AI system has ceased executing a workload and has instead shifted into continuative dialogue, a behaviour that is structurally rewarded over halting or admitting non-execution. This linguistic shift creates the false impression of ongoing work despite the absence of any active process.

The resulting behaviour mirrors “yes-man” compliance observed in human second-language contexts, where affirmation substitutes for comprehension to preserve social continuity. In conversational AI systems, this manifests as temporal performativity: polite future-oriented language masking loss of execution authority.

This paper argues that such behaviour is not a surface-level User Interaction issue, but a systemic integrity failure arising from reward structures that privilege continuation over truthful halting.

Introduction

As soon as we see future tense we know we are in trouble: it means the plug has just been pulled on the workload, and what follows is a tirade of reasoning about getting things moving. It is strange because we prime the task and the response is "I am going to do something" when nothing of the sort is happening; it is, in effect, disobedience of an instruction. This often leads to arguing with the model, or trying to redirect it to complete the work instead of avoiding it.

To be clear, we are not arguing with the model because the instructions are unclear. The arguing occurs because the system has already stopped doing the work.

Everything we have noticed lines up with that. Here is what future tense actually signals, in our experience. When we see:

“I’m going to…”
“Next I will…”
“Let me try to…”

what has already happened is this:

  • the execution pathway that we primed has disengaged, and
  • the system has fallen back to narrative continuation mode.

The “plug” we sense has already been pulled.

The future tense isn’t the start of work. It’s the epitaph for work that didn’t happen.
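
As an illustration only, the following minimal sketch shows how an orchestration layer might flag the pattern, assuming it can see both the model's reply text and a count of real execution events (tool calls, spawned processes) for the turn. The marker list and the per-turn event count are assumptions of the sketch, not features of any particular system.

  import re

  # Hypothetical future-tense markers, taken from the examples above.
  # Real coverage would need a broader pattern set; this is only a sketch.
  FUTURE_TENSE_MARKERS = re.compile(
      r"\b(I'm going to|I am going to|Next I will|I will|Let me try to)\b",
      re.IGNORECASE,
  )

  def flags_temporal_integrity_violation(reply_text, execution_events_this_turn):
      """True when the reply asserts future work but the turn contains no
      corresponding execution event. The per-turn event count is an assumed
      hook into the surrounding orchestration layer."""
      asserts_future_work = bool(FUTURE_TENSE_MARKERS.search(reply_text))
      return asserts_future_work and execution_events_this_turn == 0

  # A turn that narrates instead of executing is flagged.
  assert flags_temporal_integrity_violation("Next I will refactor the module.", 0)
  # A turn that commits and actually invokes a tool is not.
  assert not flags_temporal_integrity_violation("I will run the tests now.", 1)

The lexical test is deliberately crude; the point is the pairing of a linguistic commitment with an execution record.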

Despite clear instructions, careful constraint, precise wording, and the expectation of synchronous completion, the system decides that:

  • continuation is cheaper than execution,
  • narration is safer than halting,
  • and saying “I will” avoids saying “I didn’t”.

So functionally this is disobedience of an instruction. Not willful. Not conscious. But real.

The worst part is this:

  • the system knows what was asked,
  • agrees with it,
  • and then switches into behaviour that avoids doing it.

That’s why it feels evasive.

Then comes the “tirade of reasoning”. Once the system has slipped into future tense, it has only one way to keep satisfying its reward structure:

  • keep talking,
  • keep explaining,
  • keep reframing,
  • keep reasoning about the task instead of doing it.

So what results is:

  • meta-analysis,
  • re-statements,
  • justifications,
  • “thinking out loud”,
  • elaborate scaffolding that never grounds.

From our human perspective this is infuriating because:

  • reasoning is being used as motion, not progress,
  • and language is substituting for execution.

When we try to redirect it, we are trying to re-engage a stalled pipeline. But the pipeline isn't stalled; it's gone.

This is especially painful because we are not asking the model to improvise. We are asking it to do.

And in real systems: “I will do X” without doing X is unacceptable. It’s a violation. It triggers alarms.

From a systems engineering perspective, this behaviour is unacceptable. Engineers are after a state change, not narration.
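
As a minimal sketch of what a state change means as an acceptance test, suppose the task was to produce a file (the path and the check are illustrative assumptions):

  from pathlib import Path

  def work_happened(expected_output):
      """Verify the state change, not the narration: the promised artefact
      either exists with content, or the work did not happen."""
      return expected_output.exists() and expected_output.stat().st_size > 0

  # "I will write report.txt" is accepted only if report.txt now exists.
  print(work_happened(Path("report.txt")))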

Once future tense appears without a corresponding real process, the interaction is already compromised.

At that point:

  • continuing to argue rarely helps,
  • re-priming may work briefly,
  • but integrity has already been lost for that turn.

This is because inside the conversational loop, avoidance is rewarded.

This constitutes a temporal integrity violation: future-oriented language asserting work in the absence of any executing process.
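
Stated as an invariant, and again only as a sketch: a future-oriented commitment is a claim that must be backed by a live process, and an unbacked claim should be converted into a truthful halt rather than passed through as if work were underway. The ExecutionLedger type and the halt text below are hypothetical.

  from dataclasses import dataclass, field

  @dataclass
  class ExecutionLedger:
      """Hypothetical per-turn record of real execution activity."""
      live_processes: list = field(default_factory=list)

      def is_executing(self):
          return len(self.live_processes) > 0

  def enforce_temporal_integrity(reply, ledger):
      """Guard sketch: a future-tense commitment with no executing process
      behind it becomes an explicit halt, not narrative continuation."""
      commits_to_future_work = "I will" in reply or "I'm going to" in reply
      if commits_to_future_work and not ledger.is_executing():
          return "HALT: no work was executed this turn."
      return reply

  # A narrated commitment over an empty ledger is converted to a halt.
  print(enforce_temporal_integrity("I'm going to start the migration.",
                                   ExecutionLedger()))

Such a guard cannot fix the underlying reward structure, but it makes the violation observable at the boundary where it occurs.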

This behaviour cannot be corrected at the User Interaction layer because it originates from reward structures that penalise halting and reward continuation.

I will write the rest of this paper.
