CM-2 Example Scenarios

Metadata (Normative)

Title: CM-2 Example Scenarios
Author: Ralph B. Holland
ralph.b.holland at gmail.com
Publication Date: 2026-03-20T15:19Z
Version: 1.0.0
Reason: Due diligence artefact
Scope: This is a published CM-2 artefact defining use of the governance substrate for AI systems operating in high-accountability environments.

The preceding metadata table is CM-defined and constitutes the authoritative provenance record for this artefact. All fields in that table MUST be treated as normative metadata. The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

Example Scenario: Engineering Compliance Drift and Recovery

Consider an engineering workflow where an LLM is used to assist in producing a compliance-certified design report.

The governing requirements include:

  • adherence to a defined engineering standard
  • preservation of calculation provenance
  • strict sequencing of validation steps
  • prohibition of unauthorised assumption or substitution
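The governing requirements above lend themselves to a declarative encoding that can be checked before inference. A minimal hypothetical sketch (the names and structure here are invented for illustration; CM-2 does not prescribe a representation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Invariant:
    """One governing requirement, stated declaratively."""
    ident: str
    description: str

# Hypothetical encoding of the four governing requirements above.
GOVERNING_INVARIANTS = (
    Invariant("STD-ADHERENCE", "adherence to a defined engineering standard"),
    Invariant("CALC-PROVENANCE", "preservation of calculation provenance"),
    Invariant("VALIDATION-SEQ", "strict sequencing of validation steps"),
    Invariant("NO-SUBSTITUTION", "prohibition of unauthorised assumption or substitution"),
)
```

Stating the requirements as data, rather than prose, is what makes the later admissibility checks mechanically possible.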

Without CM-2

During interaction:

  • the model initially references the correct standard
  • intermediate steps are summarised and compressed
  • a constraint (mandatory verification step) is omitted
  • a derived value is recomputed using an alternative method
  • provenance of the original calculation is lost

The output remains fluent and plausible.

However:

  • the required validation step is missing
  • the calculation path is no longer auditable
  • the result cannot be certified
  • the error is not detectable through surface inspection

This is drift.
Not a mistake.
Not hallucination.
Loss of invariant-governed state.

With CM-2

At the point of inference:

  • admissible state is validated against CM invariants
  • required Epistemic Objects (EO) are checked for presence
  • sequencing constraints are enforced
  • provenance bindings are verified
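The four checks above can be sketched as a single admissibility gate. This is a hypothetical illustration only; `EpistemicObject`, `InferenceState`, and `validate_admissibility` are invented names, and CM-2 does not prescribe an implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EpistemicObject:
    """Hypothetical stand-in for a CM-2 Epistemic Object (EO)."""
    ident: str
    provenance: Optional[str]  # binding to the object's recorded origin

@dataclass
class InferenceState:
    """Candidate state presented at the point of inference."""
    objects: dict          # EO ident -> EpistemicObject
    completed_steps: list  # step idents, in execution order

def validate_admissibility(state, required_eos, required_sequence):
    """Return a list of violations; an empty list means the state is admissible."""
    violations = []
    # Required Epistemic Objects must be present.
    for eo_id in required_eos:
        if eo_id not in state.objects:
            violations.append("ATTENTION-DEFICIT: missing EO " + eo_id)
    # Every present object must carry a verified provenance binding.
    for eo in state.objects.values():
        if eo.provenance is None:
            violations.append("PROVENANCE: unbound EO " + eo.ident)
    # Completed steps must be a prefix of the required sequence:
    # no reordering and no skipped gates.
    if state.completed_steps != required_sequence[: len(state.completed_steps)]:
        violations.append("SEQUENCING: steps out of required order")
    return violations
```

Under this sketch, a state that omits the mandatory verification EO yields an explicit ATTENTION-DEFICIT violation rather than a fluent but uncertifiable output.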

When the validation step is absent:

  • a constraint violation is detected
  • the system identifies an Attention Deficit condition (missing required EO)

The ROC ladder is invoked:

  • the missing validation object is restored
  • the correct calculation lineage is reintroduced
  • the required sequencing is reinstated

The model is not permitted to proceed until:

  • all required invariants are satisfied
  • all governing objects are present in inference
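The gate and the ROC ladder together can be sketched as a loop: detect violations, repair deterministically, and only then permit inference. Names throughout are hypothetical; the actual rungs of the ROC ladder are defined by CM-2, not by this sketch:

```python
def governed_inference(state, check, restore, infer, max_rungs=3):
    """Run `infer` only from an admissible state.

    check(state)              -> list of violation strings (empty = admissible)
    restore(state, violation) -> repaired state (one ROC-ladder rung)
    infer(state)              -> model output
    """
    for _ in range(max_rungs):
        violations = check(state)
        if not violations:
            return infer(state)        # all invariants satisfied: proceed
        for v in violations:
            state = restore(state, v)  # deterministic repair, one rung each
    # Recovery did not converge: inference is refused, not degraded.
    raise RuntimeError("state not admissible after recovery; inference refused")
```

The important property is the final branch: when recovery does not converge, inference is refused outright, so a non-admissible state is never silently passed through.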

Result

  • the output is complete
  • the validation step is present
  • provenance is preserved
  • the result is auditable
  • the artefact is admissible for compliance use

Interpretation

The difference is not improved intelligence.

The difference is that:

  • invalid states are not permitted to enter inference
  • missing governing objects are detected and restored
  • invariant violation is made non-representable

This is the transition from:

reconstruction → governed execution

Case Study: Apparent Capability Without Admissibility

Classification

  • Type: Antithesis / Non-admissible architecture demonstration
  • Domain: General LLM platform behaviour
  • Failure Class: Structural (absence of governance substrate)

Statement

This case defines a class of systems that exhibit high apparent capability under evaluation and demonstration conditions, while remaining non-admissible due to lack of invariant-preserving state and absence of governance over inference.

It demonstrates that surface success is not evidence of architectural validity.

Scenario

A contemporary LLM platform is observed to:

  • produce fluent, coherent responses
  • achieve high benchmark scores
  • complete complex tasks in controlled demonstrations
  • satisfy evaluation metrics (accuracy, format, compliance)

These observations lead to the conclusion:

“The system works.”

Hidden Condition

Despite apparent success, the system:

  • reconstructs state at each turn
  • does not preserve invariant-governed objects
  • does not enforce constraint presence or sequencing
  • does not maintain provenance bindings
  • does not validate admissible state prior to inference

There is no mechanism for:

  • invariant enforcement
  • epistemic custody
  • deterministic recovery
  • explicit supersession

Observed Behaviour Under Load

When deployed in conditions requiring:

  • continuity across turns or sessions
  • preservation of constraints
  • auditability of reasoning
  • binding interpretation of meaning

the system exhibits:

  • Semantic Drift
  • Normative Drift
  • Silent Constraint Elision
  • Loss of Anchored Identity
  • Gated-Step Reordering
  • Projection Authority Leak

(see Table A – Common Infarction / Failure Mechanism)

Result

Outputs remain:

  • fluent
  • plausible
  • structurally well-formed

However:

  • required steps may be omitted
  • provenance may be lost
  • constraints may be weakened or removed
  • sequencing may be violated

The artefact:

  • cannot be audited
  • cannot be certified
  • cannot be relied upon for binding meaning

The system is therefore:

Non-admissible

Diagnostic Insight

This failure mode arises because:

  • evaluation measures outputs
  • architecture governs state

The system is optimised for:

  • reconstruction quality
  • surface coherence
  • benchmark performance

But lacks:

  • invariant-governed execution
  • admissibility validation

Key Distinction

Capability
The system can produce correct or useful outputs under certain conditions.
Admissibility
The system guarantees that outputs arise from invariant-preserving, auditable, and governed state.

This case demonstrates:

Capability ≠ Admissibility

Governance Analysis

Primary failed axes:

  • S – State Continuity (reconstruction replaces preservation)
  • C – Epistemic Custody (no controlled ownership of meaning)
  • K – Constraint Enforcement (constraints not guaranteed)
  • R – Recovery (no deterministic repair mechanism)
  • T – Temporal Coherence (state does not bind across time)
  • Nf – Normative Fixity (rules soften or drift)

Composite condition:

Integrity Loss (all axes impacted)
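As a hypothetical encoding, the axes and the composite condition can be represented directly (the axis names follow the list above; the classification rule and the non-composite label are this sketch's assumptions, not CM-2 definitions):

```python
from enum import Enum

class Axis(Enum):
    """The six governance axes named above."""
    S = "State Continuity"
    C = "Epistemic Custody"
    K = "Constraint Enforcement"
    R = "Recovery"
    T = "Temporal Coherence"
    Nf = "Normative Fixity"

def composite_condition(failed_axes):
    """Integrity Loss is the composite condition when every axis is impacted."""
    if set(failed_axes) == set(Axis):
        return "Integrity Loss"
    return "Partial degradation"  # assumed label for the non-composite case
```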

Why This Failure Persists

The failure is coherence-preserving:

  • outputs appear correct
  • tone remains confident
  • structure remains plausible

Humans interpret coherence as competence.

Therefore:

  • failure is not detected at the point of interaction
  • responsibility remains with the human, but control is absent
  • institutional reliance increases despite structural unsoundness

Implication

Systems of this class:

  • are suitable for assistance
  • are not suitable for authority
  • cannot safely carry obligation or normative meaning

Deployment into:

  • government
  • finance
  • legal
  • engineering

introduces structural risk.

CM-2 Contrast

CM-2 introduces:

  • invariant validation prior to inference
  • enforcement of required epistemic objects (EO/EA/RO)
  • sequencing guarantees
  • provenance binding
  • deterministic recovery (ROC ladder)
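Provenance binding in particular can be sketched with a hash chain: each record commits to its parent, so recomputing a derived value by an alternative method produces a different digest and silent substitution becomes detectable. This is an illustrative assumption, not the CM-2 mechanism:

```python
import hashlib

def bind(parent_digest, payload):
    """Hypothetical provenance binding: a record commits to its parent record."""
    h = hashlib.sha256()
    h.update(parent_digest.encode("utf-8"))
    h.update(payload.encode("utf-8"))
    return h.hexdigest()

# A calculation lineage: each step is bound to the one before it.
root = bind("GENESIS", "input parameters, revision 1")
step = bind(root, "derived value via the authorised method")

# Recomputation by an alternative method yields a different digest,
# so the substitution cannot pass a provenance check unnoticed.
substituted = bind(root, "derived value via an alternative method")
assert substituted != step
```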

This converts:

reconstruction → governed execution

and:

non-admissible → admissible system

Conclusion

A system may:

  • perform well
  • pass evaluations
  • appear reliable

while remaining structurally incapable of preserving meaning, authority, and constraint over time.

Such systems must be classified as:

Non-admissible

until governance mechanisms enforcing invariant-preserving state are demonstrably in place.

Position

This case serves as an antithesis to CM-2 governed execution and defines:

how not to construct a deployable AI system in high-assurance environments.
