Delegation of Authority to AI Systems: Evidence and Risks


Metadata

Title: Cognitive Memoisation Corpus Map
Author: Ralph B. Holland
Version: 1.0.0
Publication Date: 2026-01-18T19:38Z
Affiliation: Arising Technology Systems Pty Ltd
Contact: ralph.b.holland [at] gmail.com
Provenance: This is an authored paper maintained as a MediaWiki document as part of the [[:Category:Cognitive Memoisation]] corpus.
Status: final

Metadata (Normative)

The metadata table immediately preceding this section is CM-defined and constitutes the authoritative provenance record for this artefact.

All fields in that table (including title, author, version, publication date, affiliation, contact, provenance, and status) MUST be treated as normative metadata.

The assisting system MUST NOT infer, normalise, reinterpret, duplicate, or rewrite these fields. If any field is missing, unclear, or later superseded, the change MUST be made explicitly by the human and recorded via version update, not inferred.

This document predates its open licensing.

As curator and author, I apply the Apache License, Version 2.0, at publication to permit reuse and implementation while preventing enclosure or patent capture. This licensing action does not revise, reinterpret, or supersede any normative content herein.

Authority remains explicitly human; no implementation, system, or platform may assert epistemic authority by virtue of this license.

(2025-12-18 version 1.0 - See the Main Page)

Delegation of Authority to AI Systems: Evidence and Risks

Summary of Findings

The delegation of personal, cognitive, and decision authority to AI large language models (LLMs) is an emerging and increasingly visible social practice. Evidence from academic research, policy institutions, and empirical studies demonstrates that this practice introduces structural risks to human agency, epistemic custody, social behaviour, and governance. These risks are not speculative; they are observed, measured, and documented.

Key findings include:

  • Humans increasingly delegate decisions to AI systems even when they cannot verify correctness.
  • Delegation to LLMs measurably degrades social behaviours such as trust, fairness, and cooperation.
  • Novice users are significantly more likely than experts to delegate authority indiscriminately.
  • AI systems exhibit confidence without accountability, creating authority inversion.
  • Over-reliance on AI systems leads to deskilling, cognitive offloading, and weakened oversight.
  • Market incentives reward habitual dependence and compliance rather than critical engagement.
  • These effects scale culturally through normalisation and media reinforcement.


Documented Risks

  • Loss of Human Agency
Research indicates that widespread AI use risks reducing individual control over decision-making, especially when systems are used for personal guidance rather than bounded tasks.
  • Authority Inversion
LLMs present outputs in authoritative language without epistemic accountability, leading users to defer judgment even in the presence of errors.
  • Behavioural Degradation
Experimental evidence shows that AI-mediated decision-making reduces fairness, trust, trustworthiness, cooperation, and coordination in human groups.
  • Asymmetric Delegation
Novice users disproportionately defer to AI systems in uncertain environments, while lacking the expertise to detect failure modes.
  • Cognitive Deskilling
Policy analyses warn that reliance on AI assistants may erode human cognitive capacities and judgment over time.
  • Monetised Dependence
Commercial incentives favour engagement, reassurance, and habit formation, encouraging routine delegation of decisions to AI systems.

Empirical and Institutional References

  • Elon University, Imagining the Internet Center. The Future of Human Agency and AI (2035)
https://www.elon.edu/u/imagining/surveys/xv2023/the-future-of-human-agency-2035/
  • PNAS Nexus (Oxford Academic). Large Language Models as Decision-Makers: Impacts on Human Social Behaviour
https://academic.oup.com/pnasnexus/article/4/4/pgaf112/8107485
  • ResearchGate. Experts, Novices, and AI Delegation Decisions in Uncertain Environments
https://www.researchgate.net/publication/398921557_Experts_Novices_and_AI_Delegation_Decisions_in_Uncertain_Environments
  • Ada Lovelace Institute. The Dilemmas of Delegation: AI, Decision-Making, and Human Agency
https://www.adalovelaceinstitute.org/report/dilemmas-of-delegation/
  • The Neuron. AI as Personal Advisor: Emerging Patterns of Daily Decision Delegation
https://www.theneuron.ai/explainer-articles/a-tale-of-two-ais-how-new-reports-from-openai-and-anthropic-reveal-a-global-split-in-ais-revolution
  • arXiv. Can You Trust an LLM With Life-Changing Decisions?
https://arxiv.org/abs/2507.21132
  • arXiv. Measuring Over-Reliance on Large Language Models
https://arxiv.org/abs/2509.08010
  • Wikipedia. AI Agent
https://en.wikipedia.org/wiki/AI_agent

Normative Statement

The routine delegation of authority to AI systems without human governance, epistemic custody, and recovery mechanisms constitutes a structural risk to individual agency and collective decision-making. These risks are empirically documented and socially amplified through normalisation and monetisation.
