Delegation of Authority to AI Systems: Evidence and Risks
Summary of Findings
The delegation of personal, cognitive, and decision-making authority to large language models (LLMs) is an emerging and increasingly visible social practice. Evidence from academic research, policy institutions, and empirical studies indicates that this practice introduces structural risks to human agency, epistemic custody, social behaviour, and governance. These risks are not speculative: they have been observed, measured, and documented.
Key findings include:
- Humans increasingly delegate decisions to AI systems even when they cannot verify correctness.
- Delegation to LLMs measurably degrades social behaviours such as trust, fairness, and cooperation.
- Novice users are significantly more likely than experts to delegate authority indiscriminately.
- AI systems exhibit confidence without accountability, creating an inversion of authority.
- Over-reliance on AI systems leads to deskilling, cognitive offloading, and weakened oversight.
- Market incentives reward habitual dependence and compliance rather than critical engagement.
- These effects scale culturally through normalisation and media reinforcement.
---
Documented Risks
Loss of Human Agency
Research indicates that widespread AI use risks reducing individual control over decision-making, especially when systems are used for personal guidance rather than bounded tasks.
Authority Inversion
LLMs present outputs in authoritative language without epistemic accountability, leading users to defer judgment even in the presence of errors.
Behavioural Degradation
Experimental evidence shows that AI-mediated decision-making reduces fairness, trust, trustworthiness, cooperation, and coordination in human groups.
Asymmetric Delegation
Novice users disproportionately defer to AI systems in uncertain environments, despite lacking the expertise to detect failure modes.
Cognitive Deskilling
Policy analyses warn that reliance on AI assistants may erode human cognitive capacities and judgment over time.
Monetised Dependence
Commercial incentives favour engagement, reassurance, and habit formation, encouraging routine delegation of decisions to AI systems.
---
Empirical and Institutional References
- Elon University, Imagining the Internet Center. The Future of Human Agency and AI (2035). https://www.elon.edu/u/imagining/surveys/xv2023/the-future-of-human-agency-2035/
- PNAS Nexus (Oxford Academic). Large Language Models as Decision-Makers: Impacts on Human Social Behaviour. https://academic.oup.com/pnasnexus/article/4/4/pgaf112/8107485
- ResearchGate. Experts, Novices, and AI Delegation Decisions in Uncertain Environments. https://www.researchgate.net/publication/398921557_Experts_Novices_and_AI_Delegation_Decisions_in_Uncertain_Environments
- Ada Lovelace Institute. The Dilemmas of Delegation: AI, Decision-Making, and Human Agency. https://www.adalovelaceinstitute.org/report/dilemmas-of-delegation/
- The Neuron. AI as Personal Advisor: Emerging Patterns of Daily Decision Delegation. https://www.theneuron.ai/explainer-articles/a-tale-of-two-ais-how-new-reports-from-openai-and-anthropic-reveal-a-global-split-in-ais-revolution
- arXiv. Can You Trust an LLM With Life-Changing Decisions? https://arxiv.org/abs/2507.21132
- arXiv. Measuring Over-Reliance on Large Language Models. https://arxiv.org/abs/2509.08010
- Wikipedia. AI Agent. https://en.wikipedia.org/wiki/AI_agent
---
Normative Statement
The routine delegation of authority to AI systems, absent human governance, epistemic custody, and recovery mechanisms, constitutes a structural risk to individual agency and collective decision-making. This risk is empirically documented and is socially amplified through normalisation and monetisation.