
The AI dependency loop

A few days ago, I tried to recall a close family member’s phone number and couldn’t. I once knew dozens by heart: family, friends, office lines, even local stores. Then mobile phones arrived, and contact lists took over that mental space. The convenience was clear, but something subtle changed. Our minds stopped practicing those small acts of recall.

That moment made me think about something larger. Today we are extending that habit, handing over more and more of our remembering, reasoning, and deciding to machines that learn and predict for us.


Artificial intelligence now drafts our messages, recommends what to read, plans our calendars, and completes our sentences. Each time it does, we save a few moments. Over time, we may also surrender a few habits of thought.



Cognitive outsourcing

Researchers at McGill University studied this pattern in a different context. In 2020, Louisa Dahmani and Véronique Bohbot published findings showing that people who frequently used GPS performed worse on self-guided navigation tasks. They called it a case of cognitive offloading, where a tool takes over a mental function. The full paper appeared in Scientific Reports (Nature.com).


A later meta-analysis by the University of Padua found a consistent negative correlation between habitual GPS use and spatial knowledge. The study is available through the University of Padua Research Portal.


While these studies focus on navigation, the pattern generalizes: once a task moves from memory to machine, our unaided ability to perform it tends to fade.



Data extraction and prediction

AI depends on data. Every search, click, and pause adds to a behavioral model that predicts what we will want next. Shoshana Zuboff, professor emerita at Harvard, described this process in The Age of Surveillance Capitalism. She explains how companies turn human experience into raw material for prediction and profit. The Harvard Gazette has summarized her work.


As more data is collected, the system’s predictions become increasingly accurate. With higher accuracy comes greater persuasive power. Each interaction supplies new input that improves the model and strengthens the cycle. Convenience keeps the loop running.
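
To make that loop concrete, here is a minimal Python sketch of the dynamic. Everything in it is invented for illustration: the topics, the click rates, and the model are hypothetical stand-ins, not any real platform’s system.

```python
import random

random.seed(42)

# A user's true (hidden) preferences: probability of clicking each topic.
# These numbers are made up for the sketch.
true_preference = {"sports": 0.8, "politics": 0.3, "cooking": 0.6}

# The behavioral model starts out ignorant: just a click count and an
# impression count per topic.
model = {topic: {"clicks": 0, "shown": 0} for topic in true_preference}

def predicted_rate(topic):
    """Estimated click probability; a coin flip before any data arrives."""
    shown = model[topic]["shown"]
    return model[topic]["clicks"] / shown if shown else 0.5

# The loop: every interaction becomes a new training signal.
for _ in range(3000):
    topic = random.choice(list(true_preference))
    clicked = random.random() < true_preference[topic]  # the user's behavior
    model[topic]["shown"] += 1
    model[topic]["clicks"] += clicked                    # data feeds the model

for topic in true_preference:
    print(f"{topic:9s} true={true_preference[topic]:.2f} "
          f"predicted={predicted_rate(topic):.2f}")
```

Even this toy model converges on the user’s actual behavior after a few thousand interactions. Scale the same dynamic up to billions of interactions and thousands of signals, and the persuasive power Zuboff describes follows naturally.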



Ecosystem lock-in

Modern AI systems are embedded across entire ecosystems. One platform handles your messages, notes, photos, and work files. Leaving it often means losing history or compatibility. The European Union addressed this issue through the Digital Markets Act, which requires large platforms to ensure effective data portability. Details are available from the European Commission.


Despite these requirements, moving data remains complex. Files often export in unreadable formats, and automation breaks when transferred. Friction keeps users from leaving. This design benefits corporations by keeping users within their ecosystems, where behavior can be continuously analyzed and monetized.


Until regulation explicitly grants individuals ownership of their personal data and control over how it is shared, the balance of power will remain with the platforms that collect and profit from it.



Attention and agency

The Center for Humane Technology describes this as part of the “attention economy,” where technology products compete for time and emotional engagement. An overview is available from the Center for Humane Technology.


When engagement becomes the goal, design choices follow. Algorithms learn which emotions keep users scrolling. The result is a system optimized for reaction, not reflection. Tristan Harris, co-founder of the center, has discussed how these systems alter attention and agency. His perspective is summarized at Humanetech.com.


The same mechanisms that make AI seem helpful can make it quietly persuasive. You can see this everywhere today, especially on social media. Algorithms prioritize content that keeps users engaged longer, often by amplifying emotion, outrage, or novelty. The system learns which posts drive interaction and adapts to reinforce those behaviors.


Over time, this continuous feedback loop shapes what people see and how they respond. What feels like personal choice is often a product of statistical optimization.
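
A small simulation shows how easily that drift happens. The sketch below uses a standard epsilon-greedy ranking strategy over three invented post types; the engagement rates are hypothetical, and no part of it corresponds to any real platform’s code.

```python
import random

random.seed(7)

# Average chance a user engages with each kind of post (hidden from the
# algorithm; in this toy world, outrage "wins" on raw engagement).
engagement_rate = {"outrage": 0.35, "novelty": 0.25, "neutral": 0.10}
stats = {kind: {"engaged": 0, "served": 0} for kind in engagement_rate}

def choose_post(explore=0.1):
    """Mostly exploit the highest observed engagement; sometimes explore."""
    if random.random() < explore:
        return random.choice(list(stats))
    return max(stats,
               key=lambda k: stats[k]["engaged"] / max(stats[k]["served"], 1))

# Serve 5,000 impressions; each reaction updates the ranker's statistics.
for _ in range(5000):
    kind = choose_post()
    stats[kind]["served"] += 1
    stats[kind]["engaged"] += random.random() < engagement_rate[kind]

for kind, s in stats.items():
    print(f"{kind:8s} share_of_feed={s['served'] / 5000:.0%}")
```

The ranker is never told to prefer outrage. It is only told to maximize engagement, and because outrage engages best in this toy world, the feed drifts toward it on its own. That is statistical optimization standing in for personal choice.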



Reclaiming autonomy

AI can improve accuracy, speed, and access to knowledge. Its risks lie in how we use it. The more we rely on automation, the more we risk dulling the skills it replaces.


Try a simple exercise: recall a few phone numbers without looking them up. Write something without predictive text. Draw a map from memory before turning on GPS. These are small ways to keep cognitive muscles active.


Governments can strengthen rights around data portability and algorithmic transparency. Companies can design for user control instead of passive dependence. Individuals can choose when to rely on machines and when to think unaided.


The question is not whether AI will think for us. The question is whether we will keep thinking for ourselves.





References
  • Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10, 6310. Nature.com

  • Ishikawa, T., et al. (2021). GPS use and navigation ability: A systematic review and meta-analysis. University of Padua. Research.unipd.it

  • Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs. Harvard Gazette

  • European Commission. (2022). Digital Markets Act. Press release

  • Center for Humane Technology. (2023). The Attention Economy. Humanetech.com

  • Center for Humane Technology. (2023). Attention and Mental Health. Humanetech.com



