Permeable: The Monetization of Human Fragility

by Muhammad Aurangzeb Ahmad

“Amongst Men” by Haroon Gunn-Salie

There is a particular moment of weakness that most of us recognize without needing to name it. You are lying in bed, and it is late at night. The light from the phone is brighter than anything else in the room. The day has thinned your patience as well as your judgment. You click on posts and stories that you would otherwise have ignored. You may even be tempted to buy what you do not need. In other words, your guard is down. You read for a few seconds and then doomscroll to the next story, and then the next. You feel like going to sleep, but then you lie to yourself: one more scroll. Welcome to the world of the attention economy, a system in which human focus is treated as a scarce resource to be captured, measured, and monetized by digital platforms. The attention economy began with a relatively simple goal: capture and hold the gaze. Over time, that ambition evolved from focusing on what you like to focusing on when you are least able to resist. Timing, not just taste, has become the new frontier for these algorithms. While it is true that these platforms were informed by decades of research on human psychology, they do not need to understand us in any deep, human sense. What they do need to do is predict when you are tired, lonely, bored, anxious, or depleted. It is precisely in these states of vulnerability that it is easiest to influence people and steal their attention.

Behavioral science has long recognized that cognition is not insulated from what else is going on in one’s life. Decision fatigue is real. Cognitive depletion makes us more suggestible. Our skepticism varies with sleep, stress, and emotional load. What has changed is that our devices now emit a steady exhaust of behavioral signals that correlate with these states. The phone does not know that you are lonely. It does not need to. It only needs to register that your scrolling pattern has slowed, that you are lingering on certain kinds of content, that the hour is late, that your interactions look different from your baseline. Correlation at scale does not need understanding. This targeting of human vulnerability is not a rhetorical device; it is anchored in behavioral research. In clinical research, the idea of delivering support at the right moment has matured into the framework of just-in-time adaptive interventions (JITAIs). These interventions are designed to detect states of vulnerability and receptivity and to intervene precisely when help is most likely to succeed. The same logic that can prompt someone to take medication, go for a walk, or reach out to a friend can also prompt them to gamble, to purchase, to keep scrolling. The architecture of timing is morally neutral. The metric it serves may not be.
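
To make the mechanism concrete, consider a minimal sketch of how such state detection might work. Everything in it is hypothetical; the signal names, thresholds, and weights are illustrative rather than drawn from any actual platform. The point is that a crude score over a handful of behavioral signals, compared against a personal baseline, is enough to flag a “depleted” window without any understanding of why the person is depleted.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    hour: int               # local hour of day, 0-23
    scroll_speed: float     # swipes per minute in this session
    dwell_time: float       # mean seconds spent per item
    baseline_scroll: float  # user's historical median swipes per minute
    baseline_dwell: float   # user's historical median dwell seconds

def vulnerability_score(s: SessionSignals) -> float:
    """Toy heuristic: deviation from baseline plus the late hour.

    A real system would learn its weights from data, but the inputs
    would be the same kind of behavioral exhaust described above.
    """
    score = 0.0
    if s.hour >= 23 or s.hour <= 2:               # late-night window
        score += 0.4
    if s.scroll_speed < 0.7 * s.baseline_scroll:  # scrolling has slowed
        score += 0.3
    if s.dwell_time > 1.5 * s.baseline_dwell:     # lingering on content
        score += 0.3
    return score

# The same score can trigger a supportive nudge (a JITAI)
# or a perfectly timed ad; the detector does not care.
session = SessionSignals(hour=0, scroll_speed=8.0, dwell_time=12.0,
                         baseline_scroll=15.0, baseline_dwell=6.0)
print(vulnerability_score(session))  # ~1.0: flagged as a "depleted" window
```

Nothing in that function knows what loneliness is. It is the downstream metric, the thing the trigger is wired to, that makes it a therapy or a trap.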

Behavioral research has demonstrated that timing is not a trivial variable. A study in JAMA Network Open examining nighttime smartphone use among adults at high suicide risk found that use between 11 p.m. and 1 a.m. was associated with higher next-day suicidal ideation and planning, independent of total time spent on the phone. What mattered was not simply duration but context and hour. The mind at midnight is not the mind at noon. If such windows can be detected for clinical risk assessment, they can also be detected for engagement optimization. The data is agnostic about how it will be used. The children’s content ecosystem makes the stakes even more visible. Over the past year, reporting has documented a proliferation of mass-produced, AI-generated children’s videos, sometimes called “AI slop.” These are optimized not for educational coherence but for retention and views. Bloomberg described content pitched even to very young children that mimics educational structure while functioning primarily as engagement bait. Journalists have chronicled surreal, repetitive, algorithmically amplified videos flooding YouTube. There are now multiple channels serving AI-generated cartoon content that appears child-friendly on the surface but veers into disturbing and inappropriate imagery. This is not because there is a grand conspiracy to poison the minds of children; rather, there is a system tuned to maximize watch time among the least defended minds.

Children are not simply small adults; they are developing cognitive systems. Their boredom thresholds, their impulse control, and their capacity for skepticism are not fully formed. To optimize an environment for holding their attention is to shape the architecture of that development. The “invisible babysitter” is not just occupying time; it is actively shaping who they are. The long-term problem is that once attention is trained toward constant novelty and frictionless stimulation, sustained focus becomes harder to recover. Vulnerability, moreover, is not distributed evenly. Algorithmic ad delivery can differ sharply by social class, with lower-income boys reportedly exposed to substantially more gambling-related advertising than their higher-income peers. Here is the problem: personalization systems can infer socioeconomic signals, and the algorithms can then route users into different commercial worlds, e.g., some could be fed lifestyle brands while others are fed junk food. This means that attention and temptation become stratified, as the sketch below illustrates. The deeper problem is that the design of sociotechnical systems can be weaponized against their users. Human-computer interaction scholarship calls these dark patterns, i.e., interface designs that distort behavior through misleading cues, friction asymmetries, or engineered urgency. Scholars have even begun to frame attention itself as an autonomy substrate. If attention is persistently redirected by design, then the conditions for meaningful choice erode. In this framing, the problem is not that users are weak. The problem is that systems are engineered to capitalize on predictable weaknesses.
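
The stratification mechanism is almost embarrassingly simple to express. In the hypothetical sketch below, the attribute name and ad-pool labels are invented for illustration, but they show how a single inferred signal can silently decide which commercial world a user ever sees.

```python
def pick_ad_pool(inferred_income: str, is_minor: bool) -> str:
    """Hypothetical ad-routing rule; all labels are illustrative.

    One inferred attribute is enough to stratify temptation:
    different users are shown structurally different worlds.
    """
    if is_minor:
        return "toys_snacks_games"    # the least defended audience
    if inferred_income == "low":
        return "gambling_and_credit"  # the pattern reporting describes
    return "lifestyle_brands"
```

No single line of such a rule looks sinister; the stratification lives in the branching itself.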

Regulators are finally addressing such issues. In February 2026, the European Commission announced preliminary findings that TikTok’s addictive design features, such as infinite scroll, autoplay, and highly personalized recommender systems, breach obligations under the Digital Services Act. What these moves toward regulation show is that the object of concern for regulators is not just what is shown to the end user, but how and when it is shown. An ACM review on embedding human values into recommenders underscores that these systems are not peripheral filters but central selection mechanisms shaping what enters consciousness. When short-term engagement is the key metric for such algorithmic systems, the system will select for whatever stimuli most effectively capture the mind. More often than not, these are the signals that exploit fatigue, outrage, or curiosity gaps. Policy reports such as Georgetown’s Better Feeds argue that shifting optimization targets toward longer-term user well-being is not utopian but a matter of metric design, as the sketch below suggests. Incentives, once set, cascade through ecosystems. To be fair, however, the evidence linking social media use and mental health is complex. Some large-scale studies have found that total screen time alone is a poor predictor of adolescent mental health outcomes. Context, directionality, and individual differences matter.
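
At bottom, the metric-design argument comes down to a single term in a ranking objective. The sketch below is hypothetical; the probabilities and weights are illustrative, not anyone’s production formula. With the well-being weight at zero, it is a pure short-term engagement ranker; raising the weight changes which content wins, which is exactly what shifting optimization targets means.

```python
def rank_score(p_click: float, p_watch: float, p_longterm_value: float,
               wellbeing_weight: float = 0.0) -> float:
    """Toy ranking objective. wellbeing_weight = 0 reproduces pure
    short-term engagement; raising it shifts what the feed selects for."""
    engagement = 0.5 * p_click + 0.5 * p_watch
    return (1 - wellbeing_weight) * engagement + wellbeing_weight * p_longterm_value

# A piece of "engagement bait": great short-term numbers, poor long-term value.
bait = dict(p_click=0.9, p_watch=0.8, p_longterm_value=0.1)
print(rank_score(**bait))                        # ~0.85 under engagement-only
print(rank_score(**bait, wellbeing_weight=0.5))  # ~0.48 once well-being counts
```

The feed does not have to become a different machine; the same ranker, fed a different objective, surfaces a different world.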

What we need is an ethics of attention. It would begin by recognizing that targeting moments of vulnerability is not fair game. It would treat late-night depletion, post-crisis fragility, and childhood cognitive development as protected contexts rather than profit opportunities. It would question business models that thrive on insomnia, loneliness, and compulsion. It would demand that recommender systems be evaluated not only by engagement metrics but by their effects on autonomy and dignity over time. There is a difference between persuading a person and catching them at their weakest. If we care about autonomy as more than a slogan, we will have to decide which side of that line our technologies are allowed to stand on. In Brave New World, Huxley imagined a future where we would come to love our own diminishment. The attention economy suggests he may have been less dystopian than diagnostic.