Gréta Tímea Biró at Sapiens:
Dora and I walked through the quiet nighttime streets of Chow Kit, a downtown neighborhood in Kuala Lumpur. Pungent food smells mingled with the sweet scent of fruit and flowers from a nearby market. Abandoned rainbow-colored confetti, shivering under the dim, yellowish streetlights, reminded us of some celebration that had taken place earlier.
In the 1970s and 1980s, Chow Kit was a bustling red-light district. Today only around 15 to 20 sex workers can be seen on any given night, according to Dora. The decline is due to a worsening economy and increased surveillance by Islamic authorities.
“Most hide from the religious police in these rundown buildings, hoping to find clients using apps,” she said. As we passed a police station, Dora explained that officers required bribes from each sex worker to allow them to work. A local mafioso further exploited them, demanding “protection money” while offering no real security.
More here.

Most current digital doppelgängers are, for all practical purposes, automatons, i.e., their behavior is relatively fixed within relatively well-defined boundaries. I would argue that this is a feature and not a bug. The fixed nature of the automata is what gives them the feeling of familiarity. Now imagine if we were to take away this assumption and tried to incorporate some semblance of autonomy in digital doppelgängers. In other words, we would allow them to evolve and make their own decisions while staying true to the original person they are based upon. A digital self trained on a person’s emails, messages, journals, and conversations may approximate that person’s style, but approximation is not equivalent to being the same. Over time, the model encounters friction, e.g., queries it cannot answer cleanly, emotional tones it cannot reconcile, contradictions it can detect but not resolve. If we let the digital doppelgänger evolve to address these challenges, divergence between the model and the original will start to emerge, until at some point one is forced to admit that one is no longer dealing with a representation of the same person. What if it is not an outside interlocutor that comes upon this realization but the digital clone itself?
I first discovered the poetry of Weldon Kees in 1976—fifty years ago—while working a summer job in Minneapolis. I came across a selection of his poems in a library anthology. I didn’t recognize his name. I might have skipped over the section had I not noticed in the brief headnote that he had died in San Francisco by leaping off the Golden Gate Bridge. As a Californian in exile, I found that grim and isolated fact intriguing.
I write this from the front of a Columbia classroom in which about 60 first-year college students are taking the final exam for Frontiers of
On 1 November 2025, the south-western Indian state of Kerala – home to 34 million people – was
The way the fabled investor Bill Ackman sees it, he was born to move markets. It’s right there in the name: BILL-ionaire ACK-tivist MAN, as the 59-year-old always loves pointing out, whether in a
The emergence of agentic Artificial Intelligence (AI) is set to trigger a “Cambrian explosion” of new kinds of personhood. This paper proposes a pragmatic framework for navigating this diversification by treating personhood not as a metaphysical property to be discovered, but as a flexible bundle of obligations (rights and responsibilities) that societies confer upon entities for a variety of reasons, especially to solve concrete governance problems. We argue that this traditional bundle can be unbundled, creating bespoke solutions for different contexts. This will allow for the creation of practical tools—such as facilitating AI contracting by creating a target “individual” that can be sanctioned—without needing to resolve intractable debates about an AI’s consciousness or rationality. We explore how individuals fit into social roles and discuss the use of decentralized digital identity technology, examining both ‘personhood as a problem’, where design choices can create “dark patterns” that exploit human social heuristics, and ‘personhood as a solution’, where conferring a bundle of obligations is necessary to ensure accountability or prevent conflict. By rejecting foundationalist quests for a single, essential definition of personhood, this paper offers a more pragmatic and flexible way to think about integrating AI agents into our society.