The Ghost in the Machine (Part II): Simplifying the Ghost of AI

by Ali Minai

But nature is a stranger yet:
The ones that cite her most
Have never passed her haunted house,
Nor simplified her ghost.

—Emily Dickinson, “What Mystery Pervades a Well”

The first article of this two-part series laid out the idea of emergence in complex systems, discussed how the appearance of abilities such as the generation of grammatical, syntactically correct, and meaningful text can reasonably be seen as an example of emergence, and explained why these emergent abilities are nonetheless just a shadow – or ghost – of the deeper language generation process in humans. This second part delves more deeply into that last point, making a detailed argument for why the linguistic abilities of LLMs should be seen as limited and what would be needed to extend them.

The Meaning of Meaning

Perhaps the most critical difference between an LLM’s model of language and that of a human is in the way meanings are understood in the two. The task on which LLMs are trained provides no explicit information about meanings, and depends only on knowing the structural relationships between words in text. However, the fact that LLMs almost always use words in meaningful ways indicates that they have an implicit model of meanings too. What is the nature of that model? The answer lies almost certainly in a linguistic idea called the distributional hypothesis of meaning, which says that the meaning of a word can be inferred from the statistics of its use in the context of other words. As described above, LLMs based on transformers are predisposed to the statistical learning of structural relationships in text, and their representations of meaning must be derived implicitly from this because of the tight linkage between word usage and meaning per the distributional hypothesis. Given enough data, the statistics can become very accurate – hence the meaningful output of GPT-4 et al. But such meanings – though accurate for the purposes of usage – are abstractions. In contrast, the meanings in the human mind are grounded in experience. GPT-4 might understand the meaning of the word “burn” as something associated with the words “fire”, “heat”, “flame”, “smoke”, “pain”, etc., but a person understands it in terms of the sensation of feeling heat and getting burned. Similarly, GPT-4 may use the words “near” and “far” correctly, but it has no experience of an object being near enough to touch or too far away to recognize. Concrete meanings in the human mind are thus grounded in the fact of the human animal being embodied – a physical system with sensations and the capability of action that leads to physical consequences. The LLM is “all talk” – just simulating concrete meanings as ungrounded symbols. Of course, not all concepts in human language are concrete enough to be defined in direct experiential terms, and there is ongoing debate about how the mind grounds abstract meanings. However, it seems plausible that they are grounded in the substrate of more concrete meanings, and thus indirectly in experience as well (see reference [1] for a recent overview and reference [2] for my views in detail). It is in this sense above all that LLMs are ghosts in the machine. Read more »
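The distributional hypothesis is easy to demonstrate in miniature. Here is a minimal sketch – my own illustration, not anything from the article or from any real LLM – in which word “meanings” are simply co-occurrence vectors built from a toy corpus and compared by cosine similarity; the corpus, window size, and test words are all assumptions chosen for the demonstration.

```python
# Minimal sketch of the distributional hypothesis (illustrative only):
# words that occur in similar contexts end up with similar co-occurrence
# vectors, so their "meanings" can be compared with no grounding at all.
from collections import Counter, defaultdict
from math import sqrt

# A toy corpus, invented for this demo to echo the article's example words.
corpus = ("hot fire burns skin hot flame burns wood "
          "cold water cools skin cold ice cools air").split()

WINDOW = 2  # neighbors on each side that count as "context"

# For every word, count which other words appear near it.
cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            cooc[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "fire" and "flame" share contexts, so they come out far more similar
# than "fire" and "water" – usage statistics standing in for meaning.
print(cosine(cooc["fire"], cooc["flame"]))  # high (~0.87 on this corpus)
print(cosine(cooc["fire"], cooc["water"]))  # low  (~0.29 on this corpus)
```

Nothing in this computation ever touches heat or pain; scaled up by many orders of magnitude, usage statistics of this kind are the only signal an LLM has, which is exactly the sense in which its meanings are abstractions rather than grounded experience.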



The Pizza (or Cognitive Bias and Uses of Distraction)

by Tim Sommers

I’m not really interested in magic. But I am interested in crime. So, recently, while reading a discussion of close-up magic by neuroscientists, I perked up when they got to the question of how criminals, and others of questionable character (like magicians), steal wristwatches right off their victims’ wrists without being detected. (Still relevant? Wristwatch wearing, admittedly, is way down from its peak in the 1950s, but a third of adult males still wear one every day – and then there are smartwatches, step counters, etc.)

Apparently, there’s a skill and a trick involved in stealing a watch. The skill is to learn to undo the clasp using just your middle and pointer fingers, while simultaneously shaking hands with someone. Sure, that’s quite a skill to master, but I was more curious about how you then yank a watch off someone’s wrist without them noticing – particularly a watch with a “deployant clasp” that doesn’t open all the way.

Here’s what you do. You just clap the person on the shoulder at the same moment you yank off the watch. Apparently, this works. It’s all about distraction. Magicians have a fancy word for distraction. They call it “misdirection.” Personally, I find this irritating. I’ll stick with distraction.

Here’s another example of distraction from a magician. Before the start of a magic show I was dragged to, the audience was encouraged to come on stage and examine this large container to make sure it was real, solid, and had no trap door or hidden egress. Right before the show started the container was turned around by assistants, then closed. When the show opened, the container was turned back around, opened, and the magician stepped out to thunderous applause.

The crazy part is how the trick works. Having the audience check it out ahead of time was just a distraction. When they turned the container around, before they closed the door, the magician strolled casually out of the wings and climbed inside in full view of the audience. I saw it clearly because I knew it was going to happen (because I had read a book the magician had written). But shouldn’t everyone have seen it? (Don’t even get me started on the “Invisible Gorilla.”)

Which brings me to my old roommate Nick. Read more »

Driving Under the Influence of Cancel Culture

by Steven Gimbel and Gwydion Suilebhan

Whether you love, hate, or tolerate Tesla may say something about the era you grew up in.

Every generation, when it reaches a certain age, makes two proclamations: Saturday Night Live used to be funnier, and “kids these days” are lazy and stupid.

Neither claim is true, of course, but they feel true. We members of Generation X insist SNL isn’t as good as it was, but that’s because they aren’t making jokes for us anymore. Their audience now consists largely of Millennials, who have a unique sense of humor shaped by the age in which they were raised. That younger audience isn’t lazy or stupid, either. Millennials work hard, but in different ways, and they know different things than we do as well.

Still, there are meaningful differences between Gen Xers and Millennials, and one of those differences has become particularly stark of late: our tolerance for moral ambiguity.

As Gen Xers, we grew up in a world in which our favorite sports heroes were discovered taking steroids, and we had to figure out how to keep rooting for them anyway. A world in which bona fide heroes like Senators John Glenn and John McCain could get caught up in a major political scandal and still get re-elected. A world in which our own idealistic (and idealized) Baby Boomer parents entered their peak earning years and started voting for tax cuts instead of justice. We learned to distrust everything and everyone: even those we loved.

We also learned to hold multiple conflicting opinions at the same time. We need more people of color on the Supreme Court AND Clarence Thomas harassed Anita Hill AND the accusations against him played into racist tropes about Black men. Bill Clinton abused his power in having a tryst with Monica Lewinsky AND we shouldn’t portray Lewinsky as a powerless pawn unable to make her own choices AND Clinton’s politics were generally favorable to women AND he was prosecuted by other White men who were equally (or more greatly) compromised.

Life taught us to doubt the existence of pure “good” and “bad.” We came of age in a morally ambiguous world. Millennials, by contrast, came of age in a morally bankrupt world. Read more »

Stoicism as Symptom

by Chris Horner

The general terms ‘true’ and ‘good’ or ‘wisdom’ and ‘virtue’, with which stoicism is stuck, are on the whole undeniably uplifting, but because they cannot in fact end up in any kind of expansion of content, they quickly start to become tiresome. —Hegel [1]

Stoicism seems to be everywhere at the moment, on TikTok, Instagram, YouTube (‘The Daily Stoic’), and in plenty of best-selling books on how to be a Stoic. But why would a philosophy from the ancient world prove so appealing to so many right now? I think I can at least give a partial answer to that. And I also want to raise some of the problems of this Neostoicism. In what follows I will be less concerned with the details of the philosophy as it was taught in ancient times, the developments it went through, or the logic and metaphysics it involved, than with the way it has been received in the 21st century. Stoicism is a symptom of a malaise, a problem in the modern world, rather than any kind of solution to its ills. But first – what is, or was, Stoicism? [2]

Originally associated with Zeno of Citium in the 3rd century BCE, it is a philosophy focused on developing self-control, fortitude, and reason as a way to overcome destructive emotions and to achieve inner peace and resilience by focusing on what is within one’s control (one’s own feelings and thoughts) while letting go of what is outside it. External events and other people’s actions should not disturb one’s inner tranquility. Rather, one cultivates an attitude of calm and detachment from external events. Stoics believed in the importance of reason, logic and self-discipline as essential for leading a fulfilling life, based on following Nature, which has a logos, an order with which we must harmonise our thoughts, feelings and actions.

It was a philosophy that developed and changed over the course of the ancient world, but these main points hold true for the philosophies of such famous Stoics as Seneca, Epictetus, and Marcus Aurelius. Read more »

Pausing

by Eric Bies

I think of Pearl S. Buck and end up thinking of William F. Buckley. I think of The Good Earth—of hovels unworthy of Frankenstein’s monster and a palace sacked in smoke, its shriveled matriarch glued to an opium pipe—and end up thinking of Firing Line. Clicking over to YouTube, typing, hitting enter, clicking, watching: after the FBI Warning about unauthorized reproduction and so forth, past the Hoover Institution’s little slide about same, through the twittery culture-conferring bit of Bach as theme, this is number 267, an interview with Jorge Luis Borges. I giggle a little as South America’s Titan, gazing literally blindly out past everyone in attendance that day in Buenos Aires, clips and interrupts Buckley’s stately introduction, an introduction that begins with Buckley reading off words about Borges, by Borges.

Buckley: About himself he said recently, “As for a message, well, I have no message. Some things—”

Borges: That’s right, there’s no message whatever.

Buckley [places a hand on Borges’ arm]: “Some things simply occur to me and I write them down with no aim to hurt anyone or to convert anyone. This is all I can say. I make this public confession of my poverty before everybody—”

Borges: Yes.

Buckley: “Besides, had I not done so, you would have known—”

Borges: But I think you may know.

Buckley: “—you would have known it was true,” yes that’s what I said.

Borges: Yes.

Buckley: I’m just going to finish this introduction, and then we’ll exchange—

Borges: That’s right, yes, that’s right.

Buckley: Uh, about him others—

Borges: [inarticulate]

Buckley: About him others have written that he is the greatest living writer. Still others, that he has influenced the literature of the world more than anyone alive. Jorge Luis Borges lives here in Buenos Aires, although he has traveled extensively, especially in the United States, and taught most recently at Harvard for a year. He is blind, since the late fifties. He does not mind it, he says [Buckley’s brows go up], because now he can live his dreams with less distraction—

Me: CLICK.

Video paused, I get up from my desk, traverse the kitchenette, use the toilet, and refill my glass with water from the tap. My apartment isn’t bugged or anything, but if it were, say, with a dozen little cameras, or even if the little man in my phone were to tune in for a peep to help pass the afternoon, dangling his eye from the eye of my phone which I carry from point to point before returning to my desk—nothing would appear awry. On the surface, I, the walls, even the water in this glass remain totally unperturbed. But, for seventy seconds or so, those last few pre-pause words, far from the conclusion of Buckley’s introduction, have been shimmering brightly inside of me. Read more »

When Will the US Government’s Failure to Decarbonize the Healthcare Industry Compel Litigation?

by David Introcaso

In late March the United Nations adopted a landmark resolution requesting an advisory opinion from the International Court of Justice “on the obligations of States in respect of climate change.” Specifically, the resolution seeks an opinion regarding the legal consequences states may face under international law for acts or omissions that have caused significant damage to the climate and that, in turn, harm other states. The US opposed the resolution, arguing, disingenuously, that diplomatic efforts constitute the best approach to addressing the climate crisis. Disingenuous in that the US, for example, is the only country not to ratify the UN’s 1992 Convention on Biological Diversity. Also in late March, the European Court of Human Rights heard two lawsuits brought by French and Swiss citizens who argued their governments had violated their human rights by failing to address the climate crisis. In the fall the European Court will hear a related case brought by Portuguese citizens that names all 27 European Union member states and five other nations as defendants.

Because annual global greenhouse gas (GHG) emissions continue to increase and because there is no credible pathway to limiting global warming to an average of 1.5C, it is not surprising that the number of climate crisis-related lawsuits has increased rapidly since 2020. Of the 2,000 filed globally, 500 are in the US. The most noted US lawsuit is Juliana v. the US, a case that has been termed the most important in history, largely because the US is responsible for 40% of excess global GHG emissions. In 2015, 21 youth plaintiffs, some of them minorities and some as young as eight, filed a lawsuit against President Obama and seven executive departments, including Agriculture, Commerce, Energy, and the Department of Defense, the world’s largest institutional GHG polluter. The Department of Health and Human Services (HHS) was not named despite the fact that the healthcare industry, extensively regulated and almost entirely financed or subsidized by the federal government, emits four times more GHG than the defense department. The plaintiffs allege the federal government has violated, in part, their constitutional due process rights by supporting the use of fossil fuels for over fifty years. The government does this, the plaintiffs argue, despite knowing that the resulting GHG emissions endanger the climate’s stability, thereby compromising the plaintiffs’ health and the health of all future Americans. The plaintiffs asked the court to order the government to implement a plan to phase out fossil fuel use and draw down excess atmospheric GHG emissions. Read more »

Why We Have No Theory of Gastronomy

by Dwight Furrow

The term “gastronomy” has no agreed-upon, definitive meaning. Its common meaning, captured in dictionary definitions, is that gastronomy is the art and science of good eating. But the term is often expanded to include food history, nutrition, and the ecological, political, and social ramifications of food production and consumption. For my purposes, I want to focus on the conventional meaning of gastronomy, for which that dictionary definition will suffice.

We have thousands of recipes from all over the world and, thanks to food historians, this data spans many generations. From this vast database, we know the combinations of ingredients that cooks have used to satisfy our need for enjoyment. We have practical guides to the techniques and methods that make each dish successful as elaborated in countless media devoted to cooking. And we have a robust science of cooking that explains the chemical interactions that occur when dishes are properly made and that also expands our understanding of what is possible. But we don’t have a general theory of the organization and structure of dishes that explains what it means for something to “taste good.” In other words, we can give accounts of what it means for a paella to taste good according to conventional standards of paella making, while acknowledging widespread disagreement on some of the details. But we have no theory of how that is related to a butter chicken or ossobuco tasting good. In gastronomy there is nothing akin to music theory or theories in the visual arts that elaborate the general conditions for the composition of recipes and no account of what kind of aesthetic achievement a dish or a meal is.

This is not to say there are no rules of thumb that guide chefs and cooks. Good dishes must be skillfully made, balanced, have enough flavor variation and texture to be interesting, be appropriate to the season or occasion, and be made from quality ingredients. But these factors make only a minimal contribution to a conceptual system that would organize the vast and highly differentiated world of cuisine. Read more »

The Queer (An Explanation)

by Ethan Seavey

Photo taken by the author. Artist unknown.

On the night that I first told someone else that I was gay, the world was held together with a single phrase which was echoed from speaker to listener and back again. My friend and I were both queer but I was the first one saying it. So for us, it was healing to say and to hear: “It doesn’t mean anything. I (or you) won’t let it define me (or you).” We repeated this wish, that someone’s sexuality could be considered separate from their social identity. I decided to trust that I could be gay, but not this or that kind of gay, just myself. I used the phrase like a prayer and begged that I could exist in a straight-minded Catholic world. It worked for a few months. Then I started to tell more people. Two more friends and then my parents. Siblings and more friends. And finally, everyone. Most everyone was kind and most everyone said it didn’t change a thing. But it changed many things. I finally accepted that I was the Queer.

The Queer is a lonely identity. They are raised believing that they are and can be the same as everyone else, but as they grow up, they realize that they are The Queer in a space that is not made for them. They are The Only Queer in a class of three hundred, but they heard there’s another Queer in the grade above. They don’t speak to one another because they are afraid to be seen associating. They realize that they will never have the lives of their parents.

The Queer must reckon with the close-minded Gods that dominate the globe. They are born into a world that is not made for them and given an identity that defines them forever. This world is ruled by a strong God in the clouds who supports the structure that hurts them. With tall walls He puts each person in a room and investigates them. He studies the outliers; not to know them, but to find out ways to make them fit in. Read more »

Monday, May 8, 2023

Oppenheimer I: “An unctuous, repulsively good little boy”

by Ashutosh Jogalekar

“Oppenheimer, Julius Robert”, by David A. Wargowski, December 7, 2018

This is the first of a series of short pieces on J. Robert Oppenheimer. The others can all be found here. Popular interest in Oppenheimer’s life seems set to peak this year with the upcoming release of Christopher Nolan’s mainstream film, “Oppenheimer”. Several books about Oppenheimer – and even a popular opera – have come out just in the last two decades. Analogies between Oppenheimer and nuclear weapons on the one hand and new technologies like AI and gene editing on the other are frequently drawn, sometimes incisively and often misleadingly. Why does this man continue to inspire so much interest? And why now?

My goal is to provide readers who might not want to read a full biography of Oppenheimer with some of the highlights of his life that address these questions and put them in context. Needless to say, this series – which is essentially chronological – is not supposed to be an exhaustive biography and is biased by what I personally think was most interesting about this brilliant, fascinating, complicated man’s life and times.

The physicist Isidor Rabi – a friend who perhaps knew Robert Oppenheimer better than he knew himself – once said that Oppenheimer was a man composed of many shining splinters. That succinct assessment captures the central dilemma of Oppenheimer’s life: identity. It was a dilemma that made him who he was and one that contributed to the myriad problems he faced in his life. I also believe that it was this dilemma that makes him a fascinating man of enduring interest, more interest than many of his contemporaries who were far better-known scientists.

To understand this dilemma it’s worth starting, as it so often is, with Oppenheimer’s childhood. The affluent household where Oppenheimer was born and grew up was, in the words of a biographer, “like Ibsen’s Rosmersholm, that aristocratic estate where voices and passions were always subdued, and where children never cried – and when they grew up never laughed.” Read more »

The Ghost in the Machine (Part I): Emergence and Intelligence in Large Language Models

by Ali Minai

Part II of this article can now be read here.

One of the most interesting debates within the larger discussion around large language models (LLMs) such as GPT-4 is whether they are just mindless generators of plausible text derived from their training – sometimes termed “stochastic parrots” – or systems with a spark of real intelligence and “emergent” abilities. In this article, I will try to look at these issues from a complex systems perspective. While the focus will be mainly on LLMs such as the GPT systems, I will begin by laying out a more general conceptual framework.

Complexity and Complex Systems

Complex systems are defined as systems that consist of a large number of nonlinearly interacting components with the ability to generate large-scale organization without explicit design or control. This self-organization is the essential feature that distinguishes these systems from other complicated but specifically designed systems such as computers and aircraft. Complex systems are common in the natural world, including everything from galaxies and planets to forest fires and hurricanes, but the most profound examples occur in the domain of life. All living systems from bacteria to humans are complex systems. In more complex organisms, their subsystems are also complex systems, e.g., the brain and the immune system in humans. Ecosystems and ecologies too are complex systems, as are collective systems of living agents from slime molds and insect colonies to human societies and economies. In addition to self-organization, some complex systems – and, in particular, those involving living agents – also have the property of adaptivity, i.e., they can change in response to their environment in ways that enhance their productivity. Crucially, this adaptation too is not controlled explicitly by an external agency but occurs within the system through its interactions with the environment and the consequences of these interactions. These systems are called complex adaptive systems. An example of this is evolving species in changing ecosystems, but one that is more pertinent to the current discussion is the nervous system of complex animals such as humans. This system is embedded in another complex system – the rest of the animal’s body – and that, in turn, is embedded in the complex system that is the rest of the world.

Complexity in the sense defined above has several profound implications. One of these is that a complex system’s behavior is inherently impossible to predict by reductionistic causal analysis, and thus impossible to control by any top-down mechanism. This is because almost all large-scale phenomena – attributes, structures, processes, functions – in the system arise bottom-up from the interaction of a very large number of components – often billions or more, as in the cells of the brain – and can neither be reduced to nor described by the behavior of individual components. This property is called emergence. Read more »
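How little machinery self-organization needs can be shown with a toy example – my own sketch, not one from the article. The program below runs Rule 110, an elementary cellular automaton in which every cell updates from nothing but its own state and those of its two immediate neighbors; the persistent, interacting large-scale structures that appear in the printout are described nowhere in the eight-entry local rule.

```python
# Illustrative sketch of emergence: Rule 110, a one-dimensional cellular
# automaton. Each cell sees only itself and its two neighbors, yet the
# grid as a whole develops persistent, interacting structures.
RULE = 110  # the eight-entry local update table, packed into one byte

def step(cells):
    """Apply the local rule to every cell (wrapping at the edges)."""
    n = len(cells)
    new = []
    for i in range(n):
        # Encode the 3-cell neighborhood as a number from 0 to 7...
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # ...and look up the cell's next state in that bit of RULE.
        new.append((RULE >> idx) & 1)
    return new

cells = [0] * 79 + [1]  # start from a single live cell
for _ in range(40):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

No line of the program mentions the triangles and glider-like patterns the output displays, yet they reliably appear – and Rule 110 is even known to be computationally universal. That is emergence in miniature: global behavior that can neither be reduced to nor read off from the behavior of individual components.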

Sweet Truth

by Deanna K. Kreisel (Doctor Waffle Blog)

The first time I came across the “candy bar interiors” quiz, I was not disturbed by how many I got wrong, but rather how many I got right. While a few of the pictured confections were alien to me (Zagnut? who the hell eats a Zagnut? did Charlie Chaplin enjoy one in Modern Times?), I was intimately familiar with enough of the images that I could extrapolate the rest.

I just picked the weirdest and creepiest looking option as the Zagnut. I was correct.

I have been unhealthily obsessed with candy bars for as long as I can remember. My obsession is deep, tender, and gently festering, and I feel ambivalent about stirring up the dead leaves at the bottom of that pool. Even though I have written essays about my consistently abusive mother, my intermittently abusive father, my history of panic disorder, and many other delicate topics, an exploration of my feelings about candy bars feels like the most difficult thing I have ever attempted.

The outside layer of a Zagnut is coconut. Who does that?

My parents were largely uninterested in their children, but every once in a while they made a symbolic stab at parenting by doing something like restricting our food choices or giving us a curfew. They forbade sugary breakfast cereals, soda, candy, and chips of any kind—thus setting us up for an enduring obsession with junk food that I carry with me to this day. Even though I am now in my 50s and wholly in charge of my own snack choices, my stomach still flips over a little at the sight of a potato chip bag or a Chip Ahoy. So salty! So mouthfeely! So naughty. Read more »

Is there something fishy about radiocarbon dating?

by Paul Braterman

A map of the route taken by the Viking Great Heathen Army. Hel-hama, own work, via Wikipedia

The Vikings started out as raiders, but then, in the way of these things, ended up as rulers, and their influence stretched from Greenland to what is now Russia. They first enter English history in 793, with the sacking of the Monastery of Lindisfarne. By the late 9th century, they were colonising Iceland, and serving as mercenaries to the Emperor of Byzantium. In 862, Vikings under Rurik established themselves in Novgorod, forming the nucleus of what would become Kyivan Rus. In 885, Vikings besieged Paris, and although they were beaten back, they settled in what is now Normandy (Norman, from Northmen). In 865, the Viking Great Heathen Army arrived in England, and a year later, under Ivar the Boneless, captured York, which would remain their capital in England until the defeat of Eric Bloodaxe in 954.

The Vikings’ goal was to establish themselves as rulers over Anglo-Saxon England, divided at that time into the four kingdoms of Northumbria, East Anglia, Mercia, and Wessex, and in this they were almost successful. After establishing their kingdom in York, they swept south, taking control of East Anglia and killing its king, who had earlier provided them with horses. They then spent the next five years consolidating their hold over what had been the most powerful of the Saxon kingdoms, Mercia, stretching from the Thames to the Humber, whose king took refuge in Paris. The Anglo-Saxon Chronicle tells us that in 873-74, a date that will prove significant for us, the army spent the winter in Repton, then a town of some importance. It then divided, one part going north to consolidate control over York, while the other swept south through Mercia into Wessex, which they effectively overran over the next two years. Read more »

On the use and abuse of the term “fascism” to describe current events

by David J. Lobina

For someone who grew up in the South of Europe but has lived in the UK for the last 20 or so years, and who, moreover, is a sort-of linguist, the recent proliferation of the word “fascism” to refer to certain political events and tendencies in the English-speaking world, especially in the US, is not a little surprising. After all, the people we used to refer to as fascists when I was growing up in Italy and Spain certainly bore a resemblance to classical fascism – some were the descendants of actual fascists, in fact – whereas the guys who get called fascist all the time these days, especially in the US, are nothing like them. And in any case, it was the 1990s then and it is 2023 now: is the term “fascism” still relevant today?[i]

Who were these fascists from my youth, then? If you lived in Italy or Spain in the 1990s and were politically active, you would most certainly run into them sooner or later, and there were a couple of dates in the calendar that you needed to look out for, as neo-fascists, to employ a perhaps more appropriate nomenclature, tended to come out to commemorate events such as the so-called March on Rome, on the 28th of October, in Italy, and Francisco Franco’s death, on the 20th of November, in Spain.[ii]

As mentioned, some of these people were the descendants of real fascists, and this is perhaps clearest in the case of the Movimento Sociale Italiano, or MSI (The Italian Social Movement), a political party founded in 1946 by veterans from the so-called Republic of Salò – more properly, the Repubblica Sociale Italiana (The Italian Social Republic), a Nazi puppet state nominally led by Benito Mussolini in his nadir days – and which, in 1994, and under the name of Alleanza Nazionale (The National Alliance), entered the government of Silvio Berlusconi, the business magnate turned politician.[iii] This is to some extent also true of the many offshoots of the original Falange Española (The Spanish Falange), a party that was founded in the 1930s on the model of Mussolini’s Partito Nazionale Fascista (the National Fascist Party), the latter created in 1921. In its latest iteration, the Falange goes by the name of the Falange Española de las JONS.[iv]

The modern versions of these organisations, however, differ greatly from classical fascism, by which I mean Italian fascism from 1922 to 1943 (roughly), as well as from each other, and these differences become a chasm when it comes to US politics. And yet seemingly every other week there is an article out there about how the Republican Party is becoming a fascist party, or about how Donald Trump, Tucker Carlson, or Ron DeSantis are all fascists. Read more »

Eugenics and the Biological Justification of Economic Exploitation in Southern Italy

by Andrea Scrima (excerpt from a work-in-progress)

When they arrived in the U.S., Southern Italians brought with them the sense that they’d been branded as underdogs, that they belonged and would forever belong to a lower class, but the birth of the Italian-American gangster was rooted in attitudes toward the Mezzogiorno that dated back far earlier. After Italy was unified under Vittorio Emanuele II in 1861, a new national government imposed Piedmont’s centralized administrative system on the South, which led to violent rebellion against State authority. Politicians and intellectuals took pains to deflect responsibility for what they saw as the “barbarism” of the Mezzogiorno, and were particularly receptive to theories that placed the blame for the South’s many problems on Southern Italians’ own inborn brutishness. The decades following Unification saw the nascent fields of criminal anthropology and psychiatry establish themselves in the universities of Northern Italy; implementing the pseudosciences of phrenology and anthropometry in their search for evolutionary remnants of an arrested stage of human development manifested in the people of the Mezzogiorno, they used various instruments to measure human skulls, ears, foreheads, jaws, arms, and other body parts, catalogued these, and correlated them with undesirable behavioral characteristics, inventing in the process a Southern Italian race entirely separate from and unrelated to a superior Northern race and officially confirming the biological origins of Southern “savagery.” Read more »