What do we mean when we talk about “responsibility”? We say things like “he is a responsible parent”, “she is responsible for the safety of the passengers”, “they are responsible for the financial crisis”, and in each case the concept of “responsibility” seems to be tracking a different meaning. In the first case it seems to track virtue, in the second moral obligation, and in the third accountability. My goal in this article is not to go through each and every kind of responsibility, but rather to show that there are at least two important senses of the concept that we need to take seriously when it comes to Artificial Intelligence (AI). Importantly, it will be shown that there is an intimate link between these two types of responsibility, and it is essential that researchers and practitioners keep this in mind.
Recent work in moral philosophy has been concerned with issues of responsibility as they relate to the development, use, and impact of artificially intelligent systems. Oxford University Press recently published its first ever Handbook of Ethics of AI, which is devoted to tackling current ethical problems raised by AI and hopes to mitigate future harms by advancing appropriate mechanisms of governance for these systems. The book is wide-ranging (featuring over 40 unique chapters), insightful, and deeply disturbing. From gender bias in hiring, to racial bias in creditworthiness assessments and facial recognition software, to bias in software that claims to identify a person’s sexual orientation, we are awash with cases of AI systematically enhancing rather than reducing structural inequality.
But how exactly should (can?) we go about operationalizing an ethics of AI in a way that ensures desirable social outcomes? And how can we hold causally involved parties accountable, when the very nature of AI seems to make a mockery of the usual sense of control we deem appropriate in our ascriptions of moral responsibility? These are the two senses of responsibility I want to focus on here: how we can deploy AI responsibly, and how we can hold people responsible when things go wrong. Read more »
It is natural to assume that technological artifacts have instrumental value. That is, the value of a given technology lies in the various ways in which we can use it, no more and no less. For example, the value of a hammer lies in our ability to make use of it to hit nails into things. Cars are valuable insofar as we can use them to get from A to B with the bare minimum of physical exertion. This way of viewing technology has immense intuitive appeal, but I think it is ultimately unconvincing. More specifically, I want to argue that technological artifacts are capable of embodying value. Some argue that this value is to be accounted for in terms of the designed properties of the artifact, but I will take a different approach. I will suggest that artifacts can come to embody values based on their affordances.
Before doing so, however, I need to convince you that the instrumental view of technology is wrong. While some technological artifacts are perhaps merely instrumentally valuable, there are others that are clearly not so. There are two ways to see this. First, just reflect on all the ways in which technologies are not simply tools waiting to be used by us but are rather mediators in our experience of reality. Technological artifacts are no longer simply “out there” waiting to be used but are rather part of who we are (or at least, who we are becoming). Wearable technology (such as fitness trackers or smart watches) provides us with a stream of biometric information. This information changes the way in which we experience ourselves and the world around us. Bombarded with this information, we might use such technology to peer pressure ourselves into exercising (Apple allows you to get updates, beamed directly to your watch, of when your friends exercise. It is an open question whether this will encourage resentment from those who see their friends have run a marathon while they spent the day on the couch eating Ritter Sport.), or we might use it to stay up to date with the latest news (by enabling smart notifications). In either case, the point is that these technologies do not merely disclose the world “as it is” to us, but rather open up new aspects of the world, and thus come to mediate our experiences. Read more »
For many wine lovers, understanding wine is hard work. We study maps of wine regions and their climates, learn about grape varietals and their characteristics, and delve into various techniques for making wine, trying to understand their influence on the final product. Then we learn a complex and arcane vocabulary for describing what we’re tasting and go to the trouble of decanting, choosing the right glass, and organizing a tasting procedure, all before getting down to the business of tasting. This business of tasting is also difficult. We sip, swish, and spit, trying to extract every nuance of the wine, and then puzzle over the whys and wherefores, all while comparing what we drink to other similar wines. Some of us even take copious notes to help us remember, for future reference, what this tasting experience was like.
In the meantime, we argue with each other on Twitter, fighting over whether a wine is terroir-driven or a technological abomination, typical or atypical, over-oaked or underripe. We scour Wine Spectator’s Annual Top 100 looking for who’s up and who’s down and complain about inflated wine scores and overblown wine language.
In other words, we really seem to care about getting it right, identifying a wine’s essence and properly locating it in the wine firmament. We want our judgments to conform to the actual properties of a wine and its relations. Read more »
One problem plaguing contemporary anti-Cartesians (pragmatists, Wittgensteinians, hermeneutic philosophers, etc.) is that it can seem that we are competing against each other, trying to do better than everyone else what we all want to do: get past the dualisms and other infelicities of the modern picture while at the same time absorbing its lessons and retaining its good aspects. We waste our time fighting each other instead of our common enemy. Why is it so hard to see ourselves as all on the same team?
One reason is that when push comes to shove, or even before that, we simply follow traditional philosophical practice by providing arguments to show that we are right and they are wrong, thus construing the differences among our views as constituting differences in belief rather than, for example, the practical differences between different tools or perspectives. It is as if we have internalized the traditional criticisms: that we have abandoned objective truth and the objective world it represents in favor of our own subjective purposes. No, we say, watch us talk among ourselves! We care about truth just as much as you! Phenomenology is false and pragmatism is true, as my fully rigorous and entirely professional argument shows! Assent is required, on pain of irrationality!
Even when we’re not fighting among ourselves in this way, that same metaphilosophical ideal can still cause trouble. For instance, I have chosen to present my particular brand of anti-Cartesianism as a characteristically pragmatist philosophy. Naturally I draw inspiration and/or ideas from philosophers who do not identify as pragmatists (after all, we all reject the Cartesian mirror of nature). But in practice this can lead to some discomfort. If while pushing a pragmatist line I help myself to a Wittgensteinian (or Davidsonian or Nietzschean) insight, the question will naturally arise: what entitles me to enlist these people in my cause? Am I saying Wittgenstein or Davidson was a pragmatist? What should I make of the differences between these very different philosophers? Read more »
It is becoming increasingly common to talk about technological systems in agential terms. We routinely hear about facial recognition algorithms that can identify individuals, large language models (such as GPT-3) that can produce text, and self-driving cars that can, well, drive. Recently, Forbes magazine even awarded GPT-3 “person” of the year for 2020. In this piece I’d like to take some time to reflect on GPT-3. Specifically, I’d like to push back against the narrative that GPT-3 somehow ushers in a new age of artificial intelligence.
GPT-3 (Generative Pre-trained Transformer 3) is a third-generation, autoregressive language model. It uses deep learning to produce human-like text: fed an initial “prompt” (a sequence of words, code, or other data), it aims to complete that sequence. The model was trained on Microsoft’s Azure supercomputer using unlabeled datasets (such as Wikipedia), and it has 175 billion parameters (its predecessor had a mere 1.5 billion). This training isn’t cheap, with a price tag of $12 million. Once trained, the system can be used in a wide array of contexts: language translation, summarization, question answering, and more.
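To make “autoregressive” concrete, here is a minimal sketch of the generation loop (a toy construction of my own, not OpenAI’s code): score every candidate next token given the tokens produced so far, sample one, append it, and repeat. The vocabulary and scoring function below are invented stand-ins; in GPT-3 the probabilities come from the trained 175-billion-parameter network.

```python
import random

# Toy illustration of autoregressive generation: at each step, assign a
# probability to every candidate next token given the tokens produced so
# far, sample one, append it, and repeat. GPT-3's loop has this same
# shape, but its probabilities come from a huge trained neural network
# rather than the hand-rolled stand-in below.

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    """Stand-in for a trained model: returns a probability for each token
    in the vocabulary, conditioned on the context so far."""
    # A real model computes these from learned parameters; this toy
    # version simply down-weights tokens that have already appeared.
    weights = [0.2 if tok in context else 1.0 for tok in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, max_new_tokens=5):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        tokens.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return " ".join(tokens)

print(generate("the cat"))  # e.g. "the cat sat on mat ."
```

Note that nothing in this loop requires understanding in any rich sense: the model only ever estimates which token is likely to come next, a point that will matter below.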
Most of you will recall the fanfare that surrounded The Guardian’s publication of an article written by GPT-3. Many people were astounded at the text that was produced, and indeed, this speaks to the remarkable effectiveness of this particular computational system (or perhaps it speaks more to our willingness to project understanding where there might be none, but more on this later). How GPT-3 produced this particular text is relatively simple. Basically, it takes in a query and then attempts to offer relevant answers, using the massive amounts of data at its disposal. How different this is, in kind, from what Google’s search engine does is debatable. In the case of Google, you wouldn’t think that it “understands” your searches. With GPT-3, however, people seemed to get the impression that it really did understand the queries, and that its answers, therefore, were a result of this supposed understanding. This of course lends far more credence to its responses, as it is natural to think that someone who understands a given topic is better placed to answer questions about that topic. To believe this in the case of GPT-3 is not just bad science fiction, it’s pure fantasy. Let me elaborate. Read more »
Epistenology: Wine as Experience is a peculiar name for a peculiar book, although its peculiarities make it worth reading. Coined by the author, Nicola Perullo, Professor of Aesthetics at the University of Gastronomic Sciences near Bra, Italy, the term “Epistenology” is a portmanteau blending enology, the study of wine, with epistemology, the philosophical study of knowledge. The book is hard to categorize, which is precisely its point. Although a philosophy book about wine, it is not so much about wine as it is an attempt to think with wine, using wine as a catalyst for making connections to persons, atmospheres, and imaginative play within pregnant moments of immediate, lived experience. Although a serious work of philosophy, it only occasionally names other philosophers and refers to no previous work in the philosophy of wine or aesthetics, while advancing an intriguing alternative to professional wine evaluation and conventional wine education. It is avowedly a narrative of the author’s personal journey with wine and the lessons to be drawn from it. Derrida’s idea that every philosophy is a way of “justifying our lives in the world” is the book’s guiding light. Read more »
I have been thinking a lot about change recently. 2020 seemed like a good year to do this, for several reasons. There was the political turmoil in the United States, where I live. There was the global pandemic. There was the birth of our daughter. There were a few projects I worked on related to long-term change on evolutionary timescales. All of these issues gave me the opportunity to think about change and some of the paradoxes associated with it. Everybody defines change in their own way, and some changes may be more important to some of us than to others, so how we react to, adapt to, and enable change is ultimately very subjective. And yet we all have to deal with some very objective measures of change, at the very least those pertaining to life and death. So the paradox of change is that while it impacts us on a very subjective, personal level and each of us perceives it very differently, on another level it also unites us because of its universal aspects, aspects that can help us define our common humanity.
There was of course the pandemic that forced great changes. A way of life which we took for granted was suddenly and irrevocably changed. Careers and lives ended, we hunkered down in our homes, stopped traveling and started looking inward. For some of us who had been caught up in immediate matters of family, the pandemic even came as a welcome respite in which we got to spend more time with our significant others and children. We stepped back and reevaluated our life on the treadmill. For others, it posed a constant challenge to get work done, especially with kids whose schools were closed. For my wife and me, the pandemic was a chance to spend more time with our newborn daughter and avoid the stress and boredom of the commute and of physical meetings in the office. What can be unwelcome change for one can be unexpectedly welcome for another. In this particular case we were privileged, but the tables could well be turned. Read more »
In the media it is relatively easy to find examples of new technologies that are going to “revolutionize” this or that industry. Self-driving cars will change the way we travel and mitigate climate change, genetic engineering will allow for designer babies and prevent disease, superintelligent AI will turn the earth into an intergalactic human zoo. As a reader, you might be forgiven for being in a constant state of bewilderment as to why we do not currently live in a communist utopia (or why we are not already in cages). We are incessantly badgered with lists of innovative technologies that are going to uproot the way we live, and the narrative behind these innovations is overwhelmingly positive (call this a “pro-innovation bias”). What is often missing in such “debates”, however, is a critical voice. There is a sense in which we treat “innovation” as a good in itself, but it is important that we innovate responsibly. Or so I will argue. Read more »
Philosophy has been an ongoing enterprise for at least 2500 years in what we now call the West and has even more ancient roots in Asia. But until the mid-2000s you would never have encountered something called “the philosophy of wine.” Over the past 15 years there have been several monographs and a few anthologies devoted to the topic, although it is hardly a central topic in philosophy. About such a discourse, one might legitimately ask why philosophers should be discussing wine at all, and why anyone interested in wine should pay heed to what philosophers have to say.
This philosophical discourse about wine did not emerge in a vacuum. Prior to the mid-20th century, one would never have encountered “philosophy of economics,” “philosophy of law,” “philosophy of science,” “philosophy of social science,” or the “philosophy of art” either, each of which has become a standard part of the philosophical canon. Philosophers have always had much to say about these practices but not as organized into discrete sub-disciplines with their own subject matters.
The assumption behind the emergence of these sub-disciplines is that the study of philosophy brings something to them—particular skills or insights—that immersion in the disciplines themselves would struggle to supply. Thus, in trying to get clear on what the philosophy of wine can contribute to the community of wine lovers, we quickly run up against the question of what distinctive skills or insights characterize philosophy. Read more »
If one enters the name “Ellen Page” into the search box at en.wikipedia.org, it redirects to an entry entitled “Elliot Page” (and informs you that it has done this). This is because on December 1, 2020, as the entry itself tells us in the section marked “Personal life,” that person, an accomplished and popular actor nominated for the 2007 Best Actress Oscar for their performance in the film Juno, “came out as transgender on his social media accounts, revealed his pronouns as he/him and they/them, and revealed his new name, Elliot Page.” Naturally this led to a flurry of activity on the interwebs, much of it, not surprisingly, about gender politics. This post, however, will not be about that, except incidentally; instead, it concerns the much sexier topic of the semantics and metaphysics of naming, and will most likely (you have been warned) finish up with lengthy citations from the relevant sections of Philosophical Investigations.
My immediate reaction, that is, when I heard this, was to wonder whether and in what sense “Ellen Page” is still a referring expression, and who gets to decide this, and on what grounds. Naturally Elliot himself has a unique and in some ways authoritative perspective on this, but a) he’s only one of an entire community of English speakers; and b) if he wants to give us a theory of the reference of proper names he’s entirely welcome to do so, but in that context his own perspective, as in (a), is, I think, less authoritative than on the rather narrower question of what he should now be called, which I grant is up to him.
So I’m thinking about sentences like
1) The star of Juno is Ellen Page.
2) Ellen Page is the star of Juno.
3) The first-billed actor in the credits of season 1 of The Umbrella Academy is Ellen Page.
Are these true? False? Nonsensical? Rude to Elliot but otherwise okay? What do they mean? Did they change their meanings on December 1, 2020? What else might we say about them? Read more »
In my first column for 3 Quarks Daily I wrote that we are still fighting both the Civil War and WWII. As Henry Louis Gates Jr. puts it: “two hideous demons slumber under the floorboards of Western culture: anti-Semitism and anti-Black racism.” We have learned that any steps forward will be met with enormous resistance and backwards pressure. Gates quotes Ernst Cassirer: “every developmental step [of modern societies] can be reversed.” We saw this clearly post-Reconstruction, when everything possible was done to limit the lives of Black people, and again following the two terms of the first Black president, when Americans chose an openly racist birther backed by the Ku Klux Klan and Neo-Nazis. There was a similar leap backwards for European Jews, who before the Second World War believed they had successfully assimilated into secular society.
Any reader of history cannot escape the echoes, back and forth, of racism, white nationalism, German and American ideas of purity. For example, jazz was reviled by the Nazis, and listening to it was a crime. Americans loved jazz, but as late as the 1950s Lena Horne couldn’t go into the dining room in the Sahara Hotel in Las Vegas. Black musicians had to reach and leave the stage through a separate enclosed corridor. Artie Shaw, who by all accounts was not at all racist and performed early on with Billie Holiday when most musical groups were segregated, at the same time hid the fact that he was Jewish. Ava Gardner reports that he sat silent at a table of bigwigs making antisemitic comments and even joined in rather than speak up and give himself away.
Pressure Point, a film made in 1962 and directed by Hubert Cornfield, is a mostly unknown but quite brilliant dissection of both race and Nazism in America. Sidney Poitier portrays a psychiatrist who (in a flashback) has been given a job in 1942 in a federal penitentiary. He has purposely been assigned a patient who is openly a white supremacist, played vividly by Bobby Darin. (Neither character is named in the film so I will refer to the roles by the names of the actors.) Read more »
“How do you get a philosophy major away from your front door? You pay them for the pizza.”
As a doctoral candidate in philosophy, I am often asked what I am going to “do” with my degree. That is, how will I get a job and be a good, productive little bourgeois worker? How will I contribute to society, and how will my degree (which of course was spent thinking about the meaning of “meaning”, whether reality is real, and how rigid designation works) benefit anybody? I have heard many variations on the theme of the apparent uselessness of philosophy. Now, I think philosophy has a great many uses, both in itself and pragmatically. Of concern here, however, is whether not just philosophy, but education in general, might be (mostly) useless.
If you are like me, then you think education matters. Education is important, should be funded and encouraged, and it generally improves the well-being of individuals, communities, and countries. It is with this preconception that I went head-first into Bryan Caplan’s well-written (and often wonderfully irreverent) The Case Against Education, where he argues that we waste trillions in taxpayer revenue by throwing it at our mostly inefficient education system. Caplan does not take issue with education as such, but rather with the very specific form that education has taken in the 21st century. Who hasn’t sat bored in a class and wondered whether circle geometry would have any bearing on one’s employability?
As the title suggests, this is not a book that is kind in its assessment of the current state of education. While standard theory in labour economics argues that education has large positive effects on human capital, Caplan claims that its effect is meagre. In contrast to “human capital purists”, Caplan argues that the function of education is to signal three things: intelligence, conscientiousness, and conformity. Education does not develop students’ skills to a great degree, but rather seeks to magnify their ability to signal the aforementioned traits to potential employers effectively. Read more »
Before the COVID pandemic, travel to academic conferences and colloquia was a large part of the job of being a professor at a research-focused university. The last few months have given us the opportunity to reflect on the hurly-burly of academic travel. We’ve keenly missed many things about those in-person events. Yet there are things we don’t miss very much at all. While academic conferences are still paused, we wanted to make a note of what’s worth our time and what’s not, and then make some resolutions about what we can do better.
The bloom of online conferences since last Spring provides a key point of comparison. The online conference has many of the same problems that beset the in-person conference: the schedules are overfull with interesting papers at conflicting times, presenters go over their allotted times and thereby leave no time for discussion, and the Q&A sessions tend to go off the rails with people asking questions that have more to do with their own views than with the presentation. But we were still pleased that the move online allowed younger scholars the opportunity to shine and get uptake with their work. And we were able still to hear a few presentations that provided some real insight. In these respects, online conferences are much like their in-person counterparts.
But there are differences. A unique feature of in-person conferences lies in the unplanned sociality that they make possible. The in-person setting allows for the possibility of passing some luminary in the hall between sessions, or meeting someone whose work you just read. In fact, it’s a piece of unacknowledged common wisdom that the true value of in-person conferences lies in unstructured time when one is not attending sessions. Read more »
COVID-19 has forced populations into lockdown, seen the restriction of rights, and caused widespread economic, social, and psychological harm. With only 11 countries having no confirmed cases of COVID-19 (as of this writing), we are globally beyond strategies that aim solely at containment. Most resources are now being directed at mitigation strategies. That is, strategies that aim to curtail how quickly the virus spreads. These strategies (such as physical and social distancing, increased hand-washing, mask-wearing, and proper respiratory etiquette) have been effective in delaying infection rates, and therefore reducing strain on healthcare workers and facilities. There has also been a wave of techno-solutionism (not unusual in times of crisis), which often comes with the unjustified belief that technological solutions provide the best (and sometimes only) ways to deal with the crisis in question.
Such perspectives, in the words of Michael Klenk, ask “what technology”, instead of asking “why technology”, and therefore run the risk of creating more problems than they solve. Klenk argues that such a focus is too narrow: it starts with the presumption that there should be technological solutions to our problems, and then stamps some ethics on afterwards to try and constrain problematic developments that may occur with the technology. This gets things exactly backwards. What we should instead be doing is asking whether we need a given technology, and then proceed from there. It is with this critical perspective in mind that I will investigate a new technological kid on the block: digital contact tracing. Basically, its implementation involves installing a smartphone app that, via Bluetooth, registers and stores the individual’s contacts. Should a user become infected, they can update their app with this information, which will then automatically ping all of their registered contacts. While much attention has been focused on privacy and trust concerns, this will not be my focus (see here for a good example of an analysis that looks specifically at these factors, drawn up by the team at Ethical Intelligence). I will instead focus on the question of whether digital contact tracing is fair. Read more »
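To fix ideas, here is a deliberately simplified sketch of the contact-tracing flow just described, assuming a single shared registry; all of the names and data structures are my own illustrations, and real protocols (especially the decentralized, privacy-preserving designs that exchange rotating anonymous Bluetooth identifiers) differ in important details.

```python
from collections import defaultdict

# A toy model of digital contact tracing, under the simplifying
# assumption of one shared registry: each app logs who it has been near
# via Bluetooth, and an infection report triggers a notification to
# everyone who logged contact with the reporter.

contact_log = defaultdict(set)  # user_id -> set of user_ids encountered

def register_contact(a: str, b: str) -> None:
    """Record that two phones detected each other over Bluetooth."""
    contact_log[a].add(b)
    contact_log[b].add(a)

def report_infection(user: str) -> list[str]:
    """Return the contacts to notify when `user` reports an infection."""
    return sorted(contact_log[user])

register_contact("alice", "bob")
register_contact("alice", "carol")
print(report_infection("alice"))  # ['bob', 'carol'] get pinged
```

The usage at the bottom shows the whole cycle the article describes: encounters are registered as they happen, and a single infection report automatically pings every stored contact.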
Last time, in part 1, I distinguished two strategies for combating philosophical modernism of a certain dated kind: a pluralistic post-empiricism (the exact nature of which I left open for now), and a more narrowly focused post-phenomenological approach which regards the former (and/or its main components) as merely another form of the supposedly mutually rejected picture. In sections I and II, I discussed Charles Taylor’s and Hubert Dreyfus’s phenomenological criticisms of Richard Rorty and John McDowell; today I continue with a look at Taylor’s analogous criticism of Donald Davidson. As before, the point is not to reject phenomenological approaches, but instead merely to understand why Davidson looks to Taylor even less like an anti-Cartesian ally than do Rorty and McDowell, and thus why Taylor will not be impressed by a pragmatist strategy of multiple philosophical tools in which Davidsonian semantics plays a major role. Let me also say that in reading a lot of Taylor’s work recently, I was quite impressed with the scope and rigor of his overall project, and I think that what I present as his drastic misreading of Davidson’s philosophy can most likely be detached and discarded without threatening that project. Or so it seems to me at present. Read more »
A life in which the pleasures of food and drink are not important is missing a crucial dimension of a good life. Food and drink are a constant presence in our lives. They can be a constant source of pleasure if we nurture our connection to them and don’t take them for granted.
Because food and drink are an easily accessible source of pleasure, barring poverty or disease, to care little for them is a moral failure with consequences not only for the self but for others around us. However, to nurture that connection to everyday pleasure requires thought and restraint. Pleasure can be dangerous when pursued without reason and self-control. Addictive pleasures damage us and everyone around us. Addicts, in fact, cannot feel pleasure as readily as the non-addicted and require increasing levels of stimulation to find satisfaction. Addictions and compulsions are pathological and are no model for the genuine pursuit of pleasure. Thus, we need to make a distinction between pleasure that we get from thoughtless, compulsive consumption, and pleasure that is freely chosen. Pleasure freely chosen is actually a good guide to what is good for us and what should matter to us.
This emphasis on freely chosen pleasure is important not only for keeping us healthy but because certain kinds of pleasures are deeply connected to our sense of control and independence. Some of the pleasures in life come from the satisfaction of needs. When we are cold, warm air feels good. When we are hungry even very ordinary food will taste good. But such enjoyment tends to be unfocused and passive. We don’t have to bring our attention or knowledge to the table to enjoy experiences that satisfy basic needs. We are hard-wired to care about them and our response is compelled.
However, many pleasures are not a response to need or deprivation. We have to eat several times a day, but we don’t have to eat well several times a day. Pleasure freely chosen is essential to a good life because it expresses our independence from need. Read more »
By the beginning of the 20th century, it had become clear to an influential minority of philosophers that something was badly amiss with modern philosophy. (There had been gripes of innumerable sorts since the beginning of modernity in the 17th century; but our subject today is the present.) “Modern” here means something like “Lockean and/or Cartesian,” where this means … well, it’s not immediately clear what exactly this means, nor what exactly is wrong with it, and therein lies the tale of a good deal of 20th-century philosophy. As with every broken thing, we have two choices: fix it, or throw it out and get a new one; and many philosophers have advertised their projects as doing one or the other. However, as we might expect, unclarity about the old results in corresponding unclarity about the supposedly better new. What’s the actual difference, philosophically speaking, between rehabilitation and replacement?
Let’s start with what two important groups of contemporary anti-modern philosophers (again, let’s leave pre-moderns out of it for today) say about what they’re doing. We can all agree that (in Wittgenstein’s words, but quoted by all and sundry) “a picture held us captive,” and even, in his continuation, that the way it did this was that “it lay in our language and language seemed to repeat it to us endlessly.” That is, it’s not simply a philosophical theory, the conclusion of an argument we have come to regard as unsound. Even in such relatively straightforward cases, of course, there may be plenty of disagreement about how to continue; but here part of our task is not simply to outline a better view, but also to diagnose and escape this characteristic feature of the old one. Such a treatment would explain how such captivity was possible, and how our very language could turn against us, as well as (naturally) what to do about it. Read more »
Human beings are agents. I take it that this claim is uncontroversial. Agents are that class of entities capable of performing actions. A rock is not an agent; a dog might be. We are agents in the sense that we can perform actions, not out of necessity, but for reasons. These actions are to be distinguished from mere doings: animals, or perhaps even plants, may behave in this or that way by doing things, but strictly speaking, we do not say that they act.
It is often argued that action should be cashed out in intentional terms. Our beliefs, what we desire, and our ability to reason about these are all seemingly essential properties that we might cite when attempting to figure out what makes our kind of agency (and the actions that follow from it) distinct from the rest of the natural world. For a state to be intentional in this sense it should be about or directed towards something other than itself. For an agent to be a moral agent it must be able to do wrong, and perhaps be morally responsible for its actions (I will not elaborate on the exact relationship between being a moral agent and moral responsibility, but there is considerable nuance in how exactly these concepts relate to each other).
In the debate surrounding the potential of Artificial Moral Agency (AMA), this “Standard View” presented above is often a point of contention. The ubiquity of artificial systems in our lives can often lead us to believe that these systems are merely passive instruments. However, this is not necessarily the case. It is becoming increasingly clear that intuitively “passive” systems, such as recommender algorithms (or even email filter bots), are very receptive to inputs (often by design). Specifically, such systems respond to certain inputs (user search history, etc.) in order to produce an output (a recommendation, etc.). The question that emerges is whether such kinds of “outputs” might be conceived of as “actions”. Moreover, what if such outputs have moral consequences? Might these artificial systems be considered moral agents? This is not necessarily to claim that recommender systems such as YouTube’s are in fact (moral) agents, but rather to think through whether this might be possible (now or in the future). Read more »
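As a toy illustration of that input-to-output mapping (and nothing more), here is a sketch in which a recommender turns a user’s history into a suggestion. The catalog, tags, and scoring rule are invented for the example; real systems such as YouTube’s rely on learned models over far richer signals.

```python
# Toy recommender: score each unseen item in a catalog by how many tags
# it shares with the items in the user's history, and return the best
# match. Purely illustrative, not any real system's algorithm.

def recommend(history: list[str], catalog: dict[str, set[str]]) -> str:
    """Suggest the unseen item whose tags best overlap the user's history."""
    watched_tags = {tag for item in history for tag in catalog.get(item, set())}
    unseen = {item: tags for item, tags in catalog.items() if item not in history}
    return max(unseen, key=lambda item: len(unseen[item] & watched_tags))

catalog = {
    "guitar lesson": {"music"},
    "metal concert": {"music", "loud"},
    "knife sharpening": {"tools"},
}
print(recommend(["guitar lesson"], catalog))  # -> "metal concert"
```

Whether returning “metal concert” here counts as an “action” in the philosopher’s sense, rather than a mere doing, is exactly the question at issue.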
I often hear it said that, despite all the stories about family and cultural traditions, winemaking ideologies, and paeans to terroir, what matters is what’s in the glass. If a wine has flavor, it’s good. Nothing else matters. And, of course, the whole idea of wine scores reflects the idea that there is a single scale of deliciousness that defines wine quality.
For many people who drink wine as a commodity beverage, I suppose the platitude “it’s only what’s in the glass that matters” is true. But many of the people who talk this way are wine lovers and connoisseurs. For many of them, there is something self-deceptive about this exclusive focus on what is in the glass. Although flavor surely matters, it is not all that matters, and these stories, traditions, and ideologies are central to genuine wine appreciation.
Burnham and Skilleås, in their book The Aesthetics of Wine, engage in a thought experiment that shows the questionable nature of “it’s only what’s in the glass that matters”. They ask us to imagine a scenario in 2030 in which wine science has advanced to such a point that any wine can be thoroughly analyzed, not only into its constituent chemical components (which we can already do up to a point), but with regard to a wine’s full development as well.
In 2019 Buckey Wolf, a 26-year-old man from Seattle, stabbed his brother in the head with a four-foot-long sword. He then called the police on himself, admitting his guilt. Another tragic case of mindless violence? Not quite, as there is far more going on in the case of Buckey Wolf: he committed murder because he believed his brother was turning into a lizard. Specifically, a kind of shape-shifting reptile that lives among us and controls world events. If this sounds fabricated, it’s unfortunately not. Over 12 million Americans believe (“know”) that such lizard people exist, and that they are to be found at the highest levels of government, controlling the world economy for their own cold-blooded interests. This reptilian conspiracy theory was first made popular by the well-known charlatan David Icke.
What emerged from further investigation into the Wolf murder case was an interesting trend in his YouTube “likes” over the years. Here it was noted that his interests shifted from music to martial arts, fitness, media criticism, firearms and other weapons, and video games. From here it seems Wolf was thrown into the world of alt-right political content.
In a recent paper Alfano et al. study whether YouTube’s recommender system may be responsible for such epistemically retrograde ideation. Perhaps the first case of murder by algorithm? Well, not quite.
In their paper, the authors aim to discern whether technological scaffolding was at least partially involved in Wolf’s atypical cognition. They make use of a theoretical framework known as technological seduction, whereby technological systems try to read users’ minds and predict what they want. In these scenarios, such as when Google uses predictive text, we as users are “seduced” into believing that Google knows our thoughts, especially when we end up following the recommendations of such systems. Read more »
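As a rough sketch of that dynamic (a toy of my own, not how Google’s systems actually work), consider a predictor that suggests the most popular past query matching what the user has typed, where each accepted suggestion makes the same completion more likely to be offered next time:

```python
from collections import Counter

# Toy model of "seduction": suggest the most frequent past query that
# matches the typed prefix; every accepted suggestion is fed back into
# the counts, reinforcing itself. All data here is made up.

past_queries = Counter({
    "are lizards reptiles": 5,
    "are lizards real": 3,
})

def suggest(prefix: str) -> str:
    """Return the most frequent past query starting with `prefix`."""
    matches = {q: n for q, n in past_queries.items() if q.startswith(prefix)}
    return max(matches, key=matches.get) if matches else prefix

completed = suggest("are lizards")  # the user sees and accepts this suggestion
past_queries[completed] += 1        # acceptance reinforces the suggestion
print(completed)
```

The feedback loop is the “seduction”: the more often users accept what is offered, the more confidently the system appears to have read their minds.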