Why even a “superhuman AI” won’t destroy humanity

by Ashutosh Jogalekar


AGI is in the air. Some think it's right around the corner. Others think it will take a few more decades. Almost all who talk about it agree that it refers to a superhuman AI which embodies all the unique qualities of human beings, only multiplied a thousandfold. Who are the most interesting people writing and thinking about AGI whose words we should heed? Although leaders of companies like OpenAI and Anthropic suck up airtime, I would put much more currency in the words of Kevin Kelly, a superb philosopher of technology who has been writing about AI and related topics for decades; among his other accomplishments, he is the founding editor of Wired magazine, and his book "Out of Control" was one of the key inspirations for the Matrix movies. A few years ago he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will "take over humanity" are overblown. He casts these reasons in the form of misconceptions about AI which he then proceeds to question and dismantle. The whole piece deserves to be dusted off and is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is “general purpose”.

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when AGI proponents hold forth they are talking about some kind of overarching single intelligence that's good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in "AI did X, so imagine what it would be like when it could do Y"; lost is the realization that X and Y could refer to very different dimensions of intelligence, or significantly different ones in any case. As Kelly succinctly puts it, "Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions." Even humans are not good at optimizing along every single one of these dimensions, so it's unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general omniscient God-like entity that's good at everything. Some tasks that humans do will indeed be replaced by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems. Read more »

Tuesday, February 4, 2025

Your Doctor Is Like Shakespeare (And That’s A Problem)

by Kyle Munkittrick

When I think about AI, I think about poor Queen Elizabeth.

Imagine being her: you have access to Shakespeare — in his prime! You get to see a private showing of A Midsummer Night’s Dream at the height of the players’ skill and the Bard’s craft. And then… that’s it. You’ve hit the entertainment ceiling for the month. Bored? Your other options include plays by not Shakespeare, your jester, and watching animals fight to the death.

Shakespeare and his audiences were limited not by his genius but by physics. One stage, one performance, one audience at a time. Even at their peak his plays probably reached fewer people in his entire lifetime than a mediocre TikTok does before lunch.

Today we have an embarrassment of entertainment. I’m not saying Dune – Part 2 or Succession or Taylor Swift: The Eras Tour are the same as Shakespeare, but I am going to make the bold claim that they are, in fact, better than bear-baiting. My second, and perhaps bolder claim, is that AI is going to let ‘knowledge workers’ scale like entertainers can today.

Consider this tweet from Amanda Askell, the “philosopher & ethicist trying to make AI be good at Anthropic AI”:

If you can have a single AI employee, you can have thousands of AI employees. And yet the mental model for human-level AI assistants is often “I have a personal helper” rather than “I am now the CEO of a relatively large company”.

Askell is correct (she very often is, especially when you disagree with her). “I am going to be a CEO” is the mental model we should have, but it isn’t the mental model most of us have. Our mental models for human-level AI don’t quite work. There are lots of very practical predictions out there about what scaled intelligence means. I aim to make weirder ones. Read more »

Monday, January 20, 2025

Rather than OpenAI, let’s Open AI

by Ashutosh Jogalekar

In October last year, Charles Oppenheimer and I wrote a piece for Fast Company arguing that the only way to prevent an AI arms race is to open up the system. Drawing on a revolutionary early Cold War proposal for containing the spread of nuclear weapons, the Acheson-Lilienthal report, we argued that the foundational reason security cannot be obtained through secrecy is that science and technology hold no real "secrets" that cannot be discovered if smart scientists and technologists are given enough time to find them. That was certainly the case with the atomic bomb. Even as American politicians and generals boasted that the United States would maintain nuclear supremacy for decades, perhaps forever, Russia responded with its first nuclear weapon merely four years after the end of World War II. Other countries like the United Kingdom, China and France soon followed. The myth of secrecy was shattered.

As if on cue after our article was written, in December 2024, a new large language model (LLM) named DeepSeek v3 came out of China. DeepSeek v3 is a completely homegrown model built by a homegrown Chinese entrepreneur who was educated in China (that last point, while minor, is not unimportant: China's best increasingly are no longer required to leave their homeland to excel). The model turned heads immediately because it was competitive with GPT-4 from OpenAI, which many consider the state of the art among LLMs. In fact, DeepSeek v3 is far beyond competitive on critical metrics: GPT-4 used about 1 trillion training parameters, DeepSeek v3 used 671 billion; GPT-4 was trained on about 1 trillion tokens, DeepSeek v3 on almost 15 trillion. Most impressively, DeepSeek v3 cost only $5.58 million to train, while GPT-4 cost about $100 million. That's a qualitatively significant difference: only the best-funded startups or large tech companies have $100 million to spend on training their AI model, but $5.58 million is well within the reach of many small startups.
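To put those figures side by side, here is a quick back-of-the-envelope comparison in Python, using only the numbers quoted above (they are the estimates cited in this piece; neither lab has published exact, audited figures):

```python
# Back-of-the-envelope comparison using the estimates quoted in the text above.
gpt4 = {"params": 1.0e12, "tokens": 1.0e12, "train_cost_usd": 100e6}
deepseek_v3 = {"params": 671e9, "tokens": 15e12, "train_cost_usd": 5.58e6}

print(f"Parameters:    DeepSeek v3 uses "
      f"{deepseek_v3['params'] / gpt4['params']:.0%} of GPT-4's parameter count")
print(f"Training data: DeepSeek v3 saw roughly "
      f"{deepseek_v3['tokens'] / gpt4['tokens']:.0f}x more tokens")
print(f"Training cost: GPT-4 cost roughly "
      f"{gpt4['train_cost_usd'] / deepseek_v3['train_cost_usd']:.0f}x more to train")
```

Run as written, this prints a parameter count about two-thirds of GPT-4's, roughly fifteen times the training data, and a training bill roughly eighteen times smaller.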

Perhaps the biggest difference is that DeepSeek v3 is open-source while GPT-4 is not. The only comparable open-source model from the United States is Llama, developed by Meta. If this feature of DeepSeek v3 is not ringing massive alarm bells in the heads of American technologists and political leaders, it should. It's a reaffirmation of the central point that there are very few secrets in science and technology that cannot be discovered sooner or later by a technologically advanced country.

One might argue that DeepSeek v3 cost a fraction of what the best LLMs cost to train because it stood on the shoulders of these giants, but that's precisely the point: like other software, LLMs follow the standard rule of precipitously diminishing marginal cost. More importantly, the open-source, low-cost nature of DeepSeek v3 means that China now has the capability of capturing the world LLM market before the United States, as millions of organizations and users make DeepSeek v3 the foundation on which to build their AI. Once again, the quest for security and technological primacy through secrecy will have proved ephemeral, just as it did for nuclear weapons.

What does the entry of DeepSeek v3 indicate in the grand scheme of things? It is important to dispel three myths and answer some key questions. Read more »

Monday, October 28, 2024

What Would An AI Treaty Between Countries Look Like?

by Ashutosh Jogalekar

A stamp commemorating the Atoms for Peace program inaugurated by President Dwight Eisenhower. An AI For Peace program awaits (Image credit: International Peace Institute)

The visionary physicist and statesman Niels Bohr once succinctly distilled the essence of science as “the gradual removal of prejudices”. Among these prejudices, few are more prominent than the belief that nation-states can strengthen their security by keeping critical, futuristic technology secret. This belief was dispelled quickly in the Cold War, as nine nuclear states with competent scientists and engineers and adequate resources acquired nuclear weapons, leading to the nuclear proliferation that Bohr, Robert Oppenheimer, Leo Szilard and other far-seeing scientists had warned political leaders would ensue if the United States and other countries insisted on security through secrecy. Secrecy, instead of keeping destructive nuclear technology confined, had instead led to mutual distrust and an arms race that, octopus-like, had enveloped the globe in a suicide belt of bombs which at its peak numbered almost sixty thousand.

But if not secrecy, then how would countries achieve the security they craved? The answer, as it counterintuitively turned out, was by making the world a more open place, by allowing inspections and crafting treaties that reduced the threat of nuclear war. Through hard-won wisdom and sustained action, politicians, military personnel and ordinary citizens and activists realized that the way to safety and security was through mutual conversation and cooperation. That international cooperation, most notably between the United States and the Soviet Union, achieved the extraordinary reduction of the global nuclear stockpile from tens of thousands to about twelve thousand, with the United States and Russia still accounting for more than ninety percent.

A similar potential future of promise on one hand and destruction on the other awaits us through the recent development of another groundbreaking technology: artificial intelligence. Since 2022, AI has shown striking progress, especially through the development of large language models (LLMs) which have demonstrated the ability to distill large volumes of knowledge and reasoning and interact in natural language. Accompanied by their reliance on mountains of computing power, these and other AI models are posing serious questions about the possibility of disrupting entire industries, from scientific research to the creative arts. More troubling is the breathless interest from governments across the world in harnessing AI for military applications, from smarter drone targeting to improved surveillance to better military hardware supply chain optimization. 

Commentators fear that massive interest in AI from the Chinese and American governments in particular, shored up by unprecedented defense budgets and geopolitical gamesmanship, could lead to a new AI arms race akin to the nuclear arms race. Like the nuclear arms race, the AI arms race would involve the steady escalation of each country’s AI capabilities for offense and defense until the world reaches an unstable quasi-equilibrium that would enable each country to erode or take out critical parts of their adversary’s infrastructure and risk their own. Read more »

Monday, March 11, 2024

Failed American Startups: The Pony Express and Pets.Com

by Mark Harvey

Mark Twain’s two rules for investing: 1) Don’t invest when you can’t afford to. 2) Don’t invest when you can.

Stamp commemorating the Pony Express

Hemorrhaging money and high burn rates on startups is not something new in American culture. We’ve been doing it for a couple hundred years. Take the pony express, for example. That celebrated mail delivery company–a huge part of western lore–only lasted about eighteen months. The idea was to deliver mail across the western side of the US from Missouri to California, where there was still no contiguous telegraph or railway connection. In some ways the pony express was a huge success, even if only in showing the vast amount of country wee brave men could cover on a horse in a short amount of time. I say wee because pony express riders were required to weigh less than 125 pounds, kind of like modern jockeys.

In just a few months, three business partners, William Russell, Alexander Majors, and William Waddell, established 184 stations, purchased about 400 horses, hired 80 riders, and set the thing into motion. On April 3, 1860, the first express rider left St. Joseph, Missouri with a mail pouch containing 50 letters and five telegrams. Ten days later, the letters arrived in Sacramento, some 1,900 miles away. The express riders must have been ridiculously tough men, covering up to 100 miles in a single ride using multiple horses staged along the route. Anyone who’s ever ridden just 30 miles in a day knows how tired it makes a person.

But the company didn’t last. For one thing, the continental-length telegraph system was completed in October of 1861 when the two major telegraph companies, the Overland and the Pacific, joined lines in Salt Lake City. You’d think that the messieurs who started the pony express and who were otherwise very successful businessmen would have seen this disruptive technology on the horizon. Maybe they did and they just wanted to open what was maybe the coolest startup on the face of the earth, even if it only lasted a year and a half. Read more »

Monday, November 27, 2023

The case for American scientific patriotism

by Ashutosh Jogalekar

Hans Bethe receiving the Enrico Fermi Award – the country’s highest award in the field of nuclear science – from President John F. Kennedy in 1961. His daughter, Monica, is standing at the back. To his right is Glenn Seaborg, Chairman of the Atomic Energy Commission.

John von Neumann emigrated from Hungary in 1933 and settled in Princeton, NJ. During World War 2, he contributed a key idea to the design of the plutonium bomb at Los Alamos. After the war he became a highly sought-after government consultant and did important work kickstarting the United States’s ICBM program. He was known for his raucous parties and love of children’s toys.

Enrico Fermi emigrated from Italy in 1938 and settled first in New York and then in Chicago, IL. At Chicago he built the world’s first nuclear reactor. He then worked at Los Alamos where there was an entire division devoted to him. After the war Fermi worked on the hydrogen bomb and trained talented students at the University of Chicago, many of whom went on to become scientific leaders. After coming to America, in order to improve his understanding of colloquial American English, he read Li’l Abner comics.

Hans Bethe emigrated from Germany in 1935 and settled in Ithaca, NY, becoming a professor at Cornell University. He worked out the series of nuclear reactions that power the sun, work for which he received the Nobel Prize in 1967. During the war Bethe was the head of the theoretical physics division of the Manhattan Project. He spent the rest of his long life working extensively on arms control, advising presidents to make the best use of the nuclear genie he and his colleagues had unleashed, and advocating peaceful uses of nuclear energy. He was known for his hearty appetite and passion for stamp collecting.

Victor Weisskopf, born in Austria, emigrated from Germany in 1937 and settled in Rochester, NY. After working on the Manhattan Project, he became a professor at MIT and later director-general of CERN, the European particle physics laboratory that discovered many new fundamental particles including the Higgs boson. He was also active in arms control. A gentle humanist, he would entertain colleagues with his renditions of Beethoven sonatas on the piano.

Von Neumann, Fermi, Bethe and Weisskopf were all American patriots. Read more »

Monday, May 29, 2023

Responsibility Gaps: A Red Herring?

by Fabio Tollon

What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, it is of course the company who created the car who should be held responsible! They built the car, trained the AI-system, and deployed it.

However, this answer is a bit hasty. The worry here is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company or any individual engineers or software developers responsible. To go back to the example of the self-driving car: it may be the case that due to the car’s ability to act outside of the control of the original developers, their responsibility would be ‘cancelled’, and it would be inappropriate to hold them responsible.

Moreover, it may be the case that the machine in question is not sufficiently autonomous or agential for it to be responsible itself. This is certainly true of all currently existing AI-systems and may be true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.

In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps. Read more »

Monday, February 6, 2023

Technology: Instrumental, Determining, or Mediating?

by Fabio Tollon

DALL·E generated image with the prompt “Impressionist oil painting disruptive technology”

We take words quite seriously. We also take actions quite seriously. We don’t take things as seriously, but this is changing.

We live in a society where the value of a ‘thing’ is often linked to, or determined by, what it can do or what it can be used for. Underlying this is an assumption about the value of “things”: their only value consists in the things they can do. Call this instrumentalism. Instrumentalism, about technology more generally, is an especially intuitive idea. Technological artifacts (‘things’) have no agency of their own, would not exist without humans, and therefore are simply tools that are there to be used by us. Their value lies in how we decide to use them, which opens up the possibility of radical improvement to our lives. Technology is a neutral means with which we can achieve human goals, whether these be good or evil.

In contrast to this instrumentalist view there is another view on technology, which claims that technology is not neutral at all, but that it instead has a controlling or alienating influence on society. Call this view technological determinism. Such determinism regarding technology is often justified by, well, looking around. The determinist thinks that technological systems take us further away from an ‘authentic’ reality, or that those with power develop and deploy technologies in ways that increase their ability to control others.

So, the instrumentalist view sees some promise in technology, and the determinist not so much. However, there is in fact a third way to think about this issue: mediation theory. Dutch philosopher Peter-Paul Verbeek, drawing on the postphenomenological work of Don Ihde, has proposed a “thingy turn” in our thinking about the philosophy of technology. This we can call the mediation account of technology. This takes us away from both technological determinism and instrumentalism. Here’s how. Read more »

Monday, July 4, 2022

Robots, Emotions, and Relational Capacities

by Fabio Tollon

The Talon Bomb Disposal Robot is used by U.S. Army Special Forces teams for remote-controlled explosive ordnance disposal.

I take it as relatively uncontroversial that you, dear reader, experience emotions. There are times when you feel sad, happy, relieved, overjoyed, pessimistic, or hopeful. Often it is difficult to know exactly which emotion we are feeling at a particular point in time, but, for the most part, we can be fairly confident that we are experiencing some kind of emotion. Now we might ask, how do you know that others are experiencing emotions? Well, straightforwardly enough, they could tell you. But, more often than not, we read their body language, tone, and overall behaviour in order to figure out what might be going on inside their heads. Now, we might ask, what is stopping a machine from doing all of these things? Can a robot have emotions? I’m not really convinced that this question makes sense, given the kinds of things that robots are. However, I have the sense that whether or not robots can really have emotions is independent of whether we will treat them as if they have emotions. The metaphysics, then, seems to be a bit messy, so I’m going to do something naughty and bracket the metaphysics. Let’s take the ‘as if’ seriously, and consider social robots.

Taking this pragmatic approach means we don’t need to have a refined theory of what emotions are, or whether agents “really” have them or not. Instead, we can ask questions about how likely it is that humans will attribute emotions or agency to robots. Turns out, we do this all the time! Human beings seem to have a natural propensity to attribute consciousness and agency (phenomena that are often closely linked to the ability to have emotions) to entities that look and behave as if they have those properties. This kind of tendency seems to be a product of our pattern tracking abilities: if things behave in a certain way, we put them in a certain category, and this helps us keep track of and make sense of the world around us.

While this kind of strategy makes little sense if we are trying to explain and understand the inner workings of a system, it makes a great deal of sense if all we are interested in is trying to predict how an entity might behave or respond. Consider the famous case of bomb-defusing robots, which are modelled on stick insects. Read more »

Monday, April 11, 2022

The ‘Soft’ Impacts of Emerging Technology

by Fabio Tollon

Getting a handle on the various ways that technology influences us is as important as it is difficult. The media is awash with claims of how this or that technology will either save us or doom us. And in some cases, it does seem as though we have a concrete grasp on the various costs and benefits that a technology provides. We know that CO2 emissions from large-scale animal agriculture are very damaging for the environment, notwithstanding the increases in food production we have seen over the years. However, such a ‘balanced’ perspective usually emerges after some time has passed and the technology has become ‘stable’, in the sense that its uses and effects are relatively well understood. We now understand, better than we did in the 1920s, for example, the disastrous effects of fossil fuels and CO2 emissions. We can see that the technology at some point provided a benefit, but that now the costs outweigh those benefits. For emerging technologies, however, such a ‘cost-benefit’ approach might not be possible in practice.

Take a simple example: imagine a private company is accused of polluting a river due to chemical runoff from a new machine they have installed (unfortunately this probably does not require much imagination and can be achieved by looking outside, depending on where you live). In order to determine whether the company is guilty or not we would investigate the effects of their activities. We could take water samples from the river and attempt to show that the chemicals used in the company’s manufacturing process are indeed present in the water. Further, we could make an argument where we show how there is a causal relationship between the presence of these chemicals and certain detrimental effects that might be observed in the area, such as loss of biodiversity, the pollution of drinking water, or an increase in diseases associated with the chemical in question. Read more »

Monday, March 14, 2022

Virtue Ethics, Technology, and the Situationist Challenge

by Fabio Tollon

In a previous article I argued that, when it comes to our moral appraisal of emerging technologies, the best normative framework to use is that of virtue ethics. The reasons for this were that virtue ethics succeeds in ways that consequentialist or deontological theories fail. Specifically, these other theories posit fixed principles that seem incapable of accommodating the unpredictable effects that emerging technologies will have not only on how we view ourselves, but also on the ways in which they will interact with our current social and cultural practices.

However, while virtue ethics might be superior in the sense that it is able to be context-sensitive in a way that these other theories are not, it is not without problems of its own. The most significant of these is what is known as the ‘situationist challenge’, which targets the heart of virtue ethics and argues that situational influences trump dispositional ones. In this article I will defend virtue ethics from this objection and in the process show that it remains our best means for assessing the moral effects of emerging technologies.

So, what exactly is the situationist challenge contesting? In order for any fleshed-out theory of virtue to make sense, it must be the case that something like ‘virtues’ exist and are attainable by human beings, and that they are reliably expressed by agents. For example, traits such as generosity, arrogance, and bravery are dispositions to react in particular ways to certain trait-eliciting circumstances. If agents do not react reliably in these circumstances, it makes little sense to traffic in the language of the virtues. Calling someone ‘generous’ makes no sense if they only acted the way that they did out of habit or because someone happened to be watching them. Read more »

Virtue Ethics and Emerging Technologies

by Fabio Tollon

In 2007 Wesley Autrey noticed a young man, Cameron Hollopeter, having a seizure at a subway station in Manhattan. Autrey borrowed a pen and used it to keep Hollopeter’s jaw open. After the seizure, Hollopeter stumbled and fell from the platform onto the tracks. As Hollopeter lay there, Autrey noticed the lights of an oncoming train, and so he jumped in after him. However, after getting to the tracks, he realized there would not be enough time to get Hollopeter out of harm’s way. Instead, he protected Hollopeter by moving him to a drainage trench between the tracks, throwing his body over Hollopeter’s. Both of them narrowly avoided being hit by the train, and the call was close enough that Autrey had grease on his hat afterwards. For this Autrey was awarded the Bronze Medallion, New York City’s highest award for exceptional citizenship and outstanding achievement.

In 2011, Karl-Theodor zu Guttenberg, a member of the Bundestag, was found guilty of plagiarism after a month-long public outcry. He had plagiarized large parts of his doctoral dissertation, where it was found that he had copied whole sections of work from newspapers, undergraduate papers, speeches, and even from his supervisor. About half of his entire dissertation was stuffed with uncited work. Thousands of doctoral students and professors in Germany signed a letter lambasting then-chancellor Angela Merkel’s weak response, and eventually his degree was revoked, and he ended up resigning from the Bundestag.

Now we might ask: what explains this variation in human behaviour? Why did Guttenberg plagiarize his PhD, and why did Autrey put his life in danger to save a stranger? Read more »

Monday, August 30, 2021

Irrationality, Artificial Intelligence, and the Climate Crisis

by Fabio Tollon

Human beings are rather silly creatures. Some of us cheer billionaires into space while our planet burns. Some of us think vaccines cause autism, that the earth is flat, that anthropogenic climate change is not real, that COVID-19 is a hoax, and that diamonds have intrinsic value. Many of us believe things that are not fully justified, and we continue to believe these things even in the face of new evidence that goes against our position. This is to say, many people are woefully irrational. However, what makes this state of affairs perhaps even more depressing is that even if you think you are a reasonably well-informed person, you are still far from being fully rational. Decades of research in social psychology and behavioural economics have shown that not only are we horrific decision makers, we are also consistently horrific. This makes sense: we all have fairly similar ‘hardware’ (in the form of brains, guts, and butts) and thus it follows that there would be widely shared inconsistencies in our reasoning abilities.

This is all to say, in a very roundabout way, we get things wrong. We elect the wrong leaders, we believe the wrong theories, and we act in the wrong ways. All of this becomes especially disastrous in the case of climate change. But what if there was a way to escape this tragic epistemic situation? What if, with the use of an AI-powered surveillance state, we could simply make it impossible for us to do the ‘wrong’ things? As Ivan Karamazov notes in the tale of The Grand Inquisitor (in The Brothers Karamazov by Dostoevsky), the Catholic Church should be praised because it has “vanquished freedom… to make men happy”. By doing so it has “satisfied the universal and everlasting craving of humanity – to find someone to worship”. Human beings are incapable of managing their own freedom. We crave someone else to tell us what to do, and, so the argument goes, it would be in our best interest to have an authority (such as the Catholic Church, as in the original story) with absolute power ruling over us. This, however, contrasts sharply with liberal-democratic norms. My goal is to show that we can address the issues raised by climate change without reinventing the liberal-democratic wheel. That is, we can avoid the kind of authoritarianism dreamed up by Ivan Karamazov. Read more »

Monday, July 5, 2021

How Can We Be Responsible For the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future.  Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try and apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems which discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is that it can uncover correlations that are in some cases impossible to see with only human cognitive powers. Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes it impossible for us to be responsible for it in a forward-looking sense.
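A toy sketch of that point, with made-up data and only standard Python libraries: the programmer writes generic code that scores correlations, but which relationship the code actually surfaces is determined by whatever data it is given, not by anything the programmer specified in advance.

```python
import numpy as np

# Toy illustration: this code knows nothing about which variables are related;
# it simply measures correlations in the (synthetic) data it is handed.
rng = np.random.default_rng(0)
n = 1000
data = {
    "age": rng.normal(40, 10, n),
    "coffee_cups": rng.poisson(2, n).astype(float),
    "screen_time": rng.normal(5, 2, n),
}
# A relationship hidden in the synthetic data, unknown to the "analyst" code:
data["sleep_hours"] = 8 - 0.4 * data["coffee_cups"] + rng.normal(0, 0.5, n)

names = list(data)
matrix = np.corrcoef([data[k] for k in names])

# Report the strongest correlation between two distinct variables.
best = max(((abs(matrix[i, j]), names[i], names[j])
            for i in range(len(names)) for j in range(i + 1, len(names))),
           key=lambda t: t[0])
print(f"Strongest correlation found: {best[1]} ~ {best[2]} (|r| = {best[0]:.2f})")
```

Real machine learning systems do this at vastly larger scale and with far richer patterns than pairwise correlation, which is precisely why their outputs are so hard to anticipate in advance.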

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »

Monday, June 7, 2021

Can Technology Undermine Character?

by Fabio Tollon

What is “character”? In general, we might say that the character of something is what distinguishes it from other things. Sedimentary rocks have a certain “character” that distinguishes them from igneous rocks, for example. Rocks, however, do not have personality (so far as I can tell). Human beings have personality, and it is thought that there is some connection between personality and character, but my interest does not lie in how exactly this relation works, as here I will be concerned with character specifically. To that end, we might specify that character is a collection of properties that distinguishes one individual person from another. When we say “she is wise” we are saying something about her personality, but we are also judging her character: we are, in effect, claiming that we admire her, due to some feature of her character. There could be myriad reasons for this. Perhaps she takes a keen interest in the world around her, has well-formed beliefs, reads many books, etc. In the case where she indeed displays the virtues associated with being wise, we would say that our assessment of her character is fitting, that is, such an assessment correctly identifies the kinds of things she stands for and values. The question I want to consider is whether the value laden nature of technology undermines our ability to make such character assessments. Read more »

Monday, May 10, 2021

Is Tesla the Future of the Auto Industry?

by Fabio Tollon

Tesla Model S

Elon Musk. Either you love him or you love to hate him. He is glorified by some as a demi-god who will lead humanity to the stars (because if there’s one thing we need, it’s more planets to plunder) and vilified by others as a Silicon Valley hack who is as hypocritical as he is wealthy (very). When one is confronted by such contradictory and binary views the natural intuition is to take a step back and assess the available evidence. Usually this leads to a more nuanced understanding of the subject matter, often resulting in a less binary, and somewhat more coherent, narrative. Usually.

The idea to write something about Musk was the result of the reality bending adventure that was Edward Niedermeyer’s Ludicrous: The Unvarnished Story of Tesla Motors.

Let us take a look at the basics. Musk is a Silicon Valley entrepreneur who made a fortune by helping to found PayPal. Using the capital gained from this venture, he invested $30 million into Tesla Motors and became chairman of its board of directors in 2004. He also eventually ousted the company’s founders, Martin Eberhard and Marc Tarpenning. He is currently CEO of Tesla, Inc. (the name was officially changed from Tesla Motors to Tesla in 2017) and is in regular competition with human rights champion Jeff Bezos for the glamorous title of “world’s most successful hoarder of capital”. I don’t want to spend too much time on the psychology of Elon Musk, as Nathan Robinson has already done a fine job in this regard. Rather, I want to focus on how Tesla is not the market-disrupting company many think it is. Here I will be concerned with the mismatch between Silicon Valley’s software-driven innovation and the kind of innovation that exists in the auto industry. Read more »

Monday, March 15, 2021

More Than Just Design: Affordances as Embodying Value in Technological Artifacts

by Fabio Tollon

It is natural to assume that technological artifacts have instrumental value. That is, the value of a given technology lies in the various ways in which we can use it, no more, and no less. For example, the value of a hammer lies in our ability to make use of it to hit nails into things. Cars are valuable insofar as we can use them to get from A to B with the bare minimum of physical exertion. This way of viewing technology has immense intuitive appeal, but I think it is ultimately unconvincing. More specifically, I want to argue that technological artifacts are capable of embodying value. Some argue that this value is to be accounted for in terms of the designed properties of the artifact, but I will take a different approach. I will suggest that artifacts can come to embody values based on their affordances.

Before doing so, however, I need to convince you that the instrumental view of technology is wrong. While some technological artifacts are perhaps merely instrumentally valuable, there are others that are clearly not so. There are two ways to see this. First, just reflect on all the ways in which technologies are not simply tools waiting to be used by us but are rather mediators in our experience of reality. Technological artifacts are no longer simply “out there” waiting to be used but are rather part of who we are (or at least, who we are becoming). Wearable technology (such as fitness trackers or smart watches) provides us with a stream of biometric information. This information changes the way in which we experience ourselves and the world around us. Bombarded with this information, we might use such technology to peer pressure ourselves into exercising (Apple allows you to get updates, beamed directly to your watch, of when your friends exercise. It is an open question whether this will encourage resentment from those who see their friends have run a marathon while they spent the day on the couch eating Ritter Sport.), or we might use it to stay up to date with the latest news (by enabling smart notifications). In either case, the point is that these technologies do not merely disclose the world “as it is” to us, but rather open up new aspects of the world, and thus come to mediate our experiences. Read more »

GPT-3 Understands Nothing

by Fabio Tollon

It is becoming increasingly common to talk about technological systems in agential terms. We routinely hear about facial recognition algorithms that can identify individuals, large language models (such as GPT-3) that can produce text, and self-driving cars that can, well, drive. Recently, Forbes magazine even awarded GPT-3 “person” of the year for 2020. In this piece I’d like to take some time to reflect on GPT-3. Specifically, I’d like to push back against the narrative that GPT-3 somehow ushers in a new age of artificial intelligence.

GPT-3 (Generative Pre-trained Transformer) is a third-generation, autoregressive language model. It makes use of deep learning to produce human-like text, such as sequences of words (or code, or other data), after being fed an initial “prompt” which it then aims to complete. The model itself was trained on Microsoft’s Azure supercomputer, uses 175 billion parameters (its predecessor used a mere 1.5 billion) and makes use of unlabeled datasets (such as Wikipedia). This training isn’t cheap, with a price tag of $12 million. Once trained, the system can be used in a wide array of contexts, from language translation to summarization to question answering.
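To make “autoregressive” concrete, here is a minimal toy sketch in plain Python: a model that learns which word tends to follow which in a tiny corpus and then completes a prompt one token at a time. It shares nothing with GPT-3’s scale or transformer architecture; it only illustrates the predict-next-token-and-append loop.

```python
import random
from collections import Counter, defaultdict

# Toy autoregressive "language model": count which word follows which,
# then complete a prompt one token at a time. GPT-3 replaces this frequency
# table with a 175-billion-parameter transformer, but the loop is the same idea.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # next-token frequencies

def complete(prompt, n_tokens=8, seed=0):
    random.seed(seed)
    tokens = prompt.split()
    for _ in range(n_tokens):
        options = follows.get(tokens[-1])
        if not options:              # unseen context: stop generating
            break
        words, counts = zip(*options.items())
        tokens.append(random.choices(words, weights=counts)[0])
    return " ".join(tokens)

print(complete("the cat"))
```

GPT-3 does the same kind of thing with a context far longer than a single word and next-token probabilities computed by a neural network rather than a lookup table, which is why its completions read so fluently.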

Most of you will recall the fanfare that surrounded The Guardian’s publication of an article that was written by GPT-3. Many people were astounded at the text that was produced, and indeed, this speaks to the remarkable effectiveness of this particular computational system (or perhaps it speaks more to our willingness to project understanding where there might be none, but more on this later). How GPT-3 produced this particular text is relatively simple. Basically, it takes in a query and then attempts to offer relevant answers, using the massive amounts of data at its disposal to do so. How different this is, in kind, from what Google’s search engine does is debatable. In the case of Google, you wouldn’t think that it “understands” your searches. With GPT-3, however, people seemed to get the impression that it really did understand the queries, and that its answers, therefore, were a result of this supposed understanding. This of course lends far more credence to its responses, as it is natural to think that someone who understands a given topic is better placed to answer questions about that topic. To believe this in the case of GPT-3 is not just bad science fiction, it’s pure fantasy. Let me elaborate. Read more »

Monday, January 18, 2021

Towards Responsible Research and Innovation

by Fabio Tollon

In the media it is relatively easy to find examples of new technologies that are going to “revolutionize” this or that industry. Self-driving cars will change the way we travel and mitigate climate change, genetic engineering will allow for designer babies and prevent disease, superintelligent AI will turn the earth into an intergalactic human zoo. As a reader, you might be forgiven for being in a constant state of bewilderment as to why we do not currently live in a communist utopia (or why we are not already in cages). We are incessantly badgered with lists of innovative technologies that are going to uproot the way we live, and the narrative behind these innovations is overwhelmingly positive (call this a “pro-innovation bias”). What is often missing in such “debates”, however, is a critical voice. There is a sense in which we treat “innovation” as a good in itself, but it is important that we innovate responsibly. Or so I will argue. Read more »

Monday, October 26, 2020

What John von Neumann really did at Los Alamos

by Ashutosh Jogalekar

John von Neumann (Image: Life Magazine)

During a wartime visit to England in early 1943, John von Neumann wrote a letter to his fellow mathematician Oswald Veblen at the Institute for Advanced Study in Princeton, saying:

“I think I have learned a great deal of experimental physics here, particularly of the gas dynamical variety, and that I shall return a better and impurer man. I have also developed an obscene interest in computational techniques…”

This seemingly mundane communication was to foreshadow a decisive effect on the development of two overwhelmingly important aspects of 20th and 21st century technology – the development of computing and the development of nuclear weapons.

Johnny von Neumann was the multifaceted intellectual diamond of the 20th century. He contributed so many seminal ideas to so many fields so quickly that it would be impossible for any one person to summarize, let alone understand them. He may have been the last universalist in mathematics, having almost complete command of both pure and applied mathematics. But he didn’t stop there. After making fundamental contributions to operator algebra, set theory and the foundations of mathematics, he revolutionized at least two different and disparate fields – economics and computer science – and made contributions to a dozen others, each of which would have been important enough to enshrine his name in scientific history.

But at the end of his relatively short life which was cut down cruelly by cancer, von Neumann had acquired another identity – that of an American patriot who had done more than almost anyone else to make sure that his country was well-defended and ahead of the Soviet Union in the rapidly heating Cold War. Like most other contributions of this sort, this one had a distinctly Faustian gleam to it, bringing both glory and woe to humanity’s experiments in self-elevation and self-destruction. Read more »