What Would An AI Treaty Between Countries Look Like?

by Ashutosh Jogalekar

A stamp commemorating the Atoms for Peace program inaugurated by President Dwight Eisenhower. An AI For Peace program awaits (Image credit: International Peace Institute)

The visionary physicist and statesman Niels Bohr once succinctly distilled the essence of science as “the gradual removal of prejudices”. Among these prejudices, few are more prominent than the belief that nation-states can strengthen their security by keeping critical, futuristic technology secret. That belief was quickly dispelled in the Cold War, as nine states with competent scientists and engineers and adequate resources acquired nuclear weapons, producing the proliferation that Bohr, Robert Oppenheimer, Leo Szilard and other far-seeing scientists had warned political leaders would ensue if the United States and other countries insisted on security through secrecy. Secrecy, instead of keeping destructive nuclear technology confined, led to mutual distrust and an arms race that, octopus-like, enveloped the globe in a suicide belt of bombs numbering almost sixty thousand at its peak.

But if not secrecy, then how would countries achieve the security they craved? The answer, as it counterintuitively turned out, was by making the world a more open place, by allowing inspections and crafting treaties that reduced the threat of nuclear war. Through hard-won wisdom and sustained action, politicians, military personnel and ordinary citizens and activists realized that the way to safety and security was through mutual conversation and cooperation. That international cooperation, most notably between the United States and the Soviet Union, achieved the extraordinary reduction of the global nuclear stockpile from tens of thousands to about twelve thousand, with the United States and Russia still accounting for more than ninety percent.

A similar future of promise on one hand and destruction on the other awaits us through the recent development of another groundbreaking technology: artificial intelligence. Since 2022, AI has shown striking progress, especially through the development of large language models (LLMs), which have demonstrated the ability to distill large volumes of knowledge and reasoning and to interact in natural language. These and other AI models, with their reliance on mountains of computing power, are posing serious questions about the possibility of disrupting entire industries, from scientific research to the creative arts. More troubling is the breathless interest from governments across the world in harnessing AI for military applications, from smarter drone targeting to improved surveillance to better military hardware supply chain optimization.

Commentators fear that massive interest in AI from the Chinese and American governments in particular, shored up by unprecedented defense budgets and geopolitical gamesmanship, could lead to a new AI arms race akin to the nuclear arms race. Like the nuclear arms race, an AI arms race would involve the steady escalation of each country’s AI capabilities for offense and defense until the world reaches an unstable quasi-equilibrium in which each country could erode or take out critical parts of its adversary’s infrastructure while risking its own. Read more »



Monday, March 11, 2024

Failed American Startups: The Pony Express and Pets.Com

by Mark Harvey

Mark Twain’s two rules for investing: 1) Don’t invest when you can’t afford to. 2) Don’t invest when you can.

Stamp commemorating the Pony Express

Hemorrhaging money on high-burn-rate startups is nothing new in American culture. We’ve been doing it for a couple hundred years. Take the Pony Express, for example. That celebrated mail delivery company–a huge part of western lore–only lasted about eighteen months. The idea was to deliver mail across the western side of the US from Missouri to California, where there was still no contiguous telegraph or railway connection. In some ways the Pony Express was a huge success, if only in showing the vast amount of country wee brave men could cover on a horse in a short amount of time. I say wee because Pony Express riders were required to weigh less than 125 pounds, kind of like modern jockeys.

In just a few months, three business partners, William Russell, Alexander Majors, and William Waddell, established 184 stations, purchased about 400 horses, hired 80 riders, and set the thing into motion. On April 3, 1860, the first express rider left St. Joseph, Missouri with a mail pouch containing 50 letters and five telegrams. Ten days later, the letters arrived in Sacramento, some 1,900 miles away. The express riders must have been ridiculously tough men, covering up to 100 miles in single rides using multiple horses staged along the route. Anyone who’s ever ridden just 30 miles in a day knows how tired it makes a person.

But the company didn’t last. For one thing, the transcontinental telegraph system was completed in October of 1861 when the two major telegraph companies, the Overland and the Pacific, joined lines in Salt Lake City. You’d think that the messieurs who started the Pony Express and who were otherwise very successful businessmen would have seen this disruptive technology on the horizon. Maybe they did, and they just wanted to open what was maybe the coolest startup on the face of the earth, even if it only lasted a year and a half. Read more »

Monday, November 27, 2023

The case for American scientific patriotism

by Ashutosh Jogalekar

Hans Bethe receiving the Enrico Fermi Award – the country’s highest award in the field of nuclear science – from President John F. Kennedy in 1961. His daughter, Monica, is standing at the back. To his right is Glenn Seaborg, Chairman of the Atomic Energy Commission.

John von Neumann emigrated from Hungary in 1933 and settled in Princeton, NJ. During World War II, he contributed a key idea to the design of the plutonium bomb at Los Alamos. After the war he became a highly sought-after government consultant and did important work kickstarting the United States’ ICBM program. He was known for his raucous parties and love of children’s toys.

Enrico Fermi emigrated from Italy in 1938 and settled first in New York and then in Chicago, IL. At Chicago he built the world’s first nuclear reactor. He then worked at Los Alamos where there was an entire division devoted to him. After the war Fermi worked on the hydrogen bomb and trained talented students at the University of Chicago, many of whom went on to become scientific leaders. After coming to America, in order to improve his understanding of colloquial American English, he read Li’l Abner comics.

Hans Bethe emigrated from Germany in 1935 and settled in Ithaca, NY, becoming a professor at Cornell University. He worked out the series of nuclear reactions that power the sun, work for which he received the Nobel Prize in 1967. During the war Bethe was the head of the theoretical physics division of the Manhattan Project. He spent the rest of his long life working extensively on arms control, advising presidents to make the best use of the nuclear genie he and his colleagues had unleashed, and advocating peaceful uses of nuclear energy. He was known for his hearty appetite and passion for stamp collecting.

Victor Weisskopf, born in Austria, emigrated from Germany in 1937 and settled in Rochester, NY. After working on the Manhattan Project, he became a professor at MIT and the first director-general of CERN, the European particle physics laboratory that discovered many new fundamental particles including the Higgs boson. He was also active in arms control. A gentle humanist, he would entertain colleagues through his rendition of Beethoven sonatas on the piano.

Von Neumann, Fermi, Bethe and Weisskopf were all American patriots. Read more »

Monday, May 29, 2023

Responsibility Gaps: A Red Herring?

by Fabio Tollon

What should we do in cases where increasingly sophisticated and potentially autonomous AI-systems perform ‘actions’ that, under normal circumstances, would warrant the ascription of moral responsibility? That is, who (or what) is responsible when, for example, a self-driving car harms a pedestrian? An intuitive answer might be: Well, it is of course the company who created the car who should be held responsible! They built the car, trained the AI-system, and deployed it.

However, this answer is a bit hasty. The worry here is that the autonomous nature of certain AI-systems means that it would be unfair, unjust, or inappropriate to hold the company or any individual engineers or software developers responsible. To go back to the example of the self-driving car: it may be the case that due to the car’s ability to act outside of the control of the original developers, their responsibility would be ‘cancelled’, and it would be inappropriate to hold them responsible.

Moreover, it may be the case that the machine in question is not sufficiently autonomous or agential for it to be responsible itself. This is certainly true of all currently existing AI-systems and may be true far into the future. Thus, we have the emergence of a ‘responsibility gap’: Neither the machine nor the humans who developed it are responsible for some outcome.

In this article I want to offer some brief reflections on the ‘problem’ of responsibility gaps. Read more »

Monday, February 6, 2023

Technology: Instrumental, Determining, or Mediating?

by Fabio Tollon

DALL·E generated image with the prompt "Impressionist oil painting disruptive technology"

We take words quite seriously. We also take actions quite seriously. We don’t take things as seriously, but this is changing.

We live in a society where the value of a ‘thing’ is often linked to, or determined by, what it can do or what it can be used for. Underlying this is an assumption about the value of “things”: their only value consists in the things they can do. Call this instrumentalism. Instrumentalism, about technology more generally, is an especially intuitive idea. Technological artifacts (‘things’) have no agency of their own, would not exist without humans, and therefore are simply tools that are there to be used by us. Their value lies in how we decide to use them, which opens up the possibility of radical improvement to our lives. Technology is a neutral means with which we can achieve human goals, whether these be good or evil.

In contrast to this instrumentalist view there is another view on technology, which claims that technology is not neutral at all, but that it instead has a controlling or alienating influence on society. Call this view technological determinism. Such determinism regarding technology is often justified by, well, looking around. The determinist thinks that technological systems take us further away from an ‘authentic’ reality, or that those with power develop and deploy technologies in ways that increase their ability to control others.

So, the instrumentalist view sees some promise in technology, and the determinist not so much. However, there is in fact a third way to think about this issue: mediation theory. Dutch philosopher Peter-Paul Verbeek, drawing on the postphenomenological work of Don Ihde, has proposed a “thingy turn” in our thinking about the philosophy of technology. This we can call the mediation account of technology. This takes us away from both technological determinism and instrumentalism. Here’s how. Read more »

Monday, July 4, 2022

Robots, Emotions, and Relational Capacities

by Fabio Tollon

The Talon Bomb Disposal Robot is used by U.S. Army Special Forces teams for remote-controlled explosive ordnance disposal.

I take it as relatively uncontroversial that you, dear reader, experience emotions. There are times when you feel sad, happy, relieved, overjoyed, pessimistic, or hopeful. Often it is difficult to know exactly which emotion we are feeling at a particular point in time, but, for the most part, we can be fairly confident that we are experiencing some kind of emotion. Now we might ask, how do you know that others are experiencing emotions? Well, straightforwardly enough, they could tell you. But, more often than not, we read into their body language, tone, and overall behaviour in order to figure out what might be going on inside their heads. Now, we might ask, what is stopping a machine from doing all of these things? Can a robot have emotions? I’m not really convinced that this question makes sense, given the kinds of things that robots are. However, I have the sense that whether or not robots can really have emotions is independent of whether we will treat them as if they have emotions. The metaphysics seems to be a bit messy, so I’m going to do something naughty and bracket the metaphysics. Let’s take the as if seriously, and consider social robots.

Taking this pragmatic approach means we don’t need to have a refined theory of what emotions are, or whether agents “really” have them or not. Instead, we can ask questions about how likely it is that humans will attribute emotions or agency to robots. Turns out, we do this all the time! Human beings seem to have a natural propensity to attribute consciousness and agency (phenomena that are often closely linked to the ability to have emotions) to entities that look and behave as if they have those properties. This kind of tendency seems to be a product of our pattern tracking abilities: if things behave in a certain way, we put them in a certain category, and this helps us keep track of and make sense of the world around us.

While this kind of strategy makes little sense if we are trying to explain and understand the inner workings of a system, it makes a great deal of sense if all we are interested in is trying to predict how an entity might behave or respond. Consider the famous case of bomb-defusing robots, which are modelled on stick insects. Read more »

Monday, April 11, 2022

The ‘Soft’ Impacts of Emerging Technology

by Fabio Tollon

Getting a handle on the various ways that technology influences us is as important as it is difficult. The media is awash with claims of how this or that technology will either save us or doom us. And in some cases, it does seem as though we have a concrete grasp on the various costs and benefits that a technology provides. We know that CO2 emissions from large-scale animal agriculture are very damaging for the environment, notwithstanding the increases in food production we have seen over the years. However, such a ‘balanced’ perspective usually emerges after some time has passed and the technology has become ‘stable’, in the sense that its uses and effects are relatively well understood. We now understand, better than we did in the 1920s, for example, the disastrous effects of fossil fuels and CO2 emissions. We can see that the technology at some point provided a benefit, but that now the costs outweigh those benefits. For emerging technologies, however, such a ‘cost-benefit’ approach might not be possible in practice.

Take a simple example: imagine a private company is accused of polluting a river due to chemical runoff from a new machine they have installed (unfortunately this probably does not require much imagination and can be achieved by looking outside, depending on where you live). In order to determine whether the company is guilty or not we would investigate the effects of their activities. We could take water samples from the river and attempt to show that the chemicals used in the company’s manufacturing process are indeed present in the water. Further, we could make an argument where we show how there is a causal relationship between the presence of these chemicals and certain detrimental effects that might be observed in the area, such as loss of biodiversity, the pollution of drinking water, or an increase in diseases associated with the chemical in question. Read more »

Monday, March 14, 2022

Virtue Ethics, Technology, and the Situationist Challenge

by Fabio Tollon

In a previous article I argued that, when it comes to our moral appraisal of emerging technologies, the best normative framework to use is that of virtue ethics. The reasons for this were that virtue ethics succeeds in ways that consequentialist or deontological theories fail. Specifically, these other theories posit fixed principles that seem incapable of accommodating the unpredictable effects that emerging technologies will have not only on how we view ourselves, but also on the ways in which they will interact with our current social and cultural practices.

However, while virtue ethics might be superior in the sense that it is able to be context-sensitive in a way that these other theories are not, it is not without problems of its own. The most significant of these is what is known as the ‘situationist challenge’, which targets the heart of virtue ethics, and argues that situational influences trump dispositional ones. In this article I will defend virtue ethics from this objection and in the process show that it remains our best means for assessing the moral effects of emerging technologies.

So, what exactly is the situationist challenge contesting? In order for any fleshed-out theory of virtue to make sense, it must be the case that something like ‘virtues’ exist and are attainable by human beings, and that they are reliably expressed by agents. For example, traits such as generosity, arrogance, and bravery are dispositions to react in particular ways to certain trait-eliciting circumstances. If agents do not react reliably in these circumstances, it makes little sense to traffic in the language of the virtues. Calling someone ‘generous’ makes no sense if they only acted the way that they did out of habit or because someone happened to be watching them. Read more »

Virtue Ethics and Emerging Technologies

by Fabio Tollon

In 2007 Wesley Autrey noticed a young man, Cameron Hollopeter, having a seizure on a subway station in Manhattan. Autrey borrowed a pen and used it to keep Hollopeter’s jaw open. After the seizure, Hollopeter stumbled and fell from the platform onto the tracks. As Hollopeter lay there, Autrey noticed the lights from an oncoming train, and so he jumped in after him. However, after getting to the tracks, he realized there would not be enough time to get Hollopeter out of harm’s way. Instead, he protected Hollopeter by moving him to a drainage trench between the tracks, throwing his body over Hollopeter’s. Both of them narrowly avoided being hit by the train, and the call was close enough that Autrey had grease on his hat afterwards. For this Autrey was awarded the Bronze Medallion, New York City’s highest award for exceptional citizenship and outstanding achievement.

In 2011, Karl-Theodor zu Guttenberg, a member of the Bundestag, was found guilty of plagiarism after a month-long public outcry. He had plagiarized large parts of his doctoral dissertation, where it was found that he had copied whole sections of work from newspapers, undergraduate papers, speeches, and even from his supervisor. About half of his entire dissertation was stuffed with uncited work. Thousands of doctoral students and professors in Germany signed a letter lambasting then-chancellor Angela Merkel’s weak response, and eventually his degree was revoked, and he ended up resigning from the Bundestag.

Now we might ask: what explains this variation in human behaviour? Why did Guttenberg plagiarize his PhD, and why did Autrey put his life in danger to save a stranger? Read more »

Monday, August 30, 2021

Irrationality, Artificial Intelligence, and the Climate Crisis

by Fabio Tollon

Human beings are rather silly creatures. Some of us cheer billionaires into space while our planet burns. Some of us think vaccines cause autism, that the earth is flat, that anthropogenic climate change is not real, that COVID-19 is a hoax, and that diamonds have intrinsic value. Many of us believe things that are not fully justified, and we continue to believe these things even in the face of new evidence that goes against our position. This is to say, many people are woefully irrational. However, what makes this state of affairs perhaps even more depressing is that even if you think you are a reasonably well-informed person, you are still far from being fully rational. Decades of research in social psychology and behavioural economics have shown that not only are we horrific decision makers, we are also consistently horrific. This makes sense: we all have fairly similar ‘hardware’ (in the form of brains, guts, and butts) and thus it follows that there would be widely shared inconsistencies in our reasoning abilities.

This is all to say, in a very roundabout way, we get things wrong. We elect the wrong leaders, we believe the wrong theories, and we act in the wrong ways. All of this becomes especially disastrous in the case of climate change. But what if there was a way to escape this tragic epistemic situation? What if, with the use of an AI-powered surveillance state, we could simply make it impossible for us to do the ‘wrong’ things? As Ivan Karamazov notes in the tale of The Grand Inquisitor (in The Brothers Karamazov by Dostoevsky), the Catholic Church should be praised because it has “vanquished freedom… to make men happy”. By doing so it has “satisfied the universal and everlasting craving of humanity – to find someone to worship”. Human beings are incapable of managing their own freedom. We crave someone else to tell us what to do, and, so the argument goes, it would be in our best interest to have an authority (such as the Catholic Church, as in the original story) with absolute power ruling over us. This, however, contrasts sharply with liberal-democratic norms. My goal is to show that we can address the issues raised by climate change without reinventing the liberal-democratic wheel. That is, we can avoid the kind of authoritarianism dreamed up by Ivan Karamazov. Read more »

Monday, July 5, 2021

How Can We Be Responsible For the Future of AI?

by Fabio Tollon 

Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future.  Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (us) will ultimately lead to future generations having to suffer. An important question to consider is whether we ought to bear some moral responsibility for future states of affairs (known as forward-looking, or prospective, responsibility). In the case of climate change, it does seem as though we have a moral obligation to do something, and that should we fail, we are on the hook. One significant reason for this is that we can foresee that our actions (or inactions) now will lead to certain desirable or undesirable consequences. When we try and apply this way of thinking about prospective responsibility to AI, however, we might run into some trouble.

AI-driven systems are often by their very nature unpredictable, meaning that engineers and designers cannot reliably foresee what might occur once the system is deployed. Consider the case of machine learning systems which discover novel correlations in data. In such cases, the programmers cannot predict what results the system will spit out. The entire purpose of using the system is so that it can uncover correlations that are in some cases impossible to see with only human cognitive powers. Thus, the threat seems to come from the fact that we lack a reliable way to anticipate the consequences of AI, which perhaps makes being responsible for it, in a forward-looking sense, impossible.

Essentially, the innovative and experimental nature of AI research and development may undermine the relevant control required for reasonable ascriptions of forward-looking responsibility. However, as I hope to show, when we reflect on technological assessment more generally, we may come to see that just because we cannot predict future consequences does not necessarily mean there is a “gap” in forward-looking obligation. Read more »

Monday, June 7, 2021

Can Technology Undermine Character?

by Fabio Tollon

What is “character”? In general, we might say that the character of something is what distinguishes it from other things. Sedimentary rocks have a certain “character” that distinguishes them from igneous rocks, for example. Rocks, however, do not have personality (so far as I can tell). Human beings have personality, and it is thought that there is some connection between personality and character, but my interest does not lie in how exactly this relation works, as here I will be concerned with character specifically. To that end, we might specify that character is a collection of properties that distinguishes one individual person from another. When we say “she is wise” we are saying something about her personality, but we are also judging her character: we are, in effect, claiming that we admire her, due to some feature of her character. There could be myriad reasons for this. Perhaps she takes a keen interest in the world around her, has well-formed beliefs, reads many books, etc. In the case where she indeed displays the virtues associated with being wise, we would say that our assessment of her character is fitting, that is, such an assessment correctly identifies the kinds of things she stands for and values. The question I want to consider is whether the value-laden nature of technology undermines our ability to make such character assessments. Read more »

Monday, May 10, 2021

Is Tesla the Future of the Auto Industry?

by Fabio Tollon

Tesla Model S

Elon Musk. Either you love him or you love to hate him. He is glorified by some as a demi-god who will lead humanity to the stars (because if there’s one thing we need, it’s more planets to plunder) and vilified by others as a Silicon Valley hack who is as hypocritical as he is wealthy (very). When one is confronted by such contradictory and binary views the natural intuition is to take a step back and assess the available evidence. Usually this leads to a more nuanced understanding of the subject matter, often resulting in a less binary, and somewhat more coherent narrative. Usually.

The idea to write something about Musk was the result of the reality bending adventure that was Edward Niedermeyer’s Ludicrous: The Unvarnished Story of Tesla Motors.

Let us take a look at the basics. Musk is a Silicon Valley entrepreneur who made a fortune by helping to found PayPal. Using the capital gained from this venture, he invested $30 million into Tesla Motors, and became chairman of its board of directors in 2004. He also eventually ousted the company’s founders, Martin Eberhard and Marc Tarpenning. He is currently CEO of Tesla, Inc. (the name was officially changed from Tesla Motors to Tesla in 2017) and is in regular competition with human rights champion Jeff Bezos for the glamorous title of “world’s most successful hoarder of capital”. I don’t want to spend too much time on the psychology of Elon Musk, as Nathan Robinson has already done a fine job in this regard. Rather, I want to focus on how Tesla is not the market disrupting company many think it is. Here I will be concerned with the mismatch between Silicon Valley’s software driven innovation versus the kind of innovation that exists in the auto industry. Read more »

Monday, March 15, 2021

More Than Just Design: Affordances as Embodying Value in Technological Artifacts

by Fabio Tollon

It is natural to assume that technological artifacts have instrumental value. That is, the value of a given technology lies in the various ways in which we can use it, no more, and no less. For example, the value of a hammer lies in our ability to make use of it to hit nails into things. Cars are valuable insofar as we can use them to get from A to B with the bare minimum of physical exertion. This way of viewing technology has immense intuitive appeal, but I think it is ultimately unconvincing. More specifically, I want to argue that technological artifacts are capable of embodying value. Some argue that this value is to be accounted for in terms of the designed properties of the artifact, but I will take a different approach. I will suggest that artifacts can come to embody values based on their affordances.

Before doing so, however, I need to convince you that the instrumental view of technology is wrong. While some technological artifacts are perhaps merely instrumentally valuable, there are others that are clearly not so. There are two ways to see this. First, just reflect on all the ways in which technologies are not just tools waiting to be used by us but are rather mediators in our experience of reality. Technological artifacts are no longer simply “out there” waiting to be used but are rather part of who we are (or at least, who we are becoming). Wearable technology (such as fitness trackers or smart watches) provides us with a stream of biometric information. This information changes the way in which we experience ourselves and the world around us. Bombarded with this information, we might use such technology to peer pressure ourselves into exercising (Apple allows you to get updates, beamed directly to your watch, of when your friends exercise; it is an open question whether this will encourage resentment from those who see their friends have run a marathon while they spent the day on the couch eating Ritter Sport), or we might use it to stay up to date with the latest news (by enabling smart notifications). In either case, the point is that these technologies do not merely disclose the world “as it is” to us, but rather open up new aspects of the world, and thus come to mediate our experiences. Read more »

GPT-3 Understands Nothing

by Fabio Tollon

It is becoming increasingly common to talk about technological systems in agential terms. We routinely hear about facial recognition algorithms that can identify individuals, large language models (such as GPT-3) that can produce text, and self-driving cars that can, well, drive. Recently, Forbes magazine even awarded GPT-3 “person” of the year for 2020. In this piece I’d like to take some time to reflect on GPT-3. Specifically, I’d like to push back against the narrative that GPT-3 somehow ushers in a new age of artificial intelligence.

GPT-3 (Generative Pre-trained Transformer 3) is a third-generation, autoregressive language model. It makes use of deep learning to produce human-like text, completing sequences of words (or code, or other data) after being fed an initial “prompt”. The language model itself was trained on Microsoft’s Azure supercomputer, uses 175 billion parameters (its predecessor used a mere 1.5 billion) and makes use of unlabeled datasets (such as Wikipedia). This training isn’t cheap, with a price tag of $12 million. Once trained, the system can be used in a wide array of contexts, from language translation to summarization to question answering.
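The autoregressive prompt-completion described above can be sketched with a toy stand-in. The little bigram model below is emphatically not GPT-3’s machinery (the corpus, the `complete` helper, and greedy decoding are all illustrative assumptions); it only shows the loop that “autoregressive” names: predict the next token from the tokens so far, append it, repeat.

```python
# Toy autoregressive "language model": a bigram word model stands in
# for GPT-3's 175-billion-parameter transformer.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, n_tokens=5):
    """Extend the prompt one token at a time (greedy decoding)."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        followers = bigrams.get(tokens[-1])
        if not followers:  # nothing ever followed this word in training
            break
        # Append the most frequent continuation seen in the corpus.
        tokens.append(followers.most_common(1)[0][0])
    return " ".join(tokens)

print(complete("the cat"))
```

A real model replaces the raw counts with learned probabilities over all preceding context and typically samples from that distribution rather than always taking the single most likely word, but the generate-one-token-then-recurse structure is the same.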

Most of you will recall the fanfare that surrounded The Guardian’s publication of an article that was written by GPT-3. Many people were astounded at the text that was produced, and indeed, this speaks to the remarkable effectiveness of this particular computational system (or perhaps it speaks more to our willingness to project understanding where there might be none, but more on this later). How GPT-3 produced this particular text is relatively simple. Basically, it takes in a query and then attempts to offer relevant answers using the massive amounts of data at its disposal to do so. How different this is, in kind, from what Google’s search engine does is debatable. In the case of Google, you wouldn’t think that it “understands” your searches. With GPT-3, however, people seemed to get the impression that it really did understand the queries, and that its answers, therefore, were a result of this supposed understanding. This of course lends far more credence to its responses, as it is natural to think that someone who understands a given topic is better placed to answer questions about that topic. To believe this in the case of GPT-3 is not just bad science fiction, it’s pure fantasy. Let me elaborate. Read more »

Monday, January 18, 2021

Towards Responsible Research and Innovation

by Fabio Tollon

In the media it is relatively easy to find examples of new technologies that are going to “revolutionize” this or that industry. Self-driving cars will change the way we travel and mitigate climate change, genetic engineering will allow for designer babies and prevent disease, superintelligent AI will turn the earth into an intergalactic human zoo. As a reader, you might be forgiven for being in a constant state of bewilderment as to why we do not currently live in a communist utopia (or why we are not already in cages). We are incessantly badgered with lists of innovative technologies that are going to uproot the way we live, and the narrative behind these innovations is overwhelmingly positive (call this a “pro-innovation bias”). What is often missing in such “debates”, however, is a critical voice. There is a sense in which we treat “innovation” as a good in itself, but it is important that we innovate responsibly. Or so I will argue. Read more »

Monday, October 26, 2020

What John von Neumann really did at Los Alamos

by Ashutosh Jogalekar

John von Neumann (Image: Life Magazine)

During a wartime visit to England in early 1943, John von Neumann wrote a letter to his fellow mathematician Oswald Veblen at the Institute for Advanced Study in Princeton, saying:

“I think I have learned a great deal of experimental physics here, particularly of the gas dynamical variety, and that I shall return a better and impurer man. I have also developed an obscene interest in computational techniques…”

This seemingly mundane communication was to foreshadow a decisive effect on the development of two overwhelmingly important aspects of 20th and 21st century technology – the development of computing and the development of nuclear weapons.

Johnny von Neumann was the multifaceted intellectual diamond of the 20th century. He contributed so many seminal ideas to so many fields so quickly that it would be impossible for any one person to summarize, let alone understand them. He may have been the last universalist in mathematics, having almost complete command of both pure and applied mathematics. But he didn’t stop there. After making fundamental contributions to operator algebra, set theory and the foundations of mathematics, he revolutionized at least two different and disparate fields – economics and computer science – and made contributions to a dozen others, each of which would have been important enough to enshrine his name in scientific history.

But at the end of his relatively short life which was cut down cruelly by cancer, von Neumann had acquired another identity – that of an American patriot who had done more than almost anyone else to make sure that his country was well-defended and ahead of the Soviet Union in the rapidly heating Cold War. Like most other contributions of this sort, this one had a distinctly Faustian gleam to it, bringing both glory and woe to humanity’s experiments in self-elevation and self-destruction. Read more »

Can We Ensure Fairness with Digital Contact Tracing?

by Fabio Tollon

COVID-19 has forced populations into lockdown, seen the restriction of rights, and caused widespread economic, social, and psychological harm. With only 11 countries having no confirmed cases of COVID-19 (as of this writing), we are globally beyond strategies that aim solely at containment. Most resources are now being directed at mitigation strategies. That is, strategies that aim to curtail how quickly the virus spreads. These strategies (such as physical and social distancing, increased hand-washing, mask-wearing, and proper respiratory etiquette) have been effective in delaying infection rates, and therefore reducing strain on healthcare workers and facilities. There has also been a wave of techno-solutionism (not unusual in times of crisis), which often comes with the unjustified belief that technological solutions provide the best (and sometimes only) ways to deal with the crisis in question.

Such perspectives, in the words of Michael Klenk, ask “what technology”, instead of asking “why technology”, and therefore run the risk of creating more problems than they solve. Klenk argues that such a focus is too narrow: it starts with the presumption that there should be technological solutions to our problems, and then stamps some ethics on afterwards to try and constrain problematic developments that may occur with the technology. This gets things exactly backwards. What we should instead be doing is asking whether we need a given technology, and then proceed from there. It is with this critical perspective in mind that I will investigate a new technological kid on the block: digital contact tracing. Basically, its implementation involves installing a smartphone app that, via Bluetooth, registers and stores the individual’s contacts. Should a user become infected, they can update their app with this information, which will then automatically ping all of their registered contacts. While much attention has been focused on privacy and trust concerns, this will not be my focus (see here for a good example of an analysis that looks specifically at these factors, drawn up by the team at Ethical Intelligence). I will instead focus on the question of whether digital contact tracing is fair. Read more »
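Before asking whether digital contact tracing is fair, it helps to see how little machinery the decentralized scheme described above actually requires. The sketch below is hypothetical and simplified (the class and method names are invented, not any real app’s API): phones broadcast rotating random tokens over Bluetooth, remember the tokens they hear, and an infected user publishes their own tokens so that everyone else can check for a match locally, on their own device.

```python
import secrets

class Phone:
    """Hypothetical sketch of a decentralized contact-tracing app."""

    def __init__(self):
        self.my_tokens = []   # random tokens this phone has broadcast
        self.heard = set()    # tokens received from nearby phones

    def broadcast(self):
        # Emit a fresh random identifier; rotation prevents long-term tracking.
        token = secrets.token_hex(8)
        self.my_tokens.append(token)
        return token

    def receive(self, token):
        # Called when a nearby phone's Bluetooth beacon is picked up.
        self.heard.add(token)

    def exposed(self, published_infected_tokens):
        # Matching happens on-device; no central server learns who met whom.
        return bool(self.heard & set(published_infected_tokens))

alice, bob, carol = Phone(), Phone(), Phone()
bob.receive(alice.broadcast())         # Alice and Bob were near each other
# Alice tests positive and publishes her tokens:
print(bob.exposed(alice.my_tokens))    # prints True: Bob is flagged
print(carol.exposed(alice.my_tokens))  # prints False: Carol never met Alice
```

Even in this bare-bones form, the fairness question is visible: only people who own a compatible smartphone, keep Bluetooth on, and opt in ever appear in anyone’s `heard` set at all.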

Monday, October 5, 2020

Analogia: A Conversation with George Dyson

by Ashutosh Jogalekar

George Dyson is a historian of science and technology who has written books about topics ranging from the building of a native kayak (“Baidarka”) to the building of a spaceship powered by nuclear bombs (“Project Orion”). He is the author of the bestselling books “Turing’s Cathedral” and “Darwin Among the Machines” which explore the multifaceted ramifications of intelligence, both natural and artificial. George is also the son of the late physicist, mathematician and writer Freeman Dyson, a friend whose wisdom and thinking we both miss.

George’s latest book is called “Analogia: The Emergence of Technology Beyond Programmable Human Control”. It is in part a fascinating and wonderfully eclectic foray into the history of diverse technological innovations leading to the promises and perils of AI, from the communications network that allowed the United States army to gain control over the Apache Indians to the invention of the vacuum tube to the resurrection of analog computing. It is also a deep personal exploration of George’s own background in which he lived in a treehouse and gained mastery over the ancient art of Aleut baidarka building. I am very pleased to speak with George about these ruminations. I would highly recommend that readers listen to the entire conversation, but if you want to jump to snippets of specific topics, you can click on the timestamps below, after the video.

7:51 We talk about lost technological knowledge. George makes the point that it’s really the details that matter, and through the gradual extinction of practitioners and practice we stand in real danger of losing knowledge that can elevate humanity. Whether it’s the art of building native kayaks or building nuclear bombs for peaceful purposes, we need ways to preserve the details of knowledge of technology.

12:49 Digital versus analog computing. The distinction is fuzzy: As George says, “You can have digital computers made out of wood and you can have analog computers made out of silicon.” We talk about how digital computing became so popular in part because it was so cheap and made so much money. Ironically, we are now witnessing the growth of giant analog network systems built on a digital substrate.

21:22 We talk about Leo Szilard, the pioneering, far-sighted physicist who was the first to think of a nuclear chain reaction, while crossing a London street at a traffic light in 1933. Szilard wrote a novel titled “The Voice of the Dolphins” which describes a group of dolphins trying to rescue humanity from its own ill-conceived inventions, an oddly appropriate metaphor for our own age. George talks about the formative influence of Trudy Szilard, Leo’s wife, who used to snatch him out of boring school lessons and take him to lunch, where she would have a pink martini and they would talk. Read more »

Monday, June 8, 2020

Von Neumann in 1955 and 2020: Musings of a cheerful pessimist on technological survival

by Ashutosh Jogalekar

Johnny von Neumann enjoying some of the lighter aspects of technology. The cap lights up when its wearer blows into the tube.

“All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary “first guesses” concerning them are wrong.” – John von Neumann

Is the coronavirus crisis political or technological? All present analysis would seem to say that this pandemic was a result of gross political incompetence, lack of preparedness and impulsive responses by world leaders and governments. But this view would be narrow because it would privilege the proximate cause over the ultimate one. The true, deep cause underlying the pandemic is technological. The crisis arose in a hyperconnected world in which the global movement of information, physical goods and people across international borders far outpaced human reaction times. For all our skill in creating these technologies, we did not equip ourselves to manage the network effects and sudden failures they create in social, economic and political systems. An even older technology, the transfer of genetic information between disparate species, was what enabled the whole crisis in the first place.

This privileging of political forces over technological ones is typical of the mistakes that we often make in seeking the root cause of problems. Political causes, greatly amplified by the twenty-four-hour news cycle and social media, loom large and may even be important in the short term, but there is little doubt that the slow but sure grind of technological change that penetrates deeper and deeper into social and individual choices will be responsible for most of the important transformations we face during our lifetimes and beyond. On scales of a hundred to five hundred years, there is little doubt that science and technology rather than any political or social event cause the biggest changes in the fortunes of nations and individuals: as Richard Feynman once put it, a hundred years from now the American Civil War will pale into provincial insignificance compared to that other development from the 1860s – the crafting of the basic equations of electromagnetism by James Clerk Maxwell. The former led to a new social contract for the United States; the latter underpins all of modern civilization – including politics, war and peace.

The question, therefore, is not whether we can survive this or that political party or president. The question is, can we survive technology? Read more »