A Trustworthy Remedy in the Google Ad Tech Trial

by Jerry Cayford

From Plaintiffs’ Demonstrative C, DOJ Remedies Hearing Exhibits

The Google advertising technology trial is a very big deal. While we are waiting for the final decision from Judge Leonie Brinkema of the U.S. District Court for the Eastern District of Virginia, I want to present some thoughts on the least resolved of the case’s many issues, the hard parts the judge will be pondering. Actually, one hard part: trust. But I need to tell you a little about the case to make the trust issue clear. I’ll give a brief introduction, but will refer you for more detail to the daily reports I wrote for Big Tech on Trial (BTOT) last September and October.

The ad tech trial is the third of three cases in which Google was accused of illegal monopolization, all of which Google lost. The first (brought by Epic Games) was for monopolizing the Android app distribution market. The second was for monopolizing internet search. This one, the third, found Google liable for a series of dirty tricks used to monopolize two advertising technology markets: publisher ad servers and advertising exchanges. That was the liability phase; what we’re waiting for now is the conclusion of the remedy phase of the trial, when we’ll find out what price Google will have to pay for its malfeasance.

I intended to describe for you how a trial on obscure technical issues could possibly be so important, but found I couldn’t. I kept running into unknowably vast questions. This case is about advertising technology, and advertising funds our whole digital world. So it is about who controls the flow of money to businesses. But it is also about the big data and technologies that enable advertising to target you so well, so privacy and autonomy, efficiency and manipulation, democracy and political power are all implicated. Then there’s artificial intelligence; Google’s monopoly on the technologies in this case gives it a big head start in monopolizing AI in coming years. And the rule of law: are the biggest corporations too powerful for law to constrain? This case, perhaps more than any other of our time, will guide the cases against corporate crime lined up behind it. More big issues start crowding in, and I can’t describe it all. So, I’m settling for the most general of summaries, and leaving it at that: the Google ad tech trial lies at the intersection of most of the forces shaping our society’s present and future. That’ll have to do because I’m moving on to the nuts and bolts. Read more »

Monday, June 20, 2022

Exorcising a New Machine

by David Kordahl

A.I.-generated image (from DALL-E Mini), given the text prompt, “computer with a halo, an angel, but digital”

Here’s a brief story about two friends of mine. Let’s call them A. Sociologist and A. Mathematician, pseudonyms that reflect both their professions and their roles in the story. A few years ago, A.S. and A.M. worked together on a research project. Naturally, A.S. developed the sociological theories for their project, and A.M. developed the mathematical models. Yet as the months passed, they found it difficult to agree on the basics. Each time A.M. showed A.S. his calculations, A.S. would immediately generate stories about them, spinning them as illustrations of social concepts he had just now developed. From A.S.’s point of view, of course, this was entirely justified, as the models existed to illustrate his sociological ideas. But from A.M.’s point of view, this pushed out far past science, into philosophy. Unable to agree on the meaning or purpose of their shared efforts, they eventually broke up.

This story was not newsworthy (it’d be more newsworthy if these emissaries of the “two cultures” had actually managed to get along), but I thought of it last week while I read another news story—that of the Google engineer who convinced himself a company chatbot was sentient.

Like the story of my two friends, this story was mostly about differing meanings and purposes. The subject of said meanings and purposes was a particular version of LaMDA (Language Models for Dialog Applications), which, to quote Google’s technical report, is a family of “language models specialized for dialog, which have up to 137 [billion] parameters and are pre-trained on 1.56 [trillion] words of public dialog data and web text.”

To put this another way, LaMDA models respond to text in a human-seeming way because they are created by feeding literal human conversations from online sources into a complex algorithm. The problem with such a training method is that humans online interact with various degrees of irony and/or contempt, which has required Google engineers to further train their models not to be assholes. Read more »
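How does that further training cash out in practice? Roughly speaking, the technical report describes generating several candidate replies and screening them with separately trained safety and quality classifiers before one is returned. Here is a much-simplified sketch of that pattern; the callables passed in are hypothetical stand-ins, not Google’s actual code.

```python
# A much-simplified sketch of the "generate, then filter" pattern: the base
# model proposes candidate replies, separately trained classifiers screen out
# unsafe ones, and the best remaining candidate is returned. The three
# callables are hypothetical stand-ins, not real Google APIs.

def respond(prompt, generate_candidates, safety_score, quality_score,
            n_candidates=16, safety_threshold=0.8):
    candidates = generate_candidates(prompt, n=n_candidates)
    safe = [c for c in candidates if safety_score(c) >= safety_threshold]
    if not safe:
        # refuse rather than risk returning a toxic reply
        return "I'd rather not talk about that."
    return max(safe, key=quality_score)

# Toy usage with stand-in scorers:
reply = respond("Tell me about yourself",
                generate_candidates=lambda p, n: [f"Reply {i} to: {p}" for i in range(n)],
                safety_score=lambda c: 1.0,
                quality_score=len)
```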

Monday, September 28, 2020

From Nudge to Hypernudge: Big Data and Human Autonomy

by Fabio Tollon

We produce data all the time. This is not something new. Whenever a human being performs an action in the presence of another, there is a sense in which some new data is created. We learn more about people as we spend more time with them. We can observe them, and form models in our minds about why they do what they do, and the possible reasons they might have for doing so. With this data we might even gather new information about that person. Information, simply, is processed data, fit for use. With this information we might even start to predict their behaviour. On an inter-personal level this is hardly problematic. I might learn over time that my roommate really enjoys tea in the afternoon. Based on this data, I can predict that at three o’clock he will want tea, and I can make it for him. This satisfies his preferences and lets me off the hook for not doing the dishes.
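A toy sketch makes that move from raw data to prediction concrete. The tea log below is invented, but the logic of turning a record of observations into a forecast is the same one that, scaled up enormously, powers the systems discussed next.

```python
# A toy version of "data -> information -> prediction": a log of observed
# actions (invented here), a pattern mined from it, and a forecast.
from collections import Counter
from datetime import datetime

tea_log = [
    "2020-09-21 15:02", "2020-09-22 14:58", "2020-09-23 15:05",
    "2020-09-24 15:01", "2020-09-25 16:40",
]

hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in tea_log)
likely_hour, count = hours.most_common(1)[0]

print(f"Put the kettle on around {likely_hour}:00 "
      f"(pattern held in {count}/{len(tea_log)} observations)")
```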

The fact that we produce data, and can use it for our own purposes, is therefore not a novel or necessarily controversial claim. Digital technologies (such as Facebook, Google, etc.), however, complicate the simplistic model outlined above. These technologies are capable of tracking and storing our behaviour (to varying degrees of precision, but they are getting much better) and using this data to influence our decisions. “Big Data” refers to this constellation of properties: it is the process of taking massive amounts of data and using computational power to extract meaningful patterns. Significantly, what differentiates Big Data from traditional data analysis is that the patterns extracted would have remained opaque without the resources provided by electronically powered systems. Big Data could therefore present a serious challenge to human decision-making. If the patterns extracted from the data we are producing are used in malicious ways, this could result in a decreased capacity for us to exercise our individual autonomy. But how might such data be used to influence our behaviour at all? To get a handle on this, we first need to understand the common cognitive biases and heuristics that we as humans display in a variety of informational contexts. Read more »

Monday, April 13, 2020

Smitten by Fitbit

by Carol Westbrook

If all the data from the 70 million Fitbits and other wearables in the U.S. were analyzed for clusters of flu-like symptoms, we might have known about the coronavirus epidemic, traced the contacts, and perhaps slowed its spread, even before widespread testing was available. This is the power of wearable health technology.

Did you know your Fitbit could do that?
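Probably not, and to be clear, nothing like the analysis below has been published by Fitbit. It is only a toy sketch of how such syndromic surveillance could work: flag a day when an unusually large share of users show resting heart rates well above their own baselines. The data and thresholds are invented.

```python
# Invented data and thresholds, purely to illustrate the idea: flag a day as
# anomalous when an unusually large share of users show a resting heart rate
# well above their own recent baseline (a crude proxy for flu-like illness).
from statistics import mean

def elevated(history, today, bpm_margin=5):
    """True if today's resting heart rate sits well above this user's baseline."""
    return today > mean(history) + bpm_margin

def region_alert(users, alert_fraction=0.08):
    """users: list of (recent_history, todays_resting_hr) pairs for one region."""
    share = sum(elevated(h, t) for h, t in users) / len(users)
    return share >= alert_fraction, share

users = [([62, 63, 61, 62], 70), ([58, 57, 59, 58], 59), ([71, 70, 72, 71], 80)]
alert, share = region_alert(users)
print(f"Share of users elevated: {share:.0%} -> alert = {alert}")
```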

What sparked my interest in Fitbit health trackers was the recent news that Google acquired Fitbit, Inc., for $2.1 billion! I thought that wearables were old news, just another fad in consumer electronics that had already passed its time. What value did Google see in wearables?

Wearables are devices used to improve fitness and overall health by promoting and increasing activity. These small electronic devices are worn as wristbands or watches that detect and analyze some of the body’s physical parameters such as heart rate, motion, and GPS location; some can measure temperature and oxygen level, or even generate an electrocardiogram. What is unique about wearables is that they transmit this data to the wearer’s cell phone, and via the cell phone to the company’s secure database in the cloud. For example, the owner inputs height, weight, gender and age, and algorithms provide real-time distance and speed of a run, calories expended, heart rate, or even duration and quality of sleep. Fitness goals are set by the wearer or by default. The activities are tracked, and the program will send messages to the wearer about whether their goals were achieved, and prompts to surpass these goals. Fitness achievements can be shared with friends of your choice, or with Fitbit’s related partners, even without your express consent. Read more »
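The vendors’ actual formulas are proprietary, but the general recipe (combine the profile you typed in with the counts the sensors record) can be illustrated with common rule-of-thumb approximations; the constants below are illustrative, not Fitbit’s.

```python
# Rule-of-thumb approximations only; the constants are illustrative, not
# Fitbit's proprietary algorithms. Profile data (height, weight) plus a
# sensor count (steps) yield distance and a rough calorie estimate.

def stride_length_m(height_cm):
    return 0.415 * height_cm / 100        # common walking-stride estimate

def distance_km(steps, height_cm):
    return steps * stride_length_m(height_cm) / 1000

def walking_calories_kcal(dist_km, weight_kg):
    return 0.53 * weight_kg * dist_km     # crude net-energy approximation

steps, height_cm, weight_kg = 8500, 175, 70
dist = distance_km(steps, height_cm)
print(f"~{dist:.2f} km walked, ~{walking_calories_kcal(dist, weight_kg):.0f} kcal")
```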

Monday, September 17, 2012

Saintly Simulation

by Evan Selinger

My colleague Thomas Seager and I recently co-wrote “Digital Jiminy Crickets,” an article that proposed a provocative thought experiment. Imagine an app existed that could give you perfect moral advice on demand. Should you use it? Or, would outsourcing morality diminish our humanity? Our think piece merely raised the question, leaving the answer up to the reader. However, Noûs—a prestigious philosophy journal—published an article by Robert J. Howell that advances a strong position on the topic, “Google Morals, Virtue, and the Asymmetry of Deference.” To save you the trouble of getting a Ph.D. to read this fantastic, but highly technical piece, I’ll summarize the main points here.

It isn’t easy to be a good person. When facing a genuine moral dilemma, it can be hard to know how to proceed. One friend tells us that the right thing to do is stay, while another tells us to go. Both sides offer compelling reasons—perhaps reasons guided by conflicting but internally consistent moral theories, like utilitarianism and deontology. Overwhelmed by the seeming plausibility of each side, we end up unsure how to solve the riddle of The Clash.

Now, Howell isn’t a cyber utopian, and he certainly doesn’t claim technology will solve this problem any time soon, if ever. Moreover, Howell doesn’t say much about how to solve the debates over moral realism. Based on this article alone, we don’t know if he believes all moral dilemmas can be solved according to objective criteria. To determine if—as a matter of principle—deferring to a morally wise computer would upgrade our humanity, he asks us to imagine an app called Google Morals: “When faced with a moral quandary or deep ethical question we can type a query and the answer comes forthwith. Next time I am weighing the value of a tasty steak against the disvalue of animal suffering, I’ll know what to do. Never again will I be paralyzed by the prospect of pushing that fat man onto the trolley tracks to prevent five innocents from being killed. I’ll just Google it.”

Let’s imagine Google Morals is infallible, always truthful, and 100% hacker-proof. The government can’t mess with it to brainwash you. Friends can’t tamper with it to pull a prank. Rivals can’t adjust it to gain competitive advantage. Advertisers can’t tweak it to lull you into buying their products. Under these conditions, Google Morals is more trustworthy than the best rabbi or priest. Even so, Howell contends, depending on it is a bad idea.

Read more »