Baker/No-Baker, Thinker/No-Thinker

by Mark R. DeLong

[Image: A Modern Bakery – the Work of Wonder Bakery, Wood Green, London, England, UK, 1944. An English baker pours dough from a large metal bowl, about 2 meters in diameter, tilting on a rack designed to ease moving the bowl and pouring its contents.]

“Computerized baking has profoundly changed the balletic physical activities of the shop floor,” Richard Sennett wrote about a Boston bakery he had visited and much later revisited. The old days (in the early 1970s) featured “balletic” ethnic Greek bakers who thrust their hands into dough and water and baked by sight and smell. But in the 1990s, Sennett’s Boston bakers “baked” bread with the click of a mouse.1 “Now the bakers make no physical contact with the materials or the loaves of bread, monitoring the entire process via on-screen icons which depict, for instance, images of bread color derived from data about the temperature and baking time of the ovens; few bakers actually see the loaves of bread they make.” He concludes: “As a result of working in this way, the bakers now no longer actually know how to bake bread.” [My emphasis.]

The stark contrast between Sennett’s visits, which I do not think he anticipated when he first visited in the 1970s, is stunning, and at the center of the change are automation, changes in the ownership of the bakery, and the reorganization of work that resulted. Technological change and organizational change—interlocked and mutually supportive, if not co-determined—reconfigured the meaning of work and the human skills that “baking” required, making the work itself stupefyingly illegible to the workers even though their tasks were less physically demanding than they had been 25 years before.

Sennett’s account of the work of baking focuses on the “personal consequences” of work in the then-new circumstances of the “new capitalism.” But I find the role of technology in the 1990s, when Microsoft Windows was remaking work life, a particularly important feature of the story. Along with relentless consolidation of business ownership, computer technologies reset the rules of labor processes and re-centered skills. Of course, the story is not new; the interplay of technology and work has long pressed human labor into new forms and configurations, allowing certain freedoms and delights along with new oppressions and horrors—one hopes providing more delight than horror.

Artificial intelligence will be no different, except that the panorama of action will shift. The shop floor will certainly see changes, but other changes, less focused on place, will also come about. For the Boston bakers, if they’re still at it, it may mean fewer, if any, clicks on icons, though those who “bake” may still have to empty trash cans of discarded burnt loaves (which Sennett, in the 1990s, considered “apt symbols of what has happened to the art of baking”).

In the past few weeks, researchers at Microsoft and Carnegie Mellon University reported the results of a study that laid out some markers of how the use of AI influences “critical thinking” or, as I wish the authors had phrased it, how AI influences those whose jobs require thinking critically. Other recent studies have received less attention, though they, too, have zeroed in on the relationship between AI use and people’s critical thinking. This study, coming from a leader in AI, drew special attention.

A shift of emphasis in critical thinking

“The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers” (Lee, H-P, et al.) appeared online as a pre-print of a presentation planned for this year’s CHI Conference on Human Factors in Computing Systems and was widely reported in ways that Microsoft generally wouldn’t welcome, since the company is looking to AI for future profits and has invested billions of dollars in AI development and products. Hot takes tended to mildly emphasize the negative: “While the researchers hedge against saying that generative AI tools make you dumber, the study shows that overreliance on generative AI tools can weaken our capacity for independent problem-solving,” one brief post concluded.

Hot takes are often misleading, of course, simply because they lack nuance—perfect for cocktail party talk, dinner table sighs, or hand-wavy dismissals. In the article, the Microsoft and Carnegie Mellon team laid out their design and methods (with rationale), showed their data and the instruments they used (the complete text version of their survey is included in an appendix), and drew conclusions that “highlight how the use of GenAI [i.e., generative AI] tools creates new challenges for critical thinking.” Using a well-established definition of critical thinking from Benjamin Bloom, they took on two research questions to address empirically: “When and how do knowledge workers perceive the enaction of critical thinking when using GenAI?” and “When and why do knowledge workers perceive increased/decreased effort for critical thinking due to GenAI?”

Note that the researchers pose the questions to learn about subjects’ perceptions of the “enaction” and the “effort” of critical thinking. That is a subtle but important focus. Watching something happen is different from actually doing it, and in the case of this study it is wise to remember that its findings concern the perceptions of its subjects. That focus in itself aims the study toward identifying correlation rather than nailing down causation—that is, why and how an effect of generative AI occurs. In large measure, the study maps the relationships among subjects’ attitudes toward generative AI, the depth of their experience and confidence in work tasks, the immediate circumstances of tasks, and their kinds of interactions with GenAI.

That “generative AI tools make you dumber” is not what the researchers conclude—or at least not in every circumstance. If you have confidence in AI, you’re less likely to kick your own critical thinking into action, and the opposite is true when you have confidence in your own ability: “In line with recent projections that more accessible GenAI tools may exacerbate the risks of technology over-reliance,” the researchers report, “our results provide empirical evidence that knowledge workers’ confidence in AI doing the tasks indeed negatively correlates with their enaction of critical thinking (β = -0.69, p < 0.001).” And, conversely, “knowledge workers’ confidence in doing the task themselves (β = 0.26, p = 0.026) and evaluating AI responses (β = 0.31, p = 0.046) both positively correlate with their enaction of critical thinking.”
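For readers who want to see the mechanics behind numbers like these, here is a minimal sketch in Python—my own illustration, not the authors’ code or data—of how a simple regression can recover coefficients of that size from survey-style responses. The variable names and the simulated data are assumptions for demonstration only; the paper’s actual statistical model is more elaborate.

import numpy as np
import statsmodels.api as sm

# Hypothetical illustration only: simulated survey-style data, not the
# study's dataset. Variable names are invented for the demonstration.
rng = np.random.default_rng(42)
n = 319  # the paper surveyed 319 knowledge workers

# Simulated predictors: confidence in the AI and confidence in oneself
conf_ai = rng.normal(size=n)
conf_self = rng.normal(size=n)

# Simulated outcome: perceived enaction of critical thinking, constructed
# so that confidence in AI depresses it and self-confidence raises it
enaction = -0.69 * conf_ai + 0.26 * conf_self + rng.normal(size=n)

# Ordinary least squares; the fitted coefficients should land near -0.69
# and 0.26, with small p-values at this sample size
X = sm.add_constant(np.column_stack([conf_ai, conf_self]))
print(sm.OLS(enaction, X).fit().summary())

The point of the sketch is only that such coefficients express the direction and strength of association in the survey responses, not a causal mechanism—which is exactly the correlation-versus-causation caution noted above.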

As for the question of effort, the researchers found something similar: “Confidence in AI is associated with reduced critical thinking effort, while self-confidence is associated with increased critical thinking effort.” Of course, that makes sense. If you don’t even start (i.e., “enact”) thinking critically because you’re just fine with what the AI has cooked up, the whole process is pretty effortless. But if you are confident of your own abilities, you’re likely to check things out, setting in motion the processes of critical thinking about the AI outputs, which, interestingly, the critical thinking subjects perceived as more difficult or at least at a higher level of abstraction on Bloom’s taxonomy.

But, the researchers also found that, when there’s a generative AI in the mix, processes of critical thinking differ from regular ol’ human critical thinking.

Section 6 of the paper—its “discussion” section—explores some of the implications of thinking critically “with” generative AI. The researchers identify “shifts in critical thinking due to generative AI” (section 6.2). The “shift” acknowledges that critical thinking with generative AI does not fit the traditional “collaborator” relationship. The researchers do not characterize the relationship so baldly, but unlike the kind of thinking that people share in conversation, exchange, and even disagreement, one party in the new AI relationship “thinks” and the other does not (even though it appears to). You know which is which, of course.2

The researchers propose “stewardship” as the “metaphor” for the kind of thinking relationship that people using generative AI have with the AI:

With GenAI, knowledge workers also shift from task execution to oversight, requiring them to guide and monitor AI to produce high-quality outputs—a role we describe as “stewardship.” It is not that execution has disappeared altogether, nor is having high level oversight on a task an entirely new cognitive role, but there is a shift from the former to the latter. Unlike in human-human collaboration, in a human-AI “collaboration”, the responsibility and accountability for the work still resides with the human user despite the labour of material production being delegated to the GenAI tool, which makes stewardship strike us as a more appropriate metaphor for what the human user is doing, than teammate, collaborator, or supervisor.

In addition to adapting the role of steward to AI, the researchers identify areas to develop in knowledge workers: “Training programs should emphasise the importance of cross-referencing AI outputs, assessing the relevance and applicability of AI-generated content, and continuously refining and guiding AI processes.” These tasks do make up “oversight,” but they also seem to identify the generative AI as originator—through its “labour of material production”—of human tasks, mostly of follow-up: confirmation, certainly, but also higher-level assessment and evaluation of what the AI has wrought.

Generative AI, as automation, restrains skill

Erik Hoel pointed out in his consideration of the study that its “data fits the intuitive idea that positive use of AI tools is when they shift cognitive tasks upward in terms of their level of abstraction”—the higher or more expert level of thinking in Bloom’s taxonomy that the researchers adopted as the model of critical thinking. Such elevation of critical thinking is a good thing, Hoel says, because it can assist critical thinkers. But that is only part of the story. Hoel is concerned about when people should begin to use generative AI. Is generative AI a tool for the development of critical thinking skills, or a tool best used by those who have already developed those skills?

Hoel believes that “parents (and schools) will need to be careful about whether kids (and students) rely too much on AI early on. I personally am not worried about a graduate student using ChatGPT to code up eye-catching figures to show off their gathered data. . . .” But, he writes, “I am, however, very worried about a 7th grader using AI to do their homework, and then, furthermore, coming to it with questions they should be thinking through themselves, because inevitably those questions are going to be about more and more minor things. People already worry enough about a generation of ‘iPad kids.’ I don’t think we want to worry about a generation of brain-drained ‘meat puppets’ next.”

Technology has long changed the course of the development and the utility of human skill. Recall Plato’s critique of writing, which he claimed undermined human memory and thought. Within the past half-century, Harry Braverman’s monumental work on the labor process traced the degradation of skill to “the incessant breakdown of labor processes into simplified operations taught to workers as tasks. This leads to the conversion of the greatest possible mass of labor into work of the most elemental form, labor from which all conceptual elements have been removed and along with them most of the skill, knowledge, and understanding of production processes.”

During the industry push to automate manufacturing in the mid-twentieth century, James R. Bright similarly asked, “In what way does machinery supplement man’s muscles, his mental processes, his judgment, and his degree of control?” And from that question he developed seventeen levels of mechanization that focused on automation in manufacturing and categorized the ceding of human control to the machine—and the contraction of human skill. For both Bright and Braverman, automation meant higher productivity—whether in metal widgets or data-entry punch cards—and diminishing skill requirements.

As Richard Sennett considered his two visits to the Boston bakery, he noted that “contemporary capitalism’s new tool [the already ubiquitous computer] is a far more intelligent machine than the mechanical devices of the past. Its own intelligence can substitute for that of its users, and thus take [Adam] Smith’s nightmare of mindless labor to a new extremes [sic].” In addition to experiencing tedium and mindlessness, the “bakers” of the 1990s were flummoxed and morose when the machines went awry. “I was fortunate to be in the bakery when one of the dough-kneading machines blew up,” Sennett wrote. The electricity was shut off, technicians were called, and the staff milled around helplessly.

The detachment and confusion I found among the bakers in Boston is a response to these peculiar properties of computer use in a flexible workspace. It wouldn’t be news to any of these men and women that resistance and difficulty are important sources of mental stimulation, that when we struggle to know something, we know it well. But these truths have no home. Difficulty and flexibility are contraries in the bakery’s ordinary productive process. At moments of breakdown, the bakers suddenly found themselves shut out from dealing with their work—and this rebounded to their sense of working self.

With the rise of generative AI in the workplace and its possible “shifting” of human critical thinking skills, I wonder if we are heading toward a future like that of the Boston bakers a quarter-century ago, who produced loaves without knowing how to bake. Perhaps that future circumstance will rebound more heavily on our sense of working self, for the skill we will have forfeited will be our ability to think.


For the bibliographically curious:

Microsoft’s critical thinking and AI paper, freely downloadable: Lee, Hao-Ping (Hank), Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers,” 2025. https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/.

Sennett, Richard. The Corrosion of Character: The Personal Consequences of Work in the New Capitalism. 1st ed. New York: Norton, 1998. Sennett has been remarkably productive. For more about the bakery mentioned in this article, see the chapter named “Illegible.” His earlier work with Jonathan Cobb is The Hidden Injuries of Class. New York: Vintage Books, 1973. http://archive.org/details/hiddeninjuriesof00sennrich.

Hoel places the Microsoft/Carnegie Mellon study into a larger context, including its implications for parenting and education: Hoel, Erik. “brAIn drAIn,” March 16, 2022. https://www.theintrinsicperspective.com/p/brain-drain.

The study of the labor process and its transformations by organizational restructuring and automation is worth looking at closely, since it might prod some new thinking about how to manage emerging technologies. Harry Braverman was a Marxist historian whose work is listed at the top of a (very capitalist) Forbes review of the most important books on labor: Braverman, Harry. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. 25th anniversary ed. New York: Monthly Review Press, 1998.

James Bright’s classification of “levels of mechanization” is also worthy of a second look (and I don’t think it’s been looked at much lately!): Bright, James R. “How to Evaluate Automation.” Harvard Business Review 33, no. 4 (August 1955): 101–11; “Thinking Ahead.” Harvard Business Review 33, no. 6 (December 1955): 27–166; and “Does Automation Raise Skill Requirements?” Harvard Business Review 36, no. 4 (August 1958): 85–98.


Footnotes

1. Richard Sennett reported on visits he made to the bakery about 25 years apart. The first visits took place when he and Jonathan Cobb were working on The Hidden Injuries of Class (Knopf, 1972), though Sennett and Cobb do not specifically recount the visits in their book. The second visits took place when Sennett was working on The Corrosion of Character: The Personal Consequences of Work in the New Capitalism (W.W. Norton, 1998).
2. Professor Harry Frankfurt’s revered notion of “bullshit” complicates matters even more, since people have noted that LLMs like ChatGPT have exquisitely automated the production of “bullshit.” Producers of “bullshit” pay no heed to truth or falsehood. Even the concepts of truth and falsehood are alien to the machinery of LLMs. It’s valid to question whether a human critical thinker can use a producer of bullshit as any sort of partner in thinking.