AI: Seven Ways to Challenge Philosophy

by Alexandre Gefen and Philippe Huneman

Philosophical reflection on artificial intelligence (AI) dates back to the early days of cybernetics, when Alan Turing’s famous proposals on the notion of intelligence in the 1950s revived old philosophical debates on man as system or machine and on the possibly mechanistic nature of cognition. With the contemporary advent of connectionist artificial intelligence, based on deep learning through artificial neural networks, and the prodigies of generative foundation models, however, AI raises questions across many spheres of philosophy. One of the most prominent examples is the philosophy of mind, which seeks to reflect on the benefits and limits of a computational approach to mind and consciousness. Other affected spheres are ethics, which is confronted with original questions about agency and responsibility; political philosophy, which is obliged to think afresh about augmented action and algorithmic governance; the philosophy of language; aesthetics, which has to take an interest in artistic productions emerging from the latent spaces of AIs, where its traditional categories malfunction; and metaphysics, which has to think afresh about the supposed human exception and the question of finitude.

In this text we want to indicate the new frontiers of philosophical speculation about artificial intelligence, now that GPT and other kinds of LLMs have gone public.

Knowing and Thinking: What Do AIs Tell Us?

While the currently established link between AI, cognitive science, and philosophy of mind is new, philosophically questioning artificial intelligence requires us to place many of its questions in a much longer history. The project of improving human life by automating cognitive tasks, as radically original as it has seemed to us since the arrival of ChatGPT, develops one of Aristotle’s old intuitions about automata that would perform our routine tasks and replace our slaves. The milestones are famous automata such as Vaucanson’s duck and the Mechanical Turk, right up to the exuberant robots of Boston Dynamics. To take just two examples, the congruence between the pragmatic philosophy of language proposed by Wittgenstein and the way Large Language Models (LLMs) synthesize usages to generate thought probabilistically is patent, as is the link between modern cybernetics, which separates software and hardware, and the idea that thought is realizable in multiple ways, a notion formulated in the 1960s by Hilary Putnam and Jerry Fodor (sometimes called functionalism). One of these realizations would be human thought, often located “inside of” the brain, while another would be a machine-implemented thought. Modern artificial intelligence has its roots in a long history of formalizing thought and logic. One of Leibniz’s significant theses is the idea that a judgment can be true simply in virtue of its form, creating the basis of the formalist trend in logic. This fundamental notion, which separates Leibniz from Descartes, as Yvon Belaval insisted long ago in Leibniz critique de Descartes (Paris: Gallimard, 1978), makes it conceivable that a non-living, non-knowing machine capable of producing accurate results could perhaps do a better job of producing true statements than we do. Alongside this thesis, the Hobbesian intuition that to think is, in a way, to compute, which Leibniz did not deny, gives substance to the project of studying thought outside of life: it would be enough for a machine to compute for it to think. We have been building computing machines since Pascal, followed in the 19th century by Ada Lovelace, the inventor of what would become modern algorithmic programming. If foundation models, i.e., the massive models of reality produced by deep learning, “think” by computing node weights updated according to input data (i.e., training texts) rather than behaving like computational machines computing judgments on symbols of the world in a language of their own, this mathematical foundation does not necessarily break with the Hobbesian thesis of the calculability of thought, which would then become probabilistic.
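To make this probabilistic regime concrete, here is a deliberately minimal sketch of our own (a bigram model, not an LLM, and the toy corpus is a stand-in for training texts): the program “writes” purely by computing, from usage statistics, which word tends to follow which.

```python
import random
from collections import defaultdict

# Toy corpus standing in for training texts.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which: the model's entire "knowledge"
# is a table of usage frequencies, nothing more.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to observed usage."""
    followers = counts[prev]
    if not followers:  # dead end in the toy corpus: restart
        return "the"
    words, freqs = zip(*followers.items())
    return random.choices(words, weights=freqs)[0]

# Generate a continuation word by word, probabilistically.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scaled from bigram counts over thirteen words to billions of weights over the web, this is, very roughly, the probabilistic computation of thought at issue here.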

Such a probabilistic approach leads us to think afresh about causality, in the context of philosophies of science confronted with the problem of the explainability of results obtained by the computational matrices of artificial intelligences, which are difficult to relate to humanly observable and falsifiable experiments. The relationship between statistics and causality is an old problem for scientific methodology. Sewall Wright and Ronald Fisher, two founders of population genetics, laid the foundations in the 1920s and 1930s for our current ways of inferring causality from statistics, precisely because two assertions must be held together here: statistics are our ordinary entry into causality, and statistical correlation is by itself insufficient to establish causality. In ecology and genetics, however, batteries of data that can be extended indefinitely in real time (e.g., genome sequencing on an industrial scale, extended time series on the fine-scale behavior of organisms, or constantly updated tables of georeferenced species occurrences in GBIF) offer a glimpse of a science capable of predicting phenomena without theoretical models of causal processes. This prospect suggests undeniable societal utility if we consider the climate, the intrinsic complexity of the processes involved, and the crucial nature of the prediction imperative. As Chris Anderson’s Wired article “The End of Theory,” which caused a sensation some fifteen years ago, expressed it, the inflation of data and the emergence of algorithms to process it without any underlying theoretical modeling framework have made the dream of a science without theory conceivable. Foundation models thus echo an epistemological shift in how science is done.

Whatever the case, and notwithstanding the provocative nature of Anderson’s slogan and the risks of epistemic opacity that accompany it, the prospect of a totally predictive science, freed from mathematical models of the causal processes involved, has become widespread in the sciences. The idea is to go directly from statistically established correlations to predictions, without going through the causes. Similarly, GPT and its companions produce “new” propositions based on extremely fine-grained statistical correlations, exactly like the algorithms that drive, for example, recommendations for new contacts on Instagram based on a modeling of links between accounts, without capturing any causal links (e.g., between you and the music or products you are recommended) or meanings (the recommendation algorithm doesn’t “know” the content of movies or the characteristic musical properties of songs), or like the image-generating AIs that produce new visual representations from texts.
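As a minimal sketch of our own of this correlation-without-causation regime (an illustration, not Instagram’s actual system), consider item-to-item recommendation from co-occurrence alone: items are suggested purely because they statistically accompany a user’s items, with no model of why.

```python
import numpy as np

# Rows = users, columns = items; 1 = the user engaged with the item.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

# Item-item co-occurrence: how often two items are liked together.
# Pure correlation; no content, meaning, or causal structure is known.
cooccurrence = interactions.T @ interactions
np.fill_diagonal(cooccurrence, 0)

def recommend(user_row):
    """Score unseen items by their co-occurrence with the user's items."""
    scores = cooccurrence @ user_row
    scores[user_row == 1] = -1  # exclude items already seen
    return int(np.argmax(scores))

print(recommend(interactions[0]))  # suggests item 2 for user 0
```

The system predicts what you will like without ever representing why you like it, which is exactly the epistemic situation described above.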

One of the possibilities offered by current philosophy of science to account for these opaque operations would be to revise the very notion of justification, classically included in our definition of “knowledge,” generally held since Plato to be “justified true belief.” To relax the requirement that justification be transparent (i.e., that I know why what justifies a true belief justifies it), we would have to turn to externalist epistemologies, which weaken the very notion of justification.

(Ir)relevant Machines?

Epistemology, i.e., the theory of knowledge, is undoubtedly primarily concerned with truth, since knowledge entails truth. However, all knowledge must also respect both the norm of truth, that is, “a belief proven untrue must be abandoned,” and what is sometimes called the norm of relevance. The norm of relevance states that even if a proposition is true and justified, it is not necessarily satisfactory knowledge, at least not in all contexts. To the question, “What is the common ancestor of hominids and canids?” the answer “an animal” is true but generally irrelevant. And if someone comes into my class and asks how many students attend it, I can answer “fewer than two billion,” and it will be true but irrelevant.

Determining the criteria under which a proposition is relevant or irrelevant is typically complex, perhaps even more complicated than the theory of knowledge, because the domain of pragmatics to which relevance belongs is notably more blurred and ill-defined. The fact remains that, in interlocutory situations, we usually avoid saying irrelevant things. Learning a given scientific discipline also means learning which answers are relevant. AIs produce true statements; in a sense, these statements are true because they deal statistically with texts that are themselves usually true. But are they always relevant? Do they reflect the standards of relevance found in those texts?

Moreover, novelty in science and art often implies a redefinition of the domain of relevance. All of a sudden, seemingly irrelevant questions become relevant. For example, the birth of modern, anatomo-clinical medicine in the early 19th century was primarily the result of a movement whereby doctors considered it appropriate to autopsy the bodies of their deceased patients. It was Bichat’s decision: “open up a few cadavers, and the mysteries of disease will be dispelled,” a statement Foucault enthused about in Naissance de la clinique (Paris: PUF, 1963).

To sum up: too relevant, and you invent nothing; never relevant, and you have nothing to say to anyone. To what extent can LLMs fit into this fine dialectic? Let’s note that a negative answer cannot be given a priori. AIs that played chess and then Go (we know the gap between Deep Blue defeating Kasparov and AlphaGo beating the Go masters, roughly the twenty years that separate computationalist AI from deep learning and the LLMs sometimes called “stochastic parrots”) surprised us with totally unusual, almost extraterrestrial repertoires of moves. It was as if deep learning, the habit of playing millions of games against oneself in the blink of an eye, was taking AI to regions of the space of possible games that no human had ever traversed.

However, relevance engages another massive question insofar as it constitutes a dimension of the “language game,” as Wittgenstein put it, that we play between humans. If I ask you to tell me a story about an animal, and you reply, “There’s an animal,” this is logically correct but irrelevant. Why? Because I’m waiting for you to tell me specific things about a particular animal; after all, that’s what “telling a story” is all about. We could analyze this using sophisticated pragmatic concepts or Paul Grice’s conversational implicature, but without going that far, let’s note that relevance is determined by the expectations I have and can lend to others. Therefore, questioning LLMs’ standards of relevance means questioning the possible interlocutors they may have. An LLM mimics politeness, but is it in a language game with someone? What kind of author of sentences is it? Is it really asserting anything, given that I can have it rephrase indefinitely what it has answered? Who is it talking to? And who am I talking to when I speak to an LLM? To an autonomous, conscious machine? To language itself, speaking through the model? To a collective intelligence derived from the aggregation of vast quantities of data? To a sort of “cultural unconscious” reflecting our biases and prejudices, or, finally, to a manifestation of the values and culture of the model’s creators? These hesitations highlight the epistemological, ethical, and social challenges posed by the advance of artificial intelligence while questioning the relationship between humans and machines and how this interaction redefines our understanding of consciousness, identity, and knowledge creation. Pulling on these threads then leads us to problems relating, on the one hand, to the philosophy of action (“what is an author?” also means “who acts?”) and, on the other, to aesthetics, since creator and author are related notions, both partly conventional, but probably irreducible to social conventions.

Philosophy of Action: Agency and Imputation

Let’s start with the question of action. If LLMs don’t assert, they can’t be held accountable for what they say. We are thus faced with the problem of LLM accountability, broadened from responsibility for statements to responsibility for actions, with the imputability that goes with them. Ethics and epistemology are not far apart.

An LLM cannot be blamed for saying the wrong thing, whereas anyone who knows the truth and says the wrong thing is held accountable for the consequences of that utterance. Thus, if a lifeguard answers the question, “Can we swim?” with, “The ocean is safe today,” when he has just planted a red flag, he will be held responsible for a swimmer’s drowning. This scenario does not apply to LLMs, about which we won’t ask what they intended when producing such a response to a user’s prompt.

Indeed, it’s because of this lack of accountability that autonomous cars still arouse distrust. Even if I know that leaving the wheel to a machine is objectively safer than taking it myself, my resistance is likely also based on the fact that the algorithmic driver at my side would be strictly irresponsible during the journey. Driving a car and writing in Science thus come up against the same deficiency. Ultimately, we return to that tenuous, transcendental property of accountability for what I say and do, for which LLMs cannot answer.

Who speaks, and who acts? These questions cannot be resolved by arbitrary measures, such as those taken by academic journals (which today proscribe listing an LLM as an author) or by regulators in the automobile industry, whose effectiveness is temporary. On the contrary, the efficacy and facility of LLMs’ forms of pseudo-assertion and pseudo-accountability will gradually infiltrate our exchanges and actions (peer review, for example, is already being invested by LLMs). How should we reckon with this evolution?

Since the introduction of the OpenAI GPT Store, offering the public thousands of applications developed from GPT, we can anticipate the emergence of novel LLM-based practices of pseudo-assertion. For instance, Caryn is a young woman who sold to the public her own LLM, CarynAI, trained on her past conversations and able to produce her putative conversations with customers. She was the first “influencer” to adopt this approach, but many are starting to use GPT-generated avatars of themselves, whose exchanges they monetize. But who am I talking to when facing CarynAI, knowing that Caryn is simultaneously interacting with hundreds of other users in identical ways? What concept of personal identity can be applied in this case? Does the multiplication of CarynAIs addressing their followers and admirers necessitate a more flexible conception of individual identity, similar to Derek Parfit’s ideas in Reasons and Persons, which draw on skepticism about the existence of a coherent Self behind our thoughts, ultimately reducible to the functions of memory? Or should the emergence of these multiple AIs lead us to question the majority of identity theories, which associate assertion, agency, and identity?

The debate on the presence of agency both in AI and of AI goes back to the philosophers of the 1960s, who questioned the conditions of this capacity, divided between realists and anti-realists about the notion of the self. Dennett introduced the idea of the “intentional stance,” according to which intentionality is a projection we apply to behaviors that resemble human actions, while Valentino Braitenberg’s Vehicles demonstrated how simple robot behaviors could be interpreted intentionally. This approach is all the more relevant for GPT, designed for conversation, raising the question of whether true agency exists beyond the intentional stance attributed by users. It also echoes contemporary philosophies that, following Bruno Latour, aim to escape the Western dualism that excludes other living beings from the field of agency, or at least to grant them a worthier ontological status.

And Consciousness?

In today’s philosophical debate, AIs point to a possible naturalization of consciousness and the stormy debates surrounding it. Since William James’ famous Principles of Psychology (1890), the aim has been to remove the question of consciousness from the realm of metaphysics and to describe its various dimensions (continuity, the private experience of perceptions known as “qualia”) as well as the sense of self. In 1995, philosopher David Chalmers attempted to define how neuroscience should approach the problem of consciousness. He divided the problem in two: the “easy problem” and the “hard problem.” The first concerns our cognitive functions, as defined by Chalmers (the ability to discriminate, categorize, and respond to environmental stimuli, for example). The second concerns private experience, the unity of perceptions, feelings, memories, and actions that cannot be deduced from cognitive functions. Experience, sometimes called the “what it’s like,” following Thomas Nagel’s famous 1974 paper entitled “What Is It Like to Be a Bat?”, has characteristics that these cognitive functions do not. Conscious experience is, therefore, the hard problem, and the theories postulated by the neurosciences are designed to describe a consciousness with a biological substrate. While the capabilities of artificial neural network architectures can be likened to the brain’s routines and subroutines for recognizing shapes and faces, stringing words together, or organizing motor movements, and while human reasoning can, in some cases, be reduced to Bayesian inference (statistical methods that use Bayes’ theorem to update the probability of a hypothesis as new information becomes available), consciousness is neither cognition nor equivalent to intelligence.
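To unpack that parenthetical, Bayesian updating is just Bayes’ theorem applied to a hypothesis H and a piece of evidence E (the numbers below are an illustrative example of our own, not from the text):

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

With a prior P(H) = 0.01, P(E | H) = 0.9, and P(E | ¬H) = 0.05, the posterior is 0.009 / (0.009 + 0.0495) ≈ 0.15: the evidence makes the hypothesis fifteen times more credible while leaving it improbable. Nothing in this mechanical updating is, by itself, an experience of believing, which is the paragraph’s point.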

The most mathematical of current theories of consciousness, Giulio Tononi’s Integrated Information Theory (IIT), is not, however, uninteresting for approaching artificial consciousness. It argues that a conscious experience is characterized by a maximum level of integrated information: a state that is irreducible and differentiated from alternatives, our experience remaining coherent without any effort to unify it. It thereby brings human consciousness closer to an informational model. Human consciousness, however, remains rooted in a self that is present in the world in a body, and whose flow can still be referred to the continuous perception of sensory phenomena; this flow can be thought of as a mere illusion, but it is difficult to transpose to a physical system other than the biological body.
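For the mathematically curious, a schematic rendering of the IIT idea, heavily simplified by us (Tononi’s actual formalism is far more elaborate): the integrated information Φ of a system S measures how far the system’s behavior as a whole is from the best factorization into independent parts M_k, under some divergence D:

```latex
\Phi(S) \;\approx\; \min_{P \,\in\, \mathrm{partitions}(S)} \; D\!\left( p(S) \,\Big\|\, \prod_{k} p(M_k) \right)
```

Consciousness, on this view, is high exactly where no partition of the system can reproduce the whole, which is what makes the theory portable, at least in principle, from brains to other physical systems.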

While our cognitive capacities may be enhanced by the capabilities of artificial intelligences, consciousness remains isolated within our bodies, at least until brain implants like those promised by Elon Musk’s Neuralink company are deployed. The private, embodied nature of consciousness also makes it hard to credit the idea of a posthuman immortality enabled by the transfer of an individual to a digital network where they could live indefinitely. Even if there is more to consciousness than an informational machine, which makes this form of personal eternity somewhat illusory, we must acknowledge the power of artificial intelligence to revive a face, a body, or a thought in the form of a perfect illusion, as the dystopian Netflix series Black Mirror has accustomed us to. Artificial intelligence is an extraordinary technology for producing the survival of the individual that human culture had hitherto promised through oral or written memory. The innumerable digital traces each of us now leaves do not, of course, guarantee resurrection in the religious sense of the term, but the ability to revive specters and reactivate the past predicts new disturbances in the individual’s relationship with time, which philosophy will need to grasp.

The Status of Freedom in the Coming Society

These reflections on the naturalization of consciousness find a powerful echo in how AI questions the notion of freedom afresh. Given the exceptional predictive capacity of artificial intelligences trained on huge corpora of varied data, what remains of freedom? And what do these autonomous machines, capable of making independent decisions, have to say about our fragile human free will? The question then becomes not only metaphysical but political. The links between artificial intelligence and utilitarian political philosophy, which holds that actions should be evaluated in terms of their capacity to produce the greatest good for the greatest number, are patently obvious. These links are not only staged in dystopian fictions such as Steven Spielberg’s Minority Report (based on a short story by Philip K. Dick) but have already been actualized by artificial intelligence: in China, for example, the social value of the individual tends to be calculated by an algorithmic social credit system that favors the majority, even if this means harming a minority.

More than a simple tool for cognitive augmentation, artificial intelligences therefore embody an ideal of large-scale social optimization, made possible by the seamless transformation of the world’s data into digital format. These technologies tend to privilege a collective vision at the expense of individual experiences, exceptions, and personal nuances, using the data of our behaviors to steer our habits towards improving general well-being. Like the algorithms that guide us from song to song on Spotify or from video to video on YouTube, creating an illusion of personalization, AIs have the potential to help us fight our addictions and avoid our excesses, marking the beginning of an era in which we entrust them with managing our diaries through virtual assistants like Siri or selecting our romantic partners through apps like Tinder.

Not only could the utilitarian orientation of AIs be challenged in the name of an ethics of duties, virtues, rights, or care, but AI prescriptions also remain dependent both on the content of their training data, which exposes them to numerous biases, and on the variety of so-called “alignment” rules (the rules by which AIs are fitted ex post to local human norms and preferences, from Chinese neo-communism to Silicon Valley’s fashionable longtermism, norms that have little in common with one another or with the moral rules of the monotheistic religions or of Confucianism).

Art and AI: The Future of Creation and the Derealization of Images

GPT talks to us but does not assert anything. Yet generative AI in general creates things: videos, texts, music, images. Questions in the field of aesthetics are therefore significant and register all of these debates in their own way. By exploring the relationship between text and image in new ways, generative AIs such as Dall-E or Midjourney confront us with how we conceive the relationship between different media and modes of representing the world. For example, they have a massive impact on the idea of photography as the imprinted image of reality; with “promptography,” the photographic medium is aligned more closely with the compositional regime of painting or text, whose representation is never guaranteed. More profoundly, they destabilize traditional notions of art by calling into question the singularity and originality of works, transforming the artist’s role into that of a “text engineer.” Deriving an image from a prompt, which is not exactly a description but rather an instruction to the machine, leads us to observe the correspondences and disconnections between the two forms of representation, to be attentive to the concordances or to the “treachery of images,” to borrow the title of Magritte’s famous 1929 painting bearing the renowned inscription “Ceci n’est pas une pipe.” To make the artist an engineer of text is to lead them to question how the image captures a text or, on the contrary, how the image possesses a life of its own, retains an autonomy vis-à-vis discourse, and manifests its resistance to abstract intention.

But very often, the images we generate on Dall-E or Midjourney fail to surprise us, or disappoint us, and do not move us aesthetically. Despite the joy of manipulating codes, the mechanics of prompt-generated images accentuate the opacity of representations and make visible the biases of human descriptions and perceptions. Indeed, not everything that makes up an image can be captured by a description, and there is a considerable limit to the space that can be explored by image generators based on existing human descriptions.

This development highlights the tension between artistic intention and machine-generated results, revealing the complexity of interactions between human describers and machine production and thus leading to a reflection on the dissociation between creator and performer, as well as on the idea of art as a product of automation. The production of art by AI generates debates about the value of art and calls into question the distinctions between original and copy and between intention and execution. Works become reproducible, modifiable, and independent of their creator, which calls into question their intrinsic value. Further, the ability of these works to continue after their creator’s death raises questions about durability and artistic property. What is the value of a work whose creator is immortal? The inability of AI-generated images to be protected by copyright underlines their ambiguous position in the art world, where they can be perceived as a decline in authenticity and originality.

The use of AI in art also calls into question the traditional value attributed to the manual effort of craftsmanship or to the genius of the artist, as well as the notion of originality. Not only is the work of art stripped of its aura of uniqueness, being reproducible since it is natively digital (as Walter Benjamin already pointed out in the 20th century), but the work also exists in a potentially infinite number of different iterations and can be varied at will. Its value no longer depends on its execution; its interest depends solely on the art of manipulating a kind of robot painter, to which it is a matter of giving instructions. From these instructions derives a potential infinity of works, continually generable beyond the person of their initiator, after their death if you like, and whose originality is linked much more to the configuration of generative artificial intelligences and their training corpora than to the work of their instigator. The fact that the artificial images created by generative AIs cannot be protected by copyright, and conversely that they are attacked as infringing copyright, is a clear sign of their degraded status as art in relation to the classical criteria of literature and painting.

In a world saturated with images, these digital tools also call into question contemporary art’s ability to generate new expressive forms that go beyond the conventions uncovered by AIs. It is significant in this respect that many works produced with artificial intelligence avoid the tools made available to the general public and their censored, corseted style, seeking to deregulate their representations, deconstruct their processes, or combine them into original flows in order to reintroduce singularity and effort into the device. Generative AIs may introduce not new representations but rather expected, idiomatic images, as Italo Calvino predicted as long ago as 1967 in his lecture “Cybernetics and Ghosts: Notes on Literature as a Combinatorial Process,” when he announced that cybernetic works would respond to a new “classicism.” And yet, knowing that these works come from a machine does not prevent us from entering what Masahiro Mori famously called the “uncanny valley”: a deep-seated disquiet, even aversion, towards artifacts that have become too similar to human productions. This singular origin prevents us from deploying our traditional mechanisms of reflection on the author’s intention, their psychology, and the historical and cultural context of the work’s production. If there is any authorial intent, it is hard to reconstruct behind the machine interface. Hence our reluctance to admit that we could be moved by a machine’s work, to enter into resonance and relationship with it; instead we maintain a cautious distance towards it.

Generative AIs raise questions about the nature of artistic representation since, instead of reflecting the world, they offer a vision that is mediated and constructed by the machine in its latent space. They thus call into question the relationship between reality and representation, introducing a dimension of statistical truth that defies our traditional understanding of art as human mediation. What do the images created by these machines actually show? They don’t capture the world directly as a non-computational photographer would, nor do they translate it through the subjectivity of a human gaze. A generative AI reorganizes millions of images according to its own algorithms, establishing unique associations between these images and their textual descriptions. It transforms these representations not into ideal concepts and values embodied in experience, as a human artist would, but into its own statistical structures. It offers a vision of the world filtered by its internal parameters, capable of probabilistically determining what a smile or a sunrise is. The enigma of AI-produced art lies in this mediation by a machine, introducing a layer of abstraction between reality and its representation and attributing a certain statistical truth to the images produced, making them simultaneously credible and predictable. It is up to the artist and the viewer to determine the value of this interpretation of the world, detached from personal history, indifferent to individual emotions, historical events, and subjective desires, and proposing visions that sometimes give the unsettling impression of strangely transcending the simple assembly of data on which they have been trained.

Conclusion. On the Continuation of Philosophy

In a phrase taken up by Douglas Hofstadter in his famous Gödel, Escher, Bach, computer scientist Larry Tesler defined AI as “everything that hasn’t been done yet.” These reflections, and the sometimes ancient philosophies they mobilize, by no means exhaust the philosophical questions opened up by AI. Some of them even call for new thought experiments, speculations in which the most unbridled fictions would be helpful to philosophical reflection. For example, they prompt us to think about what we could expect from an “Artificial General Intelligence” that had aggregated all human knowledge.

At the same time, another part of these reflections freshly reworks ancient philosophical issues. Whether we’re thinking about the ethics of autonomous cars or about artificial creativity, the robustness of these ethics when confronting new matters means that we can continue to have confidence in philosophy, which may ultimately even be enhanced by generative AI.

***

Alexandre Gefen, Directeur de Recherche (Full Research Professor) at the CNRS Theory and History of Modern Art and Literature Laboratory (UMR 7172, THALIM, CNRS / Université Sorbonne Nouvelle – Paris 3 / École Normale Supérieure), is a historian of ideas and literature. He is the author of numerous articles and essays on culture, contemporary literature, and literary theory. He was one of the pioneers of the Digital Humanities in France. He is the director of the Culturia IA research project, which focuses on the history and cultural issues of artificial intelligence, and a member of the steering committee of the CNRS AISSAI center for Artificial Intelligence. Latest books: La littérature, une infographie (CNRS Éditions, 2022); Créativités artificielles (Les Presses du réel, 2023); Vivre avec ChatGPT (L’Observatoire, 2023).

Trained first in mathematics and then in philosophy (PhD, Paris I, 2000), Philippe Huneman is Director of Research at the Institut d’Histoire et de Philosophie des Sciences et des Techniques (CNRS/Université Paris I Panthéon-Sorbonne) in Paris. As a philosopher of biology working on issues related to evolution and ecology, especially the ideas of organism and natural selection, and more generally on the nature and modes of biological explanation, he has published more than 140 papers and chapters in international journals of philosophy or biology. His edited books include From Groups to Individuals (with Frédéric Bouchard, MIT Press, 2013), Handbook of Evolutionary Thinking in the Sciences (with T. Heams, G. Lecointre, and M. Silberstein, Springer, 2015), and Challenging the Modern Synthesis (with D. Walsh, Oxford UP, 2017). After publishing Métaphysique et biologie. Kant et la constitution du concept d’organisme (Paris: Kimé, 2008), he recently published Death: Perspectives from the Philosophy of Biology (Palgrave Macmillan, 2022) and coauthored From Evolutionary Biology to Evolution and Back (Springer, 2022) with five philosophers, biologists, and economists. His latest book in French is Les sociétés du profilage. Évaluer, optimiser, prédire (Paris: Payot, 2023). He is also a coeditor of the book series History, Philosophy and Theory in Biology (Springer).
