

Areas of Research Specialization
Philosophy of Artificial Intelligence, Philosophy of Language, Logic, Ethics, Philosophy of Science, History of Analytic Philosophy

My overall research profile can be characterized along six themes:

Theme 1. Much of my recent research energy has gone into work on artificial intelligence and machine learning in particular. I have had expertise in computer science for decades, having written the data compression algorithm used by the spacecraft in NASA’s Mars Observer Mission in 1991. For the past few years I have been developing coding skills and investigating the potential to use machine learning algorithms as part of philosophical methodology (e.g., by creating an artificial philosopher). I am most interested in reinforcement learning algorithms (e.g., partially observable Markov decision processes), unsupervised algorithms (e.g., Brian Weatherson’s recent topic modeling project on philosophical journals), and Bayesian networks as they have been utilized in projects like Bayesian Theory of Mind (BToM). I am currently finishing a book on philosophy and machine learning, which brings many of these themes together and emphasizes the role of explainable artificial intelligence (XAI) in philosophical applications of these algorithms. I recently published a paper, “The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence” (with Alison Duncan Kerr), Minds and Machines, 2022, in which we argue that machine learning algorithms are already decreasing vagueness in human languages. Moreover, because of my groundbreaking work on reasons in metaethics, I write about explainable AI in light of the fundamental kinds of reasons humans utilize. Finally, I am also currently working on the Ethics of Socially Disruptive Technologies (ESDiT) project at the University of Twente, which involves writing about a huge range of topics in applied ethics related to artificial intelligence and other technologies. I contributed to the forthcoming ESDiT book, gave a recent talk, “Artificial What? Disrupting the Concept of Intelligence,” at the University of Illinois Urbana-Champaign, and have several works in progress on the applied ethics of technology.

Theme 2. The philosophical discussion of reasons in metaethics and how they are linked to rationality, morality, explanation, deliberation, and much else is one of the defining characteristics of analytic philosophy (the dominant version of Western philosophy in the English-speaking world). The Semantics for Reasons book (coauthored with Bryan Weaver in 2019) aims to fundamentally change debates in metaethics by exposing major common assumptions that are mistaken and pointing the way toward significant new issues for study. Moreover, it has a sequel, Reasons in the Normative Realm, which is now under contract at Oxford and ready for submission in the next couple of months. It focuses on presenting and defending a reasons-first approach in metaethics, which is the view that all normative phenomena can be explained in terms of reasons. We use ‘normative’ in a broad way to include phenomena like those expressed by ‘reason’, ‘ought’, ‘fitting’, ‘obligated’, ‘must’, ‘permitted’, ‘may’, ‘because’, ‘might’, as well as ‘good’, ‘bad’, ‘right’, and ‘wrong’.

Theme 3. My work has played a large role in popularizing the term ‘conceptual engineering’ and the methodology it names. The idea behind conceptual engineering is that instead of just analysing our concepts, philosophers should be assessing their worth and replacing those that are defective. This idea has turned into a movement that continues to grow and expand across the contemporary philosophical scene. See my “Philosophy as the Study of Defective Concepts,” for an overview. I am currently editing a three-volume collection on conceptual engineering with Springer Press (with Manuel Gustavo Isaac and Steffen Koch as co-editors) and working on a book on conceptual engineering called Replacing Philosophy along with a number of papers on this topic.

Theme 4. I am internationally renowned for my work on truth and the liar paradox. My book, Replacing Truth, appeared in 2013 with Oxford University Press, and a book symposium on it was recently published. In addition, I have published nine additional papers on truth since 2007, including the central one in Philosophical Review, with the most recent in 2021 (“Conceptual Engineering for Truth: Aletheic Properties and New Aletheic Concepts,” Synthese).

Theme 5. I also work on philosophy of science, most significantly on measurement theory and scientific change. Measurement theory is the study of how mathematics applies to the natural world in science. In a paper, “On the Indeterminacy of the Meter” (Synthese, 2019), I use measurement theory to show that if there is a minimal length (e.g., the Planck length), then there is only about a 1 in 21 million chance that ‘meter’ is determinate. Measurement theory plays a crucial role in my theory of metrological naturalism (introduced in Replacing Truth), which is a general philosophical methodology; I suggest that a philosophical theory of some concept X ought to be cast as a measurement system for X. Measurement systems are the basis for the scientific study of measurement, as in measurement theory and metrology. A recent paper, “Philosophy as the Study of Defective Concepts,” explores this theme.

Theme 6. I have written on the history of analytic philosophy, focusing on figures like Wilfrid Sellars, Rudolf Carnap, and Donald Davidson. I edited a collection of Wilfrid Sellars’ papers called In the Space of Reasons with Robert Brandom, which was published by Harvard University Press in 2007. I also published a paper on Sellars, “Wilfrid Sellars’ Anti-Descriptivism,” and his idea of the space of reasons figures prominently in my book Reasons in the Normative Realm (with Bryan Weaver), which is under contract with Oxford University Press (see above). I also recently published “Pragmatism without Idealism” (with Robert Kraut), which identifies key insights from classical pragmatists like William James and distinguishes them from mistakes or less plausible views that are often confused with those insights. Finally, much of my work on conceptual engineering (see above) is inspired by Rudolf Carnap’s method of explication. My book (in preparation), Replacing Philosophy, explores this idea in detail. I have also published on other areas in the history of philosophy, like “Locke on Reflection,” and this expertise finds its way into my teaching (e.g., the course on Russell’s History of Western Philosophy).



  • The Study of Truth. Oxford University Press. Under contract.

Abstract: The book provides the reader with an inclusive understanding of the work on truth in the analytic tradition. It contains discussions of the major theories of the nature of truth (e.g., correspondence theories, epistemic theories, deflationist theories, and pluralist theories). The approaches to the liar and other paradoxes are divided into philosophical approaches, which focus on features of natural language truth predicates (e.g., contextualist theories, indeterminacy theories, and inconsistency theories), and logical approaches, which specify logical and aletheic principles for artificial languages that are intended to model natural language (e.g., inductive strong Kleene theories, inductive supervaluation theories, revision theories, paracomplete theories, and paraconsistent theories). The reader is led to a comprehensive view by considering combinations of philosophical and logical approaches and then to unified theories of truth, which have all three components. The goal is to give the reader a detailed background and the technical tools needed to understand all the major innovations in this vast literature.


  • Reasons in the Normative Realm (with Bryan Weaver), Oxford University Press, under contract.

Abstract: The focus of the book is the reasons-first approach. This is the idea that reasons are primary in a certain sense within the realm of normative phenomena. The reasons literature dominates the philosophy of normativity, so it should be no surprise that some have suggested that reasons are in some way the key to understanding normativity in general. The phrase ‘reasons first’ has come to be used in the literature for the view that reasons occupy a special place among all normative phenomena. Normative phenomena include things like those denoted by ‘reason’, ‘ought’, ‘fitting’, ‘obligated’, ‘must’, ‘permitted’, ‘may’, ‘because’, ‘might’, as well as ‘good’, ‘bad’, ‘right’, and ‘wrong’. The reasons-first theorist claims that all these normative phenomena can somehow be understood in terms of reasons. To start, the members of the family of reasons-first views are classified by the shared idea that all other normative concepts somehow substantively depend on the concept of a reason, but the concept of a reason does not depend on any other normative concept. We develop a novel version of the reasons-first approach that focuses on supervenience as the relation between reasons and the rest of the normative realm and on the semantics for reasons locutions defended in Semantics for Reasons (OUP, 2019).

  • New Perspectives on Conceptual Engineering I: Foundational Issues. Editor (with Manuel Gustavo Isaac and Steffen Koch). Springer Press, forthcoming.

Collection based on talks given in the Arché Conceptual Engineering Research Seminar at the University of St Andrews. Contributors include: Derek Ball & Bryan Pickel, Esa Díaz León, Paul Égré & Cathal O’Madagain, Patrick Greenough, Allison Koslow, Ethan Landes, Manolo Martínez, Sarah Sawyer, and Amie Thomasson.


  • New Perspectives on Conceptual Engineering II: Across Philosophy. Editor (with Manuel Gustavo Isaac and Steffen Koch). Springer Press, forthcoming.

Collection based on talks given in the Arché Conceptual Engineering Research Seminar at the University of St Andrews. Contributors include: Simon Blackburn, Hans-Johann Glock, Sanford Goldberg, Frank Jackson, Teresa Marques, Tristram McPherson & David Plunkett, Kevin Scharp, and Jonah N. Schupbach.


  • New Perspectives on Conceptual Engineering III: Applications. Editor (with Manuel Gustavo Isaac and Steffen Koch). Springer Press, forthcoming.

Collection based on talks given in the Arché Conceptual Engineering Research Seminar at the University of St Andrews. Contributors include: Robin Andreasen, Elizabeth Cantalamessa, Roberto Casati, Rachel Cooper, Manuel Gustavo Isaac, David Ludwig, Édouard Machery, Genoveva Martí, Jennifer Nado, Kevin Reuter, and Mona Simion.


  • Semantics for Reasons (with Bryan Weaver). Oxford University Press, 2019.

Abstract: The focus of the book is the semantics of reasons locutions. Given the leading role that talk of reasons plays in many different kinds of explanation, the book will deal with issues in the theory of reasons, metaethics, epistemology, the philosophy of language, and linguistics. The primary aim of the book is to present and defend a contextualist semantics of reasons locutions. We call this view Reasons Contextualism. In the course of this presentation and defense we pursue a secondary aim of consolidating insights from the theory of reasons across different philosophical subfields and weighing in on a series of debates in the theory of reasons. In the introduction, we set up the topic with a brief discussion of the traditional emphasis on the meanings of moral terms and the contemporary emphasis on reasons in metaethics. In the first chapter, we organize the disparate discussions of the different distinctions between reasons into the most systematic set of definitions to date. In the second chapter, we explicate the logical form of a reasons sentence and argue contrary to conventional wisdom that ‘reason’ is not ambiguous in any way, but rather it is context dependent. We introduce our view, reasons contextualism, and offer an initial defense of it against the contrastivism of Justin Snedegar and the view of John Skorupski. In the third chapter, we lay out the basic structure of a semantic theory for reasons locutions, describe different contexts of utterance for reasons locutions, and explain the semantic relevance or irrelevance of each of the six reasons distinctions explicated in the first chapter. In the fourth chapter, we show why our reasons contextualism is preferable to four competing views on the topic from Simon Blackburn’s expressivism, Stephen Finlay’s conceptual analysis, Tim Henning’s contextualism, and Niko Kolodny’s relativism. In the fifth chapter, we draw out the implications of our reasons contextualism for several central issues in the theory of reasons: the ontology of reasons, indexical facts, reasons to be rational, moral reasons, and the reasons-first program, which takes reasons to be the central normative phenomenon.


  • Replacing Truth. Oxford University Press, 2013.

Abstract: I present and defend a theory of the nature and logic of truth on which truth is an inconsistent concept that should be replaced for certain theoretical purposes. The book opens with an overview of work on the nature of truth (e.g., correspondence, deflationism), work on the liar and related paradoxes, and a comprehensive scheme for combining these two literatures into a unified study of the concept of truth. Truth is best understood as an inconsistent concept, and I propose a detailed theory of inconsistent concepts that can be applied to the case of truth. Truth also happens to be a useful concept, but its inconsistency inhibits its utility; as such, it should be replaced with consistent concepts that can do truth’s job without giving rise to paradoxes. I offer a pair of replacements, which I dub ascending truth and descending truth, along with an axiomatic theory of them and a new kind of possible-worlds semantics for this theory. As for the nature of truth, I develop Davidson’s idea that it is best understood as the core of a measurement system for rational phenomena (e.g., belief, desire, meaning). The book finishes with a semantic theory that treats truth predicates as assessment-sensitive (i.e., their extension is relative to a context of assessment), and a demonstration of how this theory solves the problems posed by the liar and other paradoxes.


  • In the Space of Reasons: Selected Essays of Wilfrid Sellars. Editor (with Robert Brandom). Harvard University Press, 2007.

Abstract: Wilfrid Sellars is widely regarded as a major figure in twentieth-century analytic philosophy. However, most of his writings are scattered and difficult to find. This collection brings together sixteen of Sellars’ most important and influential papers. It promises to be the definitive collection of Sellars’ work.





Published and Forthcoming Papers


“A Defense of QUD Reasons Contextualism” (coauthored with Bryan Weaver), Inquiry, forthcoming.

Abstract: In this article, we defend the semantic theory, Question Under Discussion (QUD) Contextualism about Reasons, that we develop in our monograph Semantics for Reasons against a series of objections that focus on whether our semantics can deliver predictions for some common examples, how we defend the semantic theory, and how we assess it compared to its competitors.

“The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence” (coauthored with Alison Duncan Kerr), Minds & Machines 32: 585–611, 2022.

Abstract: Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability to identify and classify most of the occasions in which languages are used. We show that the knowability of this information is incompatible with what a certain theory of vagueness—epistemicism—says about vagueness. We argue that one way the epistemicist could respond to this threat is to claim that this process brought about the end of vagueness. However, we suggest an alternative interpretation, namely that epistemicism is false, but there is a weaker doctrine we dub technological epistemicism, which is the view that vagueness is due to ignorance of linguistic usage, but the ignorance can be overcome. The idea is that knowing more of the relevant data and how to process it enables us to know the semantic values of our words and sentences with higher confidence and precision. Finally, we argue that humans are probably not going to believe what future AI algorithms tell us about the sharp boundaries of our vague words unless the AI involved can be explained in terms understandable by humans. That is, if people are going to accept that AI can tell them about the sharp boundaries of the meanings of their words, then it is going to have to be XAI.

“Conceptual Engineering and Replacements for Truth,” in The Nature of Truth: Classic and Contemporary Perspectives, 2nd ed., Michael Lynch, Jeremy Wyatt, Junyeol Kim, and Nathan Kellen (eds.), MIT Press, 2020.

Abstract: I defend a replacement strategy for the concept of truth. We ought to replace our concept of truth, for certain purposes, with a team of two concepts as a way of addressing the problems caused by the liar and other paradoxes. This might seem like a lot of work, but it turns out that by ‘we’, I mean only those theorists engaged in doing semantics for expressively rich languages (like English) that have the resources to formulate the paradoxes. This replacement project involves multiple parts. There is an evaluation of our concept of truth as defective – the concept itself is the source of the paradoxes. There is a characterization of the defect – the concept of truth has certain constitutive principles and these are inconsistent. By following these inconsistent constitutive principles, one can reason to a contradiction in the liar paradox. There is also a suggestion for replacement – when doing semantics for natural language, one ought to use the two replacement concepts instead of the concept of truth. I place this project in a more general context, laying out a comprehensive case for the claim that the concept of truth is inconsistent and that there is no property of being true. I then look at some properties that are similar to what we thought the property of truth should have been like, and say a bit about what these properties might be like. Finally, I close with a discussion of various strategies for replacing the concept of truth.

“Conceptual Engineering for Truth: Aletheic Properties and New Aletheic Concepts,” Synthese 198: 647–688, 2021.


Abstract: What is the property of being true like? To answer this question, begin with a Canberra-plan analysis of the concept of truth. That is, assemble the platitudes for the concept of truth, and then investigate which property might satisfy them. This project is aided by Friedman and Sheard’s groundbreaking analysis of twelve logical platitudes for truth. It turns out that, because of paradoxes like the liar, the platitudes for the concept of truth are inconsistent. Moreover, there are so many distinct paradoxes that only small subsets of platitudes for truth are consistent. The result is that there is no property of being true. The failure of the Canberra-plan analysis of the concept of truth points the way toward a new methodology: a conceptual engineering project for the concept of truth. Conceptual engineering is assessing the quality of our concepts, and when they are found defective, offering new and better concepts to replace them for certain purposes. Still, there are many aletheic properties, which are properties satisfied by reasonably large subsets of platitudes for the concept of truth. We can treat these aletheic properties as a guide to the multitude of new aletheic concepts, which are concepts similar to but distinct from the concept of truth. Any new aletheic concept or team of concepts might be called on to replace the concept of truth. In particular, the concepts of ascending truth and descending truth are recommended, but the most important point is that we need a full-scale investigation into the space of aletheic properties and new aletheic concepts – that is, we need an Aletheic Principles Project (APP).

“Philosophy as the Study of Defective Concepts,” in Conceptual Engineering and Conceptual Ethics, Burgess, Cappelen, and Plunkett (eds.), Oxford University Press, 2020.


Abstract: Concepts, from familiar ones like tall and table to exotic ones like gravity and genocide, guide our lives and are the basis for how we represent the world. However, there is good reason to think that many of our most cherished concepts, like truth, freedom, knowledge, and rationality, are defective in the sense that the rules for using them are inconsistent. This defect leads those who possess these concepts into paradoxes and absurdities. Indeed, I argue that many of the central problems of contemporary philosophy should be thought of as having their source in philosophical concepts that are defective in this way. If that is right, then we should take a more active role in crafting and sculpting our conceptual repertoire. We need to explore various ways of replacing these defective concepts with ones that will still do the work we need them to do without leading us into contradictions.


“Replies to Bacon, Eklund, and Greenough on Replacing Truth,” Inquiry 62: 422–475, 2019.

Abstract: Andrew Bacon, Matti Eklund, and Patrick Greenough have individually proposed objections to the project in my book, Replacing Truth. Briefly, the book outlines a conceptual engineering project – our defective concept of truth is replaced for certain purposes with a team of concepts that can do some of the jobs we thought truth could do. Here, I respond to their objections and develop the views expressed in Replacing Truth in various ways. The papers are:

Bacon, “Scharp on Replacing Truth,”

Eklund, “Inconsistency and Replacement,” and

Greenough, “Conceptual Marxism and Truth.”

“On the Indeterminacy of the Meter,” Synthese 196: 2487–2517, 2019.


Abstract: In the International System of Units (SI), ‘meter’ is defined in terms of seconds and the speed of light, and ‘second’ is defined in terms of properties of cesium 133 atoms. I show that one consequence of these definitions is that: if there is a minimal length (e.g., Planck length), then the chances that ‘meter’ is completely determinate are only 1 in 21,413,747. Moreover, we have good reason to believe that there is a minimal length. Thus, it is highly probable that ‘meter’ is indeterminate. If the meter is indeterminate, then any unit in the SI system that is defined in terms of the meter is indeterminate as well. This problem affects most of the familiar derived units in SI. As such, it is highly likely that indeterminacy pervades the SI system. The indeterminacy of the meter is compared and contrasted with emerging literature on indeterminacy in measurement locutions (as in Eran Tal’s recent argument that measurement units are vague in certain ways). Moreover, the indeterminacy of the meter has ramifications for the metaphysics of measurement (e.g., problems for widespread assumptions about the nature of conventionality, as in Theodore Sider’s Writing the Book of the World) and the semantics of measurement locutions (e.g., undermining the received view that measurement phrases are absolutely precise as in Christopher Kennedy’s and Louise McNally’s semantics for gradable adjectives). Finally, it is shown how to redefine ‘meter’ and ‘second’ to completely avoid the indeterminacy.

“Aletheic and Logical Pluralism,” in Pluralisms in Truth and Logic, edited by N. Pedersen, J. Wyatt, and N. Kellen, Palgrave Macmillan, pp. 453–471, 2018.


Abstract: Aletheic pluralism is the view that there is more than one truth property, and logical pluralism is the view that there is more than one correct logic.  Usually the truth properties described by the aletheic pluralist are familiar ones advocated by parties debating the nature of truth (e.g., the correspondence property, the pragmatic property, and coherence property).  Likewise, the logics described by the logical pluralist are familiar ones advocated by parties debating the nature of logic (e.g., classical, intuitionistic, and relevant).  However, one can be an aletheic pluralist by focusing on properties of truth that result from different approaches to the aletheic paradoxes instead.  And one can be a logical pluralist by focusing on logics that result from different approaches to the aletheic paradoxes.  Moreover, one could combine these two alternative pluralisms into a single view according to which the logic and the truth property differ depending on the discourse, but they are coordinated so that in discourses with stronger logics, the truth property is weaker, and in discourses with weaker logics, the truth property is stronger.  I first formulate this combined theory of truth and logic and then evaluate it as a competitor with more traditional approaches to the aletheic paradoxes.  


“Shrieking in the Face of Vengeance,” Analysis 78(3): 454–463, 2018.


Abstract: Paraconsistent dialetheism is the view that some contradictions are true and that the inference rule ex falso quodlibet (a.k.a. explosion) is invalid. A long-standing problem for paraconsistent dialetheism is that it has difficulty making sense of situations where people use locutions like ‘just true’ and ‘just false’. Jc Beall recently advocated a general strategy, which he terms shrieking, for solving this problem and thereby strengthening the case for paraconsistent dialetheism. However, Beall’s strategy fails, and seeing why it fails brings into greater focus just how daunting the just-true problem is for the dialetheist.

“Revising Inconsistent Concepts” (with Stewart Shapiro), in The Relevance of the Liar, edited by Bradley Armour-Garb, Oxford University Press, 2017.


Abstract: We aim to investigate the question of when it is reasonable to replace an inconsistent concept. By ‘replace’ we do not mean ‘eliminate’. Instead, we are interested in the question of when it makes sense to introduce a new concept or concepts that are designed to fill at least some of the roles played by the concept discovered to be inconsistent. It might turn out that the concept in question is still used in certain situations even by those who recognize that it is inconsistent. The main application of our inquiry is to the concept of truth and the so-called inconsistency approaches to the paradoxes that affect truth (e.g., the liar, Curry, and Yablo). These approaches entail that truth is an inconsistent concept and that the paradoxes are symptoms of this inconsistency. Our question is: if truth is an inconsistent concept, then does it need to be replaced? Or more generally, when is the cure worse than the disease?

“Analytic Pragmatism and Universal LX Vocabulary” (with Richard Samuels), Philosophia, 2017.


Abstract: In his recent John Locke Lectures – published as Between Saying and Doing – Brandom extends and refines his views on the nature of language and philosophy by developing a position that he calls Analytic Pragmatism. Although Brandom’s project bears on an extraordinarily rich array of different philosophical issues, we focus here on the contention that certain vocabularies have a privileged status within our linguistic practices, and that when adequately understood, the practices in which these vocabularies figure can help furnish us with an account of semantic intentionality. Brandom’s claim is that such vocabularies are privileged because they are a species of what he calls universal LX vocabulary – roughly, vocabulary whose mastery is implicit in any linguistic practice whatsoever. We show that, contrary to Brandom’s claim, logical vocabulary per se fails to satisfy the conditions that must be met for something to count as universal LX vocabulary. Further, we show that exactly analogous considerations undermine his claim that modal vocabulary is universal LX. If our arguments are sound, then, contrary to what Brandom maintains, intentionality cannot be explicated as a “pragmatically mediated semantic phenomenon”, at any rate not of the sort that he proposes.



“Tolerance and the Multi-range View of Vagueness,” Philosophy and Phenomenological Research 90: 467–474, 2015.

Abstract: Discussion of tolerance and Diana Raffman’s multi-range view of vagueness in her book Unruly Words.



“Pragmatism without Idealism” (with Robert Kraut), in The Palgrave Handbook of Philosophical Methods, edited by Christopher Daly, Palgrave, 2015.


Abstract: Our goal is to examine three broadly pragmatist strategies which might be alleged to undermine realism by infecting it with unwanted subjectivism: one concerns "deflationist" views about properties, one concerns Carnap's pragmatism about ontology, and one concerns subjectivism about the notions of structure and structural similarity.  In each case critics allege that the intrusion of pragmatic and/or subjective elements into our ways of thinking about the world have the unwanted result that the realists' cherished contrasts between subjective vs. objective, or what is real vs. what linguistic forms are pragmatically expedient, or what is discovered vs. what is projected, are undermined.  We argue that these allegations are unfounded: the pragmatist strategies do not, in fact, threaten realism in the ways suggested.



“Truth, Revenge, and Internalizability,” Erkenntnis 79: 597–645, 2014.


Abstract: The vast majority of approaches to the liar paradox generate new paradoxes that are structurally similar to the liar (often called revenge paradoxes). There is a complex group of issues surrounding revenge paradoxes, the expressive powers of natural languages, and the adequacy of approaches to the liar. My goal is to provide a precise framework against which these issues can be formulated and discussed. The centerpiece of this framework is the notion of internalizability: a semantic theory is internalizable for a language if and only if there exists an extension of the language such that (i) the theory is expressible in that extended language, and (ii) the theory assigns meanings to all the relevant sentences of that extended language. The framework is applied to three examples from the literature: Reinhardt and McGee on theories that require expressively richer metalanguages, Field on revenge-immunity, and Gupta on semantic self-sufficiency.



“Truth, the Liar, and Relativism,” The Philosophical Review 122: 427–510, 2013.


Abstract: I propose a solution to the aletheic paradoxes on which truth predicates are assessment-sensitive. Truth is not an antecedently plausible topic for a semantic relativist treatment; nevertheless, the aletheic paradoxes give us good reason to think that truth is an inconsistent concept, and there are good reasons to think that semantic relativism is appropriate for inconsistent concepts, especially those that display what I call empirical inconsistency. Thus, I show that a promising version of the best approach to the paradoxes is an application of semantic relativism to truth itself, arguing from results about the paradoxes and general considerations about language use to aletheic assessment-sensitivity. The paper is divided into two parts, the first on the aletheic paradoxes, and the second on assessment-sensitivity with respect to truth predicates. The first contains an overview of my preferred approach to the paradoxes, which entails that truth is an inconsistent concept that should be replaced (for certain purposes) by a team of consistent concepts that can do its work without causing troubling paradoxes. The second part considers which treatment is most appropriate for inconsistent concepts in general and truth in particular. In it, I propose an assessment-sensitivity view of truth, discuss some prominent objections to semantic relativism, and review some issues that arise for approaches to the aletheic paradoxes.



“On Richard’s When Truth Gives Out,” (with Stewart Shapiro) Philosophical Studies 160: 455-463, 2012.


Abstract: A discussion of Mark Richard’s When Truth Gives Out.



“Robert Brandom: Inference and Meaning,” in Philosophical Profiles in the Theory of Communication. Edited by Jason Hannon and Robert Rutland. McGill-Queen’s University Press, 2012.


Abstract: This chapter covers some of Robert Brandom’s contributions to our understanding of communication. Topics discussed include his theory of discursive practice, his inferential semantics, his scorekeeping pragmatics, his views on the transmission model of communication, and his semantic perspectivism. I compare his scorekeeping pragmatic theory to other kinds of pragmatic theories, and I argue that his semantic perspectivism can be understood as a global indexical relativism.



“Wilfrid Sellars’ Anti-Descriptivism,” in Categories of Being: Essays on Metaphysics and Logic. Edited by Leila Haaparanta and Heikki Koskinen. Oxford University Press, 2012.


Abstract: The work of Kripke, Putnam, Kaplan, and others initiated a tradition in philosophy that has come to be known as anti-descriptivism. I argue that when properly interpreted, Wilfrid Sellars is a staunch anti-descriptivist. Not only does he accept most of the conclusions drawn by the more famous anti-descriptivists, he goes beyond their critiques to reject the fundamental tenet of descriptivism: that understanding a linguistic expression consists in mentally grasping its meaning and associating that meaning with the expression. I show that Sellars’ alternative accounts of language and the mind provide novel justifications for the anti-descriptivists’ conclusions. Finally, I present what I take to be a Sellarsian analysis of an important anti-descriptivist issue: the relation between metaphysical modal notions (e.g., possibility) and epistemic modal notions (e.g., conceivability). The account I present involves an extension of the strategy he uses to explain both the relation between physical object concepts (e.g., whiteness) and sensation concepts (e.g., the appearance of whiteness), and the relation between concepts that apply to linguistic activity (e.g., sentential meaning) and those that apply to conceptual activity (e.g., thought content).



“Xeno Semantics for Ascending and Descending Truth,” in Foundational Adventures: Essays in Honor of Harvey M. Friedman. Edited by Neil Tennant. Templeton Press (Online) and College Publications, London. 2011.


Abstract: As part of an approach to the liar paradox and the other paradoxes affecting truth, I have proposed replacing our concept of truth with two concepts: ascending truth and descending truth. I am not going to discuss why I think this is the best approach or how it solves the paradoxes; instead, I concentrate on the theory of ascending and descending truth. I formulate an axiomatic theory of ascending truth and descending truth (ADT) and provide a possible-worlds semantics for it (which I dub xeno semantics). Xeno semantics is a generalization of the familiar neighborhood semantics, which itself is a generalization of the standard relational semantics. Once the details of ADT have been presented, it is easy to show that neither relational semantics nor neighborhood semantics will work for it; thus, the move to a more general framework is required. The main result is a fixed point theorem that guarantees the existence of an acceptable first-order constant-domain xeno model. From this result it follows that ADT is sound with respect to the class of such models. The upshot is that ADT is consistent relative to the background set theory.



“Falsity,” in New Waves in Truth. Edited by Cory Wright and Nicolaj Pedersen. New York: Palgrave Macmillan, 2010.


Abstract: Although there is a massive amount of work on truth, there is very little work on falsity. Most philosophers probably think this is appropriate; after all, once we have a solid understanding of truth, falsity should not prove to be much of a challenge. However, there are several interesting and difficult issues associated with understanding falsity. After considering two prominent definitions of falsity and presenting objections to each one, I propose a definition that avoids their problems.



“Truth’s Savior? Critical Study of Field’s Saving Truth From Paradox,” The Philosophical Quarterly 60: 183-188, 2010.


Abstract: Hartry Field’s book Saving Truth From Paradox (OUP, 2008) offers an excellent survey of approaches to the liar paradox and a compelling defense of the paracomplete approach. In this critical study, I sketch Field’s approach and present several problems for it.



“Truth and Expressive Completeness,” in Reading Brandom. Edited by Bernhard Weiss and Jeremy Wanderer. Routledge, 2009.


Abstract: Robert Brandom claims that the theory of meaning he presents in Making It Explicit is expressively complete; i.e., it successfully applies to the language in which the theory of meaning is formulated. He also endorses a broadly Kripkean approach to the liar paradox. I show that these two commitments are incompatible, and I survey several options for resolving the problem.



“Locke’s Theory of Reflection,” British Journal for the History of Philosophy 16.1: 25-63, 2008.


Abstract: Those concerned with Locke’s Essay have largely ignored his account of reflection. I present and defend an interpretation of Locke’s theory of reflection on which reflection is not a variety of introspection; rather, for Locke, we acquire ideas of our mental operations indirectly. Furthermore, reflection is involuntary and distinct from consciousness. The interpretation I present also explains reflection’s role in the acquisition of non-sensory ideas (e.g., ideas of pleasure, existence, succession, etc.). I situate this reading within the secondary literature on reflection and discuss its consequences for interpretations of Locke’s views on empiricism, knowledge, and personal identity.



“Aletheic Vengeance,” in The Revenge of the Liar: New Essays on the Paradox. Edited by JC Beall. Oxford University Press, 2008.


Abstract: One of the most frustrating and ubiquitous features of approaches to the liar paradox is that they tend to give rise to new paradoxes, which are called revenge paradoxes. I argue that there are two distinct kinds of revenge paradoxes. Once these two kinds are distinguished, one can argue that any theory of truth offering an approach to the liar paradox on which some basic inference rules governing truth are valid is either inconsistent, self-refuting, or restricted so as to avoid a revenge paradox. That is, there are no revenge-immune theories of truth that validate these rules. Moreover, this fact can be used to justify theories of truth on which truth is an inconsistent concept, where an inconsistent concept has incompatible rules governing the way in which it should be employed. I offer three arguments for theories of truth that imply that truth is an inconsistent concept, and I present an overview of the theory I endorse (which is not a version of dialetheism).



“Replacing Truth,” Inquiry 50: 606-621, 2007.


Abstract: Of the dozens of purported solutions to the liar paradox published in the past fifty years, the vast majority are “traditional” in the sense that they reject one of the premises or inference rules that are used to derive the paradoxical conclusion. Over the years, however, several philosophers have developed an alternative to the traditional approaches; according to them, our very competence with the concept of truth leads us to accept that the reasoning used to derive the paradox is sound. That is, our conceptual competence leads us into inconsistency. I call this alternative the inconsistency approach to the liar. Although this approach has many positive features, I argue that several of the well-developed versions of it that have appeared recently are unacceptable. In particular, they do not recognize that if truth is an inconsistent concept, then we should replace it with new concepts that do the work of truth without giving rise to paradoxes. I outline an inconsistency approach to the liar paradox that satisfies this condition.



“Scorekeeping in a Defective Language Game,” Pragmatics and Cognition 13: 203-226, 2005 (a special issue devoted to Robert Brandom’s Making It Explicit).


Abstract: One common criticism of deflationism is that it does not have the resources to explain defective discourse (e.g., vagueness, referential indeterminacy, confusion, etc.). This problem is especially pressing for someone like Robert Brandom, who not only endorses deflationist accounts of truth, reference, and predication, but also refuses to use representational relations to explain content and propositional attitudes. To address this problem, I suggest that Brandom should explain defective discourse in terms of what it is to treat some portion of discourse as defective. To illustrate this strategy, I present an extension of his theory of content and use it to provide an explanation of confusion. The result is a theory of confusion based on Joseph Camp’s recent treatment. The extension of Brandom’s theory of content involves additions to his account of scorekeeping that allow members of a discursive practice to accept different standards of inferential correctness.



“Communication and Content: Circumstances and Consequences of the Habermas-Brandom Debate,” International Journal of Philosophical Studies 11: 43-61, 2003.


Reprinted in: Habermas II. Edited by David Rasmussen and James Swindal. Sage Publications, 2009.


Abstract: The recent exchange between Robert Brandom and Jürgen Habermas provides an opportunity to compare and contrast some aspects of their systems. Both present broadly inferential accounts of meaning, according to which the content of an expression is determined by its role in an inferential network. Several problems confront such theories of meaning, one of which threatens the possibility of communication because content is relative to an individual’s set of beliefs. Brandom acknowledges this problem and provides a solution to it. The point of this paper is to argue that it arises for Habermas’s theory as well. I then present several solutions Habermas could adopt and evaluate their feasibility. The result is that Habermas must alter his theory of communicative action by contextualizing the standards for successful communication.






Review of Amie Thomasson’s Norms and Necessity (with Matthew Chrisman), Mind, Jan 2022.

Review of Graham Priest, Doubt Truth to be a Liar, Bulletin of Symbolic Logic 13: 541-544, 2007.





“Interview with Kevin Scharp,” Marco Grossi, Rivista Italiana di Filosofia Analitica Junior 9: 12-20, 2018.
