The philosophical approach of this volume is mainly structuralist, using logical tools to investigate the formal structure of various kinds of objects in our world, as characterised by language and as systematised by philosophy. This volume mainly analyses the structural properties of collections or pluralities (with applications to the philosophy of set theory), homogeneous objects like water, and the semantics and philosophy of events. This book thereby complements algebraic work that has been done on other philosophical entities, such as propositions, properties, relations, or situations. Located in the triangle of language, logic, and philosophy, this volume is unique in combining the resources of different fields in an interdisciplinary enterprise. Half of the fourteen chapters of this volume are original papers, complementing the collection of the author's previously published essays on the subject.
This exemplary volume shows how the shared interests of three different research areas can lead to significant and fruitful exchanges: six papers each very accessibly present an exciting contribution to the study and uses of algebras, diagrams, and decisions, ranging from indispensable overview papers about shared formal members to inspirational applications of formal tools to specific problems. Contributors include Pieter Adriaans, Sergei Artemov, Steven Givant, Edward Keenan, Almerindo Ojeda, Patrick Scotto di Luzio, and Edward Stabler.
Robert B. Brandom is one of the most original philosophers of our day, whose book Making It Explicit covered and extended a vast range of topics in metaphysics, epistemology, and philosophy of language--the very core of analytic philosophy. This new work provides an approachable introduction to the complex system that Making It Explicit mapped out. A tour of the earlier book's large ideas and relevant details, Articulating Reasons offers an easy entry into two of the main themes of Brandom's work: the idea that the semantic content of a sentence is determined by the norms governing inferences to and from it, and the idea that the distinctive function of logical vocabulary is to let us make our tacit inferential commitments explicit.
Brandom's work, making the move from representationalism to inferentialism, constitutes a near-Copernican shift in the philosophy of language--and the most important single development in the field in recent decades. Articulating Reasons puts this accomplishment within reach of nonphilosophers who want to understand the state of the foundations of semantics.
Table of Contents:
1. Semantic Inferentialism and Logical Expressivism 2. Action, Norms, and Practical Reasoning 3. Insights and Blindspots of Reliabilism 4. What Are Singular Terms, and Why Are There Any? 5. A Social Route from Reasoning to Representing 6. Objectivity and the Normative Fine Structure of Rationality
"Displaying a sovereign command of the intricate discussion in the analytic philosophy of language, Brandom manages successfully to carry out a program within the philosophy of language that has already been sketched by others, without losing sight of the vision inspiring the enterprise in the important details of his investigation. Using the tools of a complex theory of language, Brandom succeeds in describing convincingly the practices in which the reason and autonomy of subjects capable of speech and action are expressed." --Jürgen Habermas
Because of the ease of their implementations, attribute-value based theories of grammar are becoming increasingly popular in theoretical linguistics as an alternative to transformational accounts, as well as in computational linguistics. Mark Johnson provides a formal analysis of attribute-value structures, of their use in a theory of grammar, of the representation of grammatical relations in such theories of grammar, and of the implications of different representations. A classical treatment of disjunction and negation is also included.
"Essential reading for anyone interested in recent unification-based approaches to grammar. Johnson lucidly lays out a formal framework in which a sharp distinction is drawn between descriptions of linguistic objects and the objects themselves. Negation and disjunction over complex features, though linguistically desirable, have given rise to many problems, and one of Johnson's main achievements is to show that they can be interpreted using classical logic." —Ewan Klein, University of Edinburgh
MARK JOHNSON is assistant professor of cognitive and linguistic sciences at Brown University.
Although John Dewey is celebrated for his work in the philosophy of education and acknowledged as a leading proponent of American pragmatism, he might also have enjoyed more of a reputation for his philosophy of logic had Bertrand Russell not attacked him so fervently on the subject. In Dewey's New Logic, Tom Burke analyzes the debate between Russell and Dewey that followed the 1938 publication of Dewey's Logic: The Theory of Inquiry. Here, he argues that Russell failed to understand Dewey's logic as Dewey intended, and despite Russell's resistance, Dewey's logic is surprisingly relevant to recent developments in philosophy and cognitive science.
Burke demonstrates that Russell misunderstood crucial aspects of Dewey's theory and contends that logic today has progressed beyond Russell and is approaching Dewey's broader perspective.
"[This] book should be of substantial interest not only to Dewey scholars and other historians of twentieth-century philosophy, but also to devotees of situation theory, formal semantics, philosophy of mind, cognitive science, and Artificial Intelligence."—Georges Dicker, Transactions of the C.S. Peirce Society
"No scholar, thus far, has offered such a sophisticated and detailed version of central themes and contentions in Dewey's Logic. This is a pathbreaking study."—John J. McDermott, editor of The Philosophy of John Dewey
McCawley supplements his earlier book—which covers such topics as presuppositional logic, the logic of mass terms and nonstandard quantifiers, and fuzzy logic—with new material on the logic of conditional sentences, linguistic applications of type theory, Anil Gupta's work on principles of identity, and the generalized quantifier approach to the logical properties of determiners.
This original study considers the effects of language and meaning on the brain. Jens Erik Fenstad—an expert in the fields of recursion theory, nonstandard analysis, and natural language semantics—combines current formal semantics with a geometric structure in order to trace how common nouns, properties, natural kinds, and attractors link with brain dynamics.
The application of logic to grammar is a fundamental issue in philosophy and has been investigated by such renowned philosophers as Leibniz, Bolzano, Frege, and Husserl. Language and Grammar examines categorial grammars and type-logical grammars, two linguistic theories that play a significant role in this area of study yet have been overshadowed until recently. The prominent scholars contributing to this volume also explore the impact of the Lambek program on linguistics and logical grammar, producing, ultimately, an exciting and important resource that demonstrates how type-logical grammars are promising future models of reasoning and computation.
Logic and Representation brings together a collection of essays, written over a period of ten years, that apply formal logic and the notion of explicit representation of knowledge to a variety of problems in artificial intelligence, natural language semantics, and the philosophy of mind and language. Particular attention is paid to modeling and reasoning about knowledge and belief, including reasoning about one's own beliefs, and the semantics of sentences about knowledge and belief.
Robert C. Moore begins by exploring the role of logic in artificial intelligence, considering logic as an analytical tool, as a basis for reasoning systems, and as a programming language. He then looks at various logical analyses of propositional attitudes, including possible-world models, syntactic models, and models based on Russellian propositions. Next Moore examines autoepistemic logic, a logic for modeling reasoning about one's own beliefs. Rounding out the volume is a section on the semantics of natural language, including a survey of problems in semantic representation; a detailed study of the relations among events, situations, and adverbs; and a presentation of a unification-based approach to semantic interpretation.
Robert C. Moore is principal scientist of the Artificial Intelligence Center of SRI International.
The fields of logic, linguistics, and computer science are intimately related, and modern research has uncovered a wide range of connections. This collection focuses on work based on the unifying concept of information. Its nineteen papers cover subjects such as channel theory, presupposition and constraints, the modeling of discourse, and belief; all were presented at the 1996 Conference on Information-Theoretic Approaches to Logic, Language, Information, and Computation.
Muskens radically simplifies Montague Semantics and generalises the theory by basing it on a partial higher-order logic, resulting in a theory that combines important aspects of Montague Semantics and Situation Semantics. Richard Montague formulated the revolutionary insight that we can understand the concept of meaning in ordinary languages much in the same way as we understand the semantics of logical languages. Unfortunately, he formalised his idea in an unnecessarily complex way. The present work does away with unnecessary complexities, obtains a streamlined version of the theory, shows how partialising the theory automatically provides us with the most central concepts of Situation Semantics, and offers a simple logical treatment of propositional attitude verbs, perception verbs, and proper names.
Poetic Interaction presents an original approach to the history of philosophy in order to elaborate a fresh theory that accounts for the place of freedom in the Western philosophical tradition. In his thorough analysis of the aesthetic theories of Hegel, Heidegger, and Kant, John McCumber shows that the interactionist perspective recently put forth by Jürgen Habermas was in fact already present in some form in the German Enlightenment and in Heidegger's hermeneutic phenomenology. McCumber's historical placement of the interactionist perspective runs counter both to Habermas's own views and to those of scholars who would locate the origin of these developments in American pragmatism. From the metaphysical approaches of Plato and Aristotle to the interactionist approaches of Habermas and Albrecht Wellmer, McCumber provides an original narrative of the history of philosophy that focuses on the ways each thinker has formulated the relationships between language, truth, and freedom. Finally, McCumber presents his critical demarcation of various forms of freedom to reveal that the interactionist approach must be expanded and enlarged to include all that is understood by "poetic interaction." For McCumber, freedom is inherently pluralistic. Poetic Interaction will be invaluable to political philosophers, historians of philosophy, philosophers of language, and scholars of legal criticism.
In this volume, Maria Cerezo examines Wittgenstein's Tractatus Logico-Philosophicus as a response to some of Frege's and Russell's logical problems. In analyzing the tractarian conditions for the possibility of language, she explains the two main theories of the proposition in the Tractatus: the truth-functions theory and the picture theory. Cerezo shows that Wittgenstein initially separates the account of the structure of a proposition from the explanation of its expression. However, contrary to his intention, the combination of these theories creates new difficulties, since the requirements of each theory cannot be fully respected by the other. Cerezo also argues that Wittgenstein's theory of language cannot be fully understood unless attention is paid to his theory of expression and his doctrine of projection by the metaphysical subject.
No single work is more responsible for the heightened interest in argumentation and informal reasoning—and their relation to ethics and jurisprudence in the late twentieth century—than Chaïm Perelman and Lucie Olbrechts-Tyteca’s monumental study of argumentation, La Nouvelle Rhétorique: Traité de l'Argumentation. Published in 1958 and translated into English as The New Rhetoric in 1969, this influential volume returned the study of reason to classical concepts of rhetoric. In The Promise of Reason: Studies in The New Rhetoric, leading scholars of rhetoric Barbara Warnick, Jeanne Fahnestock, Alan G. Gross, Ray D. Dearin, and James Crosswhite are joined by prominent and emerging European and American scholars from different disciplines to demonstrate the broad scope and continued relevance of The New Rhetoric more than fifty years after its initial publication.
Divided into four sections—Conceptual Understandings of The New Rhetoric, Extensions of The New Rhetoric, The Ethical Turn in Perelman and The New Rhetoric, and Uses of The New Rhetoric—this insightful volume covers a wide variety of topics. It includes general assessments of The New Rhetoric and its central concepts, as well as applications of those concepts to innovative areas in which argumentation is being studied, such as scientific reasoning, visual media, and literary texts. Additional essays compare Perelman’s ideas with those of other significant thinkers like Kenneth Burke and Richard McKeon, explore his career as a philosopher and activist, and shed new light on Perelman and Olbrechts-Tyteca’s collaboration. Two contributions present new scholarship based on recent access to letters, interviews, and archival materials housed in the Université Libre de Bruxelles. Among the volume’s unique gifts is a personal memoir from Perelman’s daughter, Noémi Perelman Mattis, published here for the first time.
The Promise of Reason, expertly compiled and edited by John T. Gage, is the first to investigate the pedagogical implications of Perelman and Olbrechts-Tyteca’s groundbreaking work and will lead the way to the next generation of argumentation studies.
These papers treat those issues involved in formulating a logic of propositional attitudes and consider the relevance of the attitudes to the continuing study of both the philosophy of language and the philosophy of mind. C. Anthony Anderson is professor of philosophy and Joseph Owens is assistant professor of philosophy, both at the University of Minnesota.
This volume is an outgrowth of the second Workshop on Logic, Language and Computation held at Stanford in the spring of 1993. The workshop brought together researchers interested in natural language to discuss the current state of the art at the borderline of logic, linguistics and computer science. The papers in this collection fall into three central research areas of the nineties, namely quantifiers, deduction, and context. Each contribution reflects an ever-growing interest in a more dynamic approach to meaning, which focuses on inference patterns and the interpretation of sentences in the context of a larger discourse. The papers apply either current logical machinery - such as linear logic, generalised quantifier theory, dynamic logic - or formal analyses of the notion of context in discourse to classical linguistic issues, with original and thought-provoking results deserving of a wide audience.
Why are diagrams sometimes so useful, facilitating our understanding and thinking, while at other times they can be unhelpful and even misleading? Drawing on a comprehensive survey of modern research in philosophy, logic, artificial intelligence, cognitive psychology, and graphic design, Semantic Properties of Diagrams and Their Cognitive Potentials reveals the systematic reasons for this dichotomy, showing that the cognitive functions of diagrams are rooted in the characteristic ways they carry information. In analyzing the logical mechanisms behind the relative efficacy of diagrammatic representation, Atsushi Shimojima provides deep insight into the crucial question: What makes a diagram a diagram?
Situation theory and situation semantics are recent approaches to language and information, approaches first formulated by Jon Barwise and John Perry in Situations and Attitudes (1983). The present volume collects some of Barwise's papers written since then, those directly concerned with relations between logic, situation theory, and situation semantics. Several papers appear here for the first time.
JON BARWISE is director of the Symbolic Systems Program and professor of philosophy at Stanford University and a researcher at CSLI.
Situation Theory grew out of attempts by Jon Barwise in the late 1970s to provide semantics for "naked-infinitive" perceptual reports such as 'Claire saw Jon run'. Barwise's intuition was that Claire didn't just see Jon, an individual, but Jon doing something, a situation. Situations are individuals having properties and standing in relations. A theory of situations would allow us to study and compare various types of situations or situation-like entities, such as facts, events and scenes.
One of the central themes of situation theory is that a theory of meaning and reference should be set within a general theory of information, one moreover that is rich enough to do justice to perception, communication and thought. By now many people have contributed to the development and application of situation theory, constrained by the need to account for certain kinds of semantic phenomena, and by the need to give a rigorous mathematical account of the principles of information that underwrite the theory.
This volume presents work that evolved out of the First Conference on Situation Theory and Its Applications held by CSLI at Asilomar, California, in March 1989. The nineteen papers included here fall into three categories. Those in Part I explore logical and mathematical issues that arise within situation theory. The papers in Part II connect situation theory with other approaches to logical issues, while those in Part III apply various versions of situation theory to a number of linguistic issues.
Our knowledge about the world is often expressed by generic sentences, yet their meanings are far from clear. This book provides answers to central problems concerning generics: what do they mean? Which factors affect their interpretation? How can one reason with generics? Cohen proposes that the meanings of generics are probability judgments, and shows how this view accounts for many of their puzzling properties, including lawlikeness. Generics are evaluated with respect to alternatives. Cohen argues that alternatives are induced by the kind as well as by the predicated property, and thus provides a uniform account of the varied interpretations of generics. He studies the formal properties of alternatives and provides a compositional account of their derivation by focus and presupposition. Cohen uses his semantics of generics to provide a formal characterization of adequate default reasoning, and proves some desirable results of this formalism.
Thinking and Being
Irad Kimhi. Harvard University Press, 2018. Library of Congress B808.5.K56 2018 | Dewey Decimal 128.33
Opposing a long-standing orthodoxy of the Western philosophical tradition running from ancient Greek thought until the late nineteenth century, Frege argued that psychological laws of thought—those that explicate how we in fact think—must be distinguished from logical laws of thought—those that formulate and impose rational requirements on thinking. Logic does not describe how we actually think, but only how we should. Yet by thus sundering the logical from the psychological, Frege was unable to explain certain fundamental logical truths, most notably the psychological version of the law of non-contradiction—that one cannot think a thought and its negation simultaneously.
Irad Kimhi’s Thinking and Being marks a radical break with Frege’s legacy in analytic philosophy, exposing the flaws of his approach and outlining a novel conception of judgment as a two-way capacity. In closing the gap that Frege opened, Kimhi shows that the two principles of non-contradiction—the ontological principle and the psychological principle—are in fact aspects of the very same capacity, differently manifested in thinking and being.
As his argument progresses, Kimhi draws on the insights of historical figures such as Aristotle, Kant, and Wittgenstein to develop highly original accounts of topics that are of central importance to logic and philosophy more generally. Self-consciousness, language, and logic are revealed to be but different sides of the same reality. Ultimately, Kimhi’s work elucidates the essential sameness of thinking and being that has exercised Western philosophy since its inception.
This collection of articles and review essays, including many hard to find pieces, comprises the most important and fundamental studies of Indian logic and linguistics ever undertaken.
Frits Staal is concerned with four basic questions: Are there universals of logic that transcend culture and time? Are there universals of language and linguistics? What is the nature of Indian logic? And what is the nature of Indian linguistics? By addressing these questions, Staal demonstrates that, contrary to the general assumption among Western philosophers, the classical philosophers of India were rationalists, attentive to arguments. They were in this respect unlike contemporary Western thinkers inspired by existentialism or hermeneutics, and like the ancient Chinese, Greeks, and many medieval European schoolmen, only—as Staal says—more so. Universals establishes that Asia's contributions are not only compatible with what has been produced in the West, but a necessary ingredient and an essential component of any future human science.