Collected Papers of Martin Kay
A Half Century of Computational Linguistics
Martin Kay, with the editorial assistance of Dan Flickinger and Stephan Oepen
CSLI, 2010
Since the dawn of the age of computers, researchers have been pushing the limits of available processing power to tackle the formidable challenge of developing software that can understand ordinary human language. At the forefront of this quest for the past fifty years, Martin Kay has been a constant source of new algorithms that have proven fundamental to progress in computational linguistics. Collected Papers of Martin Kay, the first comprehensive collection of his works to date, opens a window into the growth of an increasingly important field of scientific research and development.

The Creativity Code
Art and Innovation in the Age of AI
Marcus du Sautoy
Harvard University Press, 2020

“A brilliant travel guide to the coming world of AI.”
—Jeanette Winterson


What does it mean to be creative? Can creativity be trained? Is it uniquely human, or could AI be considered creative?

Mathematical genius and exuberant polymath Marcus du Sautoy plunges us into the world of artificial intelligence and algorithmic learning in this essential guide to the future of creativity. He considers the role of pattern and imitation in the creative process and sets out to investigate the programs and programmers—from DeepMind and the Flow Machine to Botnik and WHIM—who are seeking to rival or surpass human innovation in gaming, music, art, and language. A thrilling tour of the landscape of invention, The Creativity Code explores the new face of creativity and the mysteries of the human code.

“As machines outsmart us in ever more domains, we can at least comfort ourselves that one area will remain sacrosanct and uncomputable: human creativity. Or can we?…In his fascinating exploration of the nature of creativity, Marcus du Sautoy questions many of those assumptions.”
—Financial Times

“Fascinating…If all the experiences, hopes, dreams, visions, lusts, loves, and hatreds that shape the human imagination amount to nothing more than a ‘code,’ then sooner or later a machine will crack it. Indeed, du Sautoy assembles an eclectic array of evidence to show how that’s happening even now.”
—The Times


Language and Learning for Robots
Colleen Crangle and Patrick Suppes
CSLI, 1994
Robot technology will find wide-scale use only when a robotic device can be given commands and taught new tasks in a natural language. How could a robot understand instructions expressed in English? How could a robot learn from instructions? Crangle and Suppes begin to answer these questions through a theoretical approach to language and learning for robots and by experimental work with robots.

The authors develop the notion of an instructable robot—one which derives its intelligence in part from interaction with humans. Since verbal interaction with a robot requires a natural language semantics, the authors propose a natural-model semantics which they then apply to the interpretation of robot commands. Two experimental projects are described which provide natural-language interfaces to robotic aids for the physically disabled. The authors discuss the specific challenges posed by the interpretation of "stop" commands and the interpretation of spatial prepositions.

The authors also examine the use of explicit verbal instruction to teach a robot new procedures, propose ways a robot can learn from corrective commands containing qualitative spatial expressions, and discuss machine learning of the natural language used to instruct a robot in the performance of simple physical tasks. Two chapters focus on probabilistic techniques in learning.

Language and the Rise of the Algorithm
Jeffrey M. Binder
University of Chicago Press, 2022
A wide-ranging history of the algorithm.

Bringing together the histories of mathematics, computer science, and linguistic thought, Language and the Rise of the Algorithm reveals how recent developments in artificial intelligence are reopening an issue that troubled mathematicians well before the computer age: How do you draw the line between computational rules and the complexities of making systems comprehensible to people? By attending to this question, we come to see that the modern idea of the algorithm is implicated in a long history of attempts to maintain a disciplinary boundary separating technical knowledge from the languages people speak day to day.
 
Here Jeffrey M. Binder offers a compelling tour of four visions of universal computation that addressed this issue in very different ways: G. W. Leibniz’s calculus ratiocinator; a universal algebra scheme Nicolas de Condorcet designed during the French Revolution; George Boole’s nineteenth-century logic system; and the early programming language ALGOL, short for algorithmic language. These episodes show that symbolic computation has repeatedly become entangled in debates about the nature of communication. Machine learning, in its increasing dependence on words, erodes the line between technical and everyday language, revealing the urgent stakes underlying this boundary.
 
The idea of the algorithm is a levee holding back the social complexity of language, and it is about to break. This book is about the flood that inspired its construction.

The Myth of Artificial Intelligence
Why Computers Can’t Think the Way We Do
Erik J. Larson
Harvard University Press, 2021

“Exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it.”
—John Horgan


“If you want to know about AI, read this book…It shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence.”
—Peter Thiel

Ever since Alan Turing, AI enthusiasts have equated artificial intelligence with human intelligence. A computer scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape of AI to reveal why this is a profound mistake.

AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don’t correlate data sets. We make conjectures, informed by context and experience. And we haven’t a clue how to program that kind of intuitive reasoning, which lies at the heart of common sense. Futurists insist AI will soon eclipse the capacities of the most gifted mind, but Larson shows how far we are from superintelligence—and what it would take to get there.

“Larson worries that we’re making two mistakes at once, defining human intelligence down while overestimating what AI is likely to achieve…Another concern is learned passivity: our tendency to assume that AI will solve problems and our failure, as a result, to cultivate human ingenuity.”
—David A. Shaywitz, Wall Street Journal

“A convincing case that artificial general intelligence—machine-based intelligence that matches our own—is beyond the capacity of algorithmic machine learning because there is a mismatch between how humans and machines know what they know.”
—Sue Halpern, New York Review of Books


Passwords
Philology, Security, Authentication
Brian Lennon
Harvard University Press, 2018

Cryptology, the mathematical and technical science of ciphers and codes, and philology, the humanistic study of natural or human languages, are typically understood as separate domains of activity. But Brian Lennon contends that these two domains, both concerned with authentication of text, should be viewed as contiguous. He argues that computing’s humanistic applications are as historically important as its mathematical and technical ones. What is more, these humanistic uses, no less than cryptological ones, are marked and constrained by the priorities of security and military institutions devoted to fighting wars and decoding intelligence.

Lennon’s history encompasses the first documented techniques for the statistical analysis of text, early experiments in mechanized literary analysis, electromechanical and electronic code-breaking and machine translation, early literary data processing, the computational philology of late twentieth-century humanities computing, and early twenty-first-century digital humanities. Throughout, Passwords makes clear the continuity between cryptology and philology, showing how the same practices flourish in literary study and in conditions of war.

Lennon emphasizes the convergence of cryptology and philology in the modern digital password. Like philologists, hackers use computational methods to break open the secrets coded in text. One of their preferred tools is the dictionary, that preeminent product of the philologist’s scholarly labor, which supplies the raw material for computational processing of natural language. Thus does the historic overlap of cryptology and philology persist in an artifact of computing—passwords—that many of us use every day.


Personal Knowledge Graphs (PKGs)
Methodology, tools and applications
Sanju Tiwari
The Institution of Engineering and Technology, 2023
Since the development of the semantic web, knowledge graphs (KGs) have been used by search engines, knowledge engines, question-answering services, and social networks. A knowledge graph, also known as a semantic network, represents a network of real-world entities—such as objects, events, situations, or concepts—and the relationships between them. This information is usually stored in a graph database and visualized as a graph structure, hence the term "knowledge graph". Knowledge graphs structure information about entities, their properties, and the relations between them.
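The entity–property–relation structure described above is commonly modeled as a set of subject–predicate–object triples. The following minimal sketch (illustrative only; the class, entity names, and methods are hypothetical, not drawn from the book) shows the idea:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy knowledge graph storing subject-predicate-object triples."""

    def __init__(self):
        self.triples = set()
        # Index from each entity to its outgoing (predicate, object) pairs.
        self.index = defaultdict(set)

    def add(self, subject, predicate, obj):
        """Record one fact as a triple and index it by subject."""
        self.triples.add((subject, predicate, obj))
        self.index[subject].add((predicate, obj))

    def about(self, subject):
        """Return all (predicate, object) pairs known for an entity."""
        return self.index[subject]

# Hypothetical example entities and relations:
kg = KnowledgeGraph()
kg.add("Ada Lovelace", "occupation", "mathematician")
kg.add("Ada Lovelace", "collaborated_with", "Charles Babbage")
print(sorted(kg.about("Ada Lovelace")))
```

Real systems typically store such triples in a dedicated graph database and use a standard model such as RDF, but the underlying representation is the same: a network of entities linked by named relations.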

Putting Linguistics into Speech Recognition
The Regulus Grammar Compiler
Manny Rayner, Beth Ann Hockey, and Pierrette Bouillon
CSLI, 2006
Most computer programs that analyze spoken dialogue use a spoken command grammar, which limits what the user can say when talking to the system. To make building such grammars simpler, more automated, and effective even in the initial stages of a project, the Regulus grammar compiler was developed by a consortium of experts—including NASA scientists. This book presents a complete description of both the practical and theoretical aspects of Regulus and will be extremely helpful for students and scholars working in computational linguistics as well as software engineering.

The Tbilisi Symposium on Logic, Language and Computation
Selected Papers
Edited by Jonathan Ginzburg, Zurab Khasidashvili, Carl Vogel, Jean-Jacques Lévy,
CSLI, 1998
This volume brings together papers from linguists, logicians, and computer scientists from thirteen countries (Armenia, Denmark, France, Georgia, Germany, Israel, Italy, Japan, Poland, Spain, Sweden, UK, and USA). This collection aims to serve as a catalyst for new interdisciplinary developments in language, logic and computation and to introduce new ideas from the expanded European academic community. Spanning a wide range of disciplines, the papers cover such topics as formal semantics of natural language, dynamic semantics, channel theory, formal syntax of natural language, formal language theory, corpus-based methods in computational linguistics, computational semantics, syntactic and semantic aspects of λ-calculus, non-classical logics, and a fundamental problem in predicate logic.

