Editor Pieter Verdegem was shortlisted for best edited collection in the 2022 MeCCSA outstanding achievement awards. See https://uwestminsterpress.blog/2022/08/18/original-and-timely-uwp-title-shortlisted-for-major-academic-book-prize/ for more details.
We are entering a new era of technological determinism and solutionism in which governments and business actors are seeking data-driven change, assuming that Artificial Intelligence is now inevitable and ubiquitous. But we have not even started asking the right questions, let alone developed an understanding of the consequences. Urgently needed is debate that asks and answers fundamental questions about power. This book brings together critical interrogations of what constitutes AI, its impact and its inequalities in order to offer an analysis of what it means for AI to deliver benefits for everyone. The book is structured in three parts: Part 1, AI: Humans vs. Machines, presents critical perspectives on human-machine dualism. Part 2, Discourses and Myths About AI, excavates metaphors and policies to ask normative questions about what is ‘desirable’ AI and what conditions make this possible. Part 3, AI Power and Inequalities, discusses how the implementation of AI creates important challenges that urgently need to be addressed. Bringing together scholars from diverse disciplinary backgrounds and regional contexts, this book offers a vital intervention on one of the most hyped concepts of our times.
Based on a series of popular courses and workshops that Professor Patrick Barry has created for students, professionals, and anyone else interested in taking a skills-based approach to artificial intelligence, this book gives you a chance to engage with important AI concepts, experiment with exploratory AI exercises, and then ultimately develop your own customized list of AI tools to try as well as AI traps to avoid.
Through algorithms and artificial intelligence (AI), objects and digital services now demonstrate capabilities they did not have before, up to and including the replacement of human activity, whether through pre-programming or through decisions the systems make themselves. As part of the internet of things, AI applications are already in widespread use today, for example in language processing, image recognition, and the tracking and processing of data.
This policy brief illustrates the potential negative and positive impacts of AI and reviews related policy strategies adopted by the UK, the US, the EU, Canada, and China. Based on an ethical approach that considers the role of AI from a democratic perspective and in light of the public interest, the authors make policy recommendations that help to strengthen the positive impact of AI and to mitigate its negative consequences.
A revealing genealogy of image-recognition techniques and technologies
Today’s most advanced neural networks and sophisticated image-analysis methods come from 1950s and ’60s Cold War culture—and many biases and ways of understanding the world from that era persist along with them. Aerial surveillance and reconnaissance shaped all of the technologies that we now refer to as computer vision, including facial recognition. The Birth of Computer Vision uncovers these histories and finds connections between the algorithms, people, and politics at the core of automating perception today.
James E. Dobson reveals how new forms of computerized surveillance systems, high-tech policing, and automated decision-making systems have become entangled, functioning together as a new technological apparatus of social control. Tracing the development of a series of important computer-vision algorithms, he uncovers the ideas, worrisome military origins, and lingering goals reproduced within the code and the products based on it, examining how they became linked to one another and repurposed for domestic and commercial uses. Dobson includes analysis of the Shakey Project, which produced the first semi-autonomous robot, and the impact of student protest in the early 1970s at Stanford University, as well as recovering the computer vision–related aspects of Frank Rosenblatt’s Perceptron as the crucial link between machine learning and computer vision.
Motivated by the ongoing use of these major algorithms and methods, The Birth of Computer Vision chronicles the foundations of computer vision and artificial intelligence, its major transformations, and the questionable legacy of its origins.
Cover alt text: Two overlapping circles in cream and violet, with black background. Top is a printed circuit with camera eye; below a person at a 1977 computer.
Covid-19 has highlighted limitations in our democratic politics – but also lessons for how to deepen our democracy and more effectively respond to future crises. In the face of an emergency, the working assumption all too often is that only a centralised, top-down response is possible. This book exposes the weakness of this assumption, making the case for deeper participation and deliberation in times of crisis. During the pandemic, mutual aid and self-help groups have stepped in to meet otherwise unmet needs. And forward-thinking organisations have shown that listening to and working with diverse social groups leads to more inclusive outcomes.
Participation and deliberation are not just possible in an emergency. They are valuable, perhaps even indispensable.
This book draws together a diverse range of voices of activists, practitioners, policy makers, researchers and writers. Together they make visible the critical role played by participation and deliberation during the pandemic and make the case for enhanced engagement during and beyond emergency contexts.
Another, more democratic world can be realised in the face of a crisis. The contributors to this book offer us meaningful insights into what this could look like.
A revealing and surprising origin story, showing how attempts to render human speech and language computable led from the era of big data to today’s AI.
Since the advent of computers, society has fantasized about conversing with machines. In this eye-opening book, technology expert Xiaochang Li shows readers how that dream both fueled the demand for data and set the stage for today’s generative AI. With original research and clear explanations, Li elucidates the origins of what’s known as natural language processing (NLP) and the heated twentieth-century debates between computer scientists, linguists, and communication engineers that shaped today’s technology. Starting with early devices that recorded, analyzed, and attempted to interpret human speech, she demonstrates how computer speech recognition, particularly efforts led by Bell Labs and IBM, advanced technology by deemphasizing linguistic meaning in favor of statistical prediction. In other words, researchers gradually abandoned systems that sought to understand human language, opting instead for work-arounds that simply predicted patterns in speech and text data. That solution became incredibly and surprisingly adaptable. As Li reveals, transforming linguistic questions into engineering ones ushered in the routine operation of search engines, spam filters, and the varied content sorting and recommendation mechanisms that regulate the access, circulation, and legitimacy of information across every platform. But this has all come at the cost of forever requiring copious and ever-growing amounts of new data.
At its core, Divination Engines illuminates how the artifacts of human communication—speech, text, and images—have become both the fodder for and products of computers. This connection between communication and computation, Li shows, has given rise to data-driven analytics, machine learning, and today’s algorithmic culture.
A timely investigation of the potential economic effects, both realized and unrealized, of artificial intelligence within the United States healthcare system.
In sweeping conversations about the impact of artificial intelligence on many sectors of the economy, healthcare has received relatively little attention. Yet it seems unlikely that an industry that represents nearly one-fifth of the economy could escape the efficiency- and cost-driven disruptions of AI.
The Economics of Artificial Intelligence: Health Care Challenges brings together contributions from health economists, physicians, philosophers, and scholars in law, public health, and machine learning to identify the primary barriers to entry of AI in the healthcare sector. Across original papers and in wide-ranging responses, the contributors analyze barriers of four types: incentives, management, data availability, and regulation. They also suggest that AI has the potential to improve outcomes and lower costs. Understanding both the benefits of and barriers to AI adoption is essential for designing policies that will affect the evolution of the healthcare system.
A thought-provoking examination of how AI might either spur or harm human economic progress.
What happens to an economy when machines can think as well as, or even better than, humans? The Economics of Transformative AI tackles this issue, which is one of the most consequential economic questions of our time. This book brings together sixteen research studies from top economists that look closely at how transformative AI reshapes everything from innovation and market structure to employment, inequality, and human purpose. They explore both opportunities, such as personalized algorithmic assistance, accelerated scientific discovery, and new forms of organization, and profound challenges, including potential labor displacement, rising concentration of power, changes in the information ecosystem, and even possible existential risks to humanity.
The studies in this volume develop economic frameworks for understanding the conditions under which AI might enhance or undermine human flourishing. They offer policymakers, researchers, and business leaders the analytical tools needed to prepare for the potential economic transformations ahead.
Since antiquity, philosophers and engineers have tried to take life’s measure by reproducing it. Aiming to reenact Creation, at least in part, these experimenters have hoped to understand the links between body and spirit, matter and mind, mechanism and consciousness. Genesis Redux examines moments from this centuries-long experimental tradition: efforts to simulate life in machinery, to synthesize life out of material parts, and to understand living beings by comparison with inanimate mechanisms.
Jessica Riskin collects seventeen essays from distinguished scholars in several fields. These studies offer an unexpected and far-reaching result: attempts to create artificial life have rarely been driven by an impulse to reduce life and mind to machinery. On the contrary, designers of synthetic creatures have generally assumed a role for something nonmechanical. The history of artificial life is thus also a history of theories of soul and intellect.
Taking a historical approach to a modern quandary, Genesis Redux is essential reading for historians and philosophers of science and technology, scientists and engineers working in artificial life and intelligence, and anyone engaged in evaluating these world-changing projects.
Why AI offers a chance for the humanities to strengthen their relevance and significance
If humanistic research consists of the generation of consensus positions, simple expression, summarized texts, or passable translations, then we have arrived at the place where AI is able to accomplish these different missions to a convincing degree. However, Laurent Dubreuil argues, such tasks do not, in any way, constitute the humanities. On the contrary, he posits, a maximalist take on scholarship would not focus on generation but on creation, as a subject and as an object. Dubreuil seizes the opportunity of what AI reveals about the meaning of humanistic inquiry to offer a path for the renewal of the humanities on transhistorical, transcultural, and transdisciplinary grounds.
How data and artificial intelligence create a new, abstract digital subject
Ideal Subjects examines how samples of our lives and daily behaviors have come to reside in the world of data and artificial intelligence—and what this means for who we are and what we may become. Detailing how AI-facilitated algorithmic prediction and data modeling make “ideal subjects” of us, Olga Goriunova explores the complex ways we relate to these digital abstractions.
As more and more of our experience is funneled through computational records and models, datafied aspects of our lives are segmented and reconfigured to operate as new entities. Rather than viewing these abstract assemblages as extensions of our selves, Goriunova encourages us to consider these products of computational processes as an entirely new kind of subject, one that is both more and less than a human.
Through close readings of contemporary digital practices and data analytics, Goriunova exposes the profound ethical, aesthetic, and political implications of producing and managing these new digital subjects. Highlighting the distinctive impact of computation on contemporary subject formation while placing the present within a history of shifting conceptions of the subject, she gives us much-needed tools for understanding how our intimate selves are rendered by the abstract entities of big data. Ideal Subjects presents an uncanny and deeply fascinating portrait of modern subjectivity in the technological age.
Retail e-book files for this title are screen-reader friendly with images accompanied by short alt text and/or extended descriptions.
An in-depth assessment of innovations in military information technology informs hypothetical outcomes for artificial intelligence adaptations
In the coming decades, artificial intelligence (AI) could revolutionize the way humans wage war. The military organizations that best innovate and adapt to this AI revolution will likely gain significant advantages over their rivals. To this end, great powers such as the United States, China, and Russia are already investing in novel sensing, reasoning, and learning technologies that will alter how militaries plan and fight. The resulting transformation could fundamentally change the character of war.
In Information in War, Benjamin Jensen, Christopher Whyte, and Scott Cuomo provide a deeper understanding of the AI revolution by exploring the relationship between information, organizational dynamics, and military power. The authors analyze how militaries have historically adjusted to new information communication technologies in order to identify the opportunities, risks, and obstacles that will almost certainly confront modern defense organizations as they pursue AI pathways to the future. Information in War builds on these historical cases to frame four alternative future scenarios exploring what the AI revolution could look like in the US military by 2040.
A wide-ranging history of the algorithm.
Bringing together the histories of mathematics, computer science, and linguistic thought, Language and the Rise of the Algorithm reveals how recent developments in artificial intelligence are reopening an issue that troubled mathematicians well before the computer age: How do you draw the line between computational rules and the complexities of making systems comprehensible to people? By attending to this question, we come to see that the modern idea of the algorithm is implicated in a long history of attempts to maintain a disciplinary boundary separating technical knowledge from the languages people speak day to day.
Here, Jeffrey M. Binder offers a compelling tour of four visions of universal computation that addressed this issue in very different ways: G. W. Leibniz’s calculus ratiocinator; a universal algebra scheme Nicolas de Condorcet designed during the French Revolution; George Boole’s nineteenth-century logic system; and the early programming language ALGOL, short for algorithmic language. These episodes show that symbolic computation has repeatedly become entangled in debates about the nature of communication. Machine learning, in its increasing dependence on words, erodes the line between technical and everyday language, revealing the urgent stakes underlying this boundary.
The idea of the algorithm is a levee holding back the social complexity of language, and it is about to break. This book is about the flood that inspired its construction.
How generative AI systems capture a core function of language
Looking at the emergence of generative AI, Language Machines presents a new theory of meaning in language and computation, arguing that humanistic scholarship misconstrues how large language models (LLMs) function. Seeing LLMs as a convergence of computation and language, Leif Weatherby contends that AI does not simulate cognition, as widely believed, but rather creates culture. This evolution in language, he finds, is one that we are ill-prepared to evaluate, as what he terms “remainder humanism” counterproductively divides the human from the machine without drawing on established theories of representation that include both.
To determine the consequences of using AI for language generation, Weatherby reads linguistic theory in conjunction with the algorithmic architecture of LLMs. He finds that generative AI captures the ways in which language is at first complex, cultural, and poetic, and only later referential, functional, and cognitive. This process is the semiotic hinge on which an emergent AI culture depends. Weatherby calls for a “general poetics” of computational cultural forms under the formal conditions of the algorithmic reproducibility of language.
Locating the output of LLMs on a spectrum from poetry to ideology, Language Machines concludes that literary theory must be the backbone of a new rhetorical training for our linguistic-computational culture.
Exploring the influence of AI technologies on theories of reason, cognition, learning, and education
Learning Under Algorithmic Conditions presents twenty-seven concise essays that collectively chart the shifting terrain of learning in the age of artificial intelligence. Providing historical and philosophical context, this innovative volume features prominent scholars from the fields of media studies, philosophy, and education research, who shed light on how learning has become newly envisioned, machinic, and more-than-human. The contributors unravel various histories of machine intelligence and elucidate the current impact of machine learning technologies on practices of knowledge production. Teeming with theoretical and practical insights, Learning Under Algorithmic Conditions is an interdisciplinary guide for those working across the humanities and social sciences as well as anyone interested in understanding our changing social, political, and technical infrastructures.
Contributors: Craig Carson, Adelphi U; Felicity Colman, U of the Arts London; Ed Dieterle; Shayan Doroudi, U of California, Irvine; David Gauthier, Utrecht U; Cathrine Hasse, Aarhus U; Talha Can İşsevenler, CUNY; Goda Klumbytė; Robb Lindgren, U of Illinois Urbana-Champaign; Michael Madaio; Henry Neim Osman; Luciana Parisi, Duke U; Carolyn Pedwell, Lancaster U; Arkady Plotnitsky, Purdue U; Julian Quiros, U of Pennsylvania; Sina Rismanchian; Warren Sack, U of California, Santa Cruz; R. Joshua Scannell, The New School; Gregory J. Seigworth, Millersville U; Rebecca Uliasz, U of Michigan; David Wagner, U of New Brunswick; Ben Williamson, U of Edinburgh.
Retail e-book files for this title are screen-reader friendly with images accompanied by short alt text and/or extended descriptions.
A critical examination of the figure of the neural network as it mediates neuroscientific and computational discourses and technical practices
Neural Networks proposes to reconstruct situated practices, social histories, mediating techniques, and ontological assumptions that inform the computational project of the same name. If so-called machine learning comprises a statistical approach to pattern extraction, then neural networks can be defined as a biologically inspired model that relies on probabilistically weighted neuron-like units to identify such patterns. Far from signaling the ultimate convergence of human and machine intelligence, however, neural networks highlight the technologization of neurophysiology that characterizes virtually all strands of neuroscientific and AI research of the past century. Taking this traffic as its starting point, this volume explores how cognition came to be constructed as essentially computational in nature, to the point of underwriting a technologized view of human biology, psychology, and sociability, and how countermovements provide resources for thinking otherwise.
Most human thinking is thoroughly informed by context but, until recently, theories of reasoning have concentrated on abstract rules and generalities that make no reference to this crucial factor. Perspectives on Contexts brings together essays from leading cognitive scientists to forge a vigorous interdisciplinary understanding of the contextual phenomenon. Applicable to human and machine cognition in philosophy, artificial intelligence, and psychology, this volume is essential to the current renaissance in thinking about context.
An authoritative look at how artificial intelligence both shapes and is shaped by the political and economic forces of the modern world.
As the effects of artificial intelligence are felt across economies and societies, many of its ramifications are still emerging. This volume brings together economists and political scientists to examine how AI intersects with regulation, military power, and political identity—offering analytical frameworks and identifying key open questions for future research.
The contributions address topics such as the allocation of property rights for AI inputs, trade-offs among alternative regulatory regimes, and the role of interest groups in shaping the technology’s trajectory. They explore how AI-related capabilities influence military effectiveness, resource allocation, and bargaining power among nations, and consider AI’s effects on political preferences, from the influence of AI-curated information on polarization to the implications of targeted political advertising and personalized education for national identity formation. The volume highlights key trade-offs that arise in AI’s political economy, and points toward empirical strategies and theoretical models that can advance understanding in this emerging field.
Drawing on diverse disciplinary perspectives, the collection provides a foundation for rigorous inquiry into how AI both shapes and is shaped by political and economic forces.
A timely investigation of what is at stake when AI takes legal decision-making out of human hands.
Artificial intelligence is proliferating in many professions, and the legal field is no exception. In The Rule of Law After Artificial Intelligence, Katie Szilagyi investigates the philosophical and practical implications of using AI in legal spaces, beginning with several fundamental questions: What is the law supposed to do, and from where does it derive its authority? Would law still achieve these aims if automated? How might automation affect the rule of law’s integrity and democratic institutions’ operations?
Blending legal philosophy, applied case studies, and insights from both critical legal scholarship and science and technology studies, Szilagyi argues that law and storytelling are deeply connected. Through creating and contesting the law, we make sense of the information around us and generate narratives about our collective world. These narratives are not static: legal precedent evolves, and legal deliberation on hard cases can help to resolve unclear or unprecedented social issues.
Szilagyi demonstrates that technological innovations make the rule of law vulnerable because large language models and machine learning undermine the visioning function of legal narratives, collapsing exercises of legal interpretation into mere administration. Datafication of law—built on the biased data of our cultural past—threatens longstanding legal ideals, lessens the constraints against abuses of power by private actors, and hamstrings society’s ability to reach a more egalitarian future. Szilagyi argues instead for centering narratives within the law and, in turn, rediscovering the tales the law tells us about who we are.
Decisions about war have always been made by humans, but now intelligent machines are on the cusp of changing things – with dramatic consequences for international affairs. This book explores the evolutionary origins of human strategy, and makes a provocative argument that Artificial Intelligence will radically transform the nature of war by changing the psychological basis of decision-making about violence.
Strategy, Evolution, and War is a cautionary preview of how Artificial Intelligence (AI) will revolutionize strategy more than any development in the last three thousand years of military history. Kenneth Payne describes strategy as an evolved package of conscious and unconscious behaviors with roots in our primate ancestry. Our minds were shaped by the need to think about warfare—a constant threat for early humans. As a result, we developed a sophisticated and strategic intelligence.
The implications of AI are profound because it departs radically from the biological basis of human intelligence. Rather than being just another tool of war, AI will dramatically speed up decision making and rely on very different cognitive processes, including when deciding to launch an attack or to escalate violence. AI will change the essence of strategy, the organization of armed forces, and the international order.
This book is a fascinating examination of the psychology of strategy-making from prehistoric times, through the ancient world, and into the modern age.
Dispelling the notion of “generative” AI
Neural networks are designed to dissolve all media into the vector space—a universal space of commensurability. In Vector Media, Leonardo Impett and Fabian Offert parse theories of automatic vision to trace contemporary artificial intelligence’s technical ideology of epistemic reduction, where sensory data is turned into abstracted forms of meaning. Under this regime, bias is not just a question of what is represented but of the logic of representation itself. Drawing on Phil Agre’s notion of a critical technical practice, Vector Media reveals how artificial intelligence systems embed new epistemologies of media beneath the surface of their architectures.
Analyzing the techniques underpinning large multimodal artificial intelligence models like DALL-E, Midjourney, Flux, or Stable Diffusion, Impett and Offert offer the concept of neural exchange value: the value cultural artifacts acquire not through meaning or context but through their capacity to function as vectors. In such a system, commensurability becomes a condition of existence: what matters is not what something is but that it can be embedded. Rather than focusing solely on datasets, Vector Media proposes a critical study of vector spaces—and the machine cultures they produce—as a necessary complement to prevailing approaches in AI critique.
Retail e-book files for this title are screen-reader friendly with images accompanied by short alt text and/or extended descriptions.
