by Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer and Hito Steyerl
University of Minnesota Press, 2018
Paper: 978-1-5179-0645-0
Library of Congress Classification BF294
Dewey Decimal Classification 006.4

ABOUT THIS BOOK

How do “human” prejudices reemerge in algorithmic cultures allegedly devised to be blind to them? To answer this question, this book investigates a fundamental axiom of computer science: pattern discrimination. By imposing identity on input data in order to filter—that is, to discriminate—signals from noise, patterns become a highly political issue. Algorithmic identity politics reinstates old forms of social segregation, such as class, race, and gender, through defaults and paradigmatic assumptions about the homophilic nature of connection.


Instead of providing a more “objective” basis for decision making, machine-learning algorithms deepen bias and further inscribe inequality into media. Yet pattern discrimination is an essential part of human—and nonhuman—cognition. Bringing together media thinkers and artists from the United States and Germany, this volume asks urgent questions: How can we discriminate without being discriminatory? How can we filter information out of data without reinserting racist, sexist, and classist beliefs? How can we queer homophilic tendencies within digital cultures?



