This volume presents a selection of the best papers presented at the forty-first annual Conference on Computer Applications and Quantitative Methods in Archaeology. The theme for the conference was “Across Space and Time” and the papers explore a multitude of topics related to that concept, including databases, the semantic Web, geographical information systems, data collection and management, and more.
The continuing growth in the size and complexity of VLSI devices requires a parallel development of well-designed, efficient CAD tools. The majority of commercially available tools are based on an algorithmic approach to the problem, and there is a continuing research effort aimed at improving these. The sheer complexity of the problem has, however, led to an interest in examining the applicability of expert systems and other knowledge-based techniques to certain problems in the area, and a number of results are becoming available. The aim of this book is to sample the present state of the art in CAD for VLSI, and it covers both newly developed algorithms and applications of techniques from the artificial intelligence community. The editors believe it will prove of interest to all engineers concerned with the design and testing of integrated circuits and systems.
In growing numbers, archeologists are specializing in the analysis of excavated animal bones as clues to the environment and behavior of ancient peoples. This pathbreaking work provides a detailed discussion of the outstanding issues and methods of bone studies that will interest zooarcheologists as well as paleontologists who focus on reconstructing ecologies from bones. Because large samples of bones from archeological sites require tedious and time-consuming analysis, the authors also offer a set of computer programs that will greatly simplify the bone specialist's job.
After setting forth the interpretive framework that governs their use of numbers in faunal analysis, Richard G. Klein and Kathryn Cruz-Uribe survey various measures of taxonomic abundance, review methods for estimating the sex and age composition of a fossil species sample, and then give examples to show how these measures and sex/age profiles can provide useful information about the past. In the second part of their book, the authors present the computer programs used to calculate and analyze each numerical measure or count discussed in the earlier chapters. These elegant and original programs, written in BASIC, can easily be used by anyone with a microcomputer or with access to large mainframe computers.
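To give a flavor of the counts such programs automate, the two most common measures of taxonomic abundance, NISP (number of identified specimens) and MNI (minimum number of individuals), can be sketched in a few lines of Python. This is an illustrative sketch only, with invented specimen records; it is not the authors' BASIC code, and MNI is computed here in its simplest form, as the most abundant element/side combination per taxon:

```python
from collections import Counter

# Hypothetical specimen records for illustration: (taxon, element, side)
specimens = [
    ("Equus capensis", "tibia", "left"),
    ("Equus capensis", "tibia", "left"),
    ("Equus capensis", "tibia", "right"),
    ("Raphicerus melanotis", "femur", "left"),
]

# NISP: number of identified specimens per taxon
nisp = Counter(taxon for taxon, _, _ in specimens)

def mni(records):
    """MNI per taxon: the count of the most abundant element/side pair."""
    per_taxon = {}
    for taxon, element, side in records:
        per_taxon.setdefault(taxon, Counter())[(element, side)] += 1
    return {taxon: max(counts.values()) for taxon, counts in per_taxon.items()}

print(nisp["Equus capensis"])       # 3 specimens identified
print(mni(specimens)["Equus capensis"])  # at least 2 individuals (2 left tibiae)
```

Two left tibiae cannot come from one animal, so three Equus specimens imply a minimum of two individuals; the measures the authors survey refine this basic logic.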
Arabic is an exciting—yet challenging—language for scholars because many of its linguistic properties have not been fully described. Arabic Computational Linguistics documents the recent work of researchers in both academia and industry who have taken up the challenge of solving the real-life problems posed by an understudied language.
This comprehensive volume explores new Arabic machine translation systems, innovations in speech recognition and mention detection, tree banks, and linguistic corpora. Arabic Computational Linguistics will be an indispensable reference for language researchers and practitioners alike.
Research in artificial intelligence has developed many techniques and methodologies that can be either adapted or used directly to solve complex power system problems. A variety of such problems are covered in this book including reactive power control, alarm analysis, fault diagnosis, protection systems and load forecasting. Methods such as knowledge-based (expert) systems, fuzzy logic, neural networks and genetic algorithms are all first introduced and then investigated in terms of their applicability in the power systems field. The book, therefore, serves as both an introduction to the use of artificial intelligence techniques for those from a power systems background and as an overview of the power systems implementation area for those from an artificial intelligence computing or control background. It is structured so that it is suitable for various levels of reader, covering basic principles as well as applications and case studies. The most popular methods and the most fruitful application fields are considered in more detail. The book contains contributions from top international authors and will be an extremely useful text for all those with an interest in the field.
By relating grammar to cognitive architecture, John T. Hale shows step-by-step how incremental parsing works in models of perceptual processing and how specific learning rules might lead to frequency-sensitive preferences. Along the way, Hale reconsiders garden-pathing, the parallel/serial distinction, and information-theoretical complexity metrics, such as surprisal. This book is a must for cognitive scientists of language.
This book explores the aesthetics, medial affordances, and cultural economics of monumental literary works of the digital age and offers a comparative and cross-cultural perspective on a wide range of contemporary writers. Using an international archive of hefty tomes by authors such as Mark Z. Danielewski, Roberto Bolaño, Elena Ferrante, Karl Ove Knausgård, George R.R. Martin, Jonathan Franzen, and William T. Vollmann, van de Ven investigates multiple strands of bigness that speak to the tenuous position of print literature in the present but also to the robust stature of literary discourse within our age of proliferating digital media. Her study makes a case for the cultural agency of the big book—as a material object and a discursive phenomenon, entangled in complex ways with questions of canonicity, materiality, gender, and power. Van de Ven takes us into a contested terrain beyond the 1,000-page mark, where issues of scale and reader comprehension clash with authorial aggrandizement and the pleasures of binge reading and serial consumption.
This guide to preparing manuscripts on computer offers authors and publishers practical assistance on how to use authors' disks or tapes for typesetting. When the thirteenth edition of The Chicago Manual of Style was published in 1982, the impact of personal computers on the publishing process had just begun to be felt. This new book supplements information in the Chicago Manual by covering the rapidly changing subject of electronic manuscripts. Since the early 1980s more and more authors have been producing manuscripts on computers and expecting their publishers to make use of the electronic version. For a number of reasons, including the proliferation of incompatible machines and software, however, publishers have not always found it easy to work with electronic manuscripts. The University of Chicago Press has been doing so since 1981, and in this book passes on the results of six years' experience with preparing such manuscripts and converting them to type.
Coding Streams of Language is a systematic and practical research guide to coding verbal data in all its forms—written texts and oral talk, in print or online, gathered through surveys and interviews, database searches, or audio or video recordings. The thoughtful, detailed advice found in this book will help readers carry out analyses of language that are both systematic and complex. Situating themselves in the relatively new line of mixed methods research, the authors provide guidance on combining context-based inquiry with quantitative tools for examining big picture patterns that acknowledges the complexity of language use. Throughout Coding Streams of Language, exercises, points for discussion, and milestones help guide readers through an analytic project. As a supplement to the book, YouTube videos demonstrate tools and techniques.
Databases have revolutionized nearly every aspect of our lives. Information of all sorts is being collected on a massive scale, from Google to Facebook and well beyond. But as the amount of information in databases explodes, we are forced to reassess our ideas about what knowledge is, how it is produced, to whom it belongs, and who can be credited for producing it.
Every scientist working today draws on databases to produce scientific knowledge. Databases have become more common than microscopes, voltmeters, and test tubes, and the increasing amount of data has led to major changes in research practices and profound reflections on the proper professional roles of data producers, collectors, curators, and analysts.
Collecting Experiments traces the development and use of data collections, especially in the experimental life sciences, from the early twentieth century to the present. It shows that the current revolution is best understood as the coming together of two older ways of knowing—collecting and experimenting, the museum and the laboratory. Ultimately, Bruno J. Strasser argues that by serving as knowledge repositories, as well as indispensable tools for producing new knowledge, these databases function as digital museums for the twenty-first century.
Composing (Media) = Composing (Embodiment)
Kristin L. Arola and Anne Frances Wysocki Utah State University Press, 2012 Library of Congress PE1404.C617574 2012 | Dewey Decimal 808.0420285
“What any body is—and is able to do—cannot be disentangled from the media we use to consume and produce texts.” —from the Introduction.
Kristin Arola and Anne Wysocki argue that composing in new media is composing the body—is embodiment. In Composing (Media) = Composing (Embodiment), they have brought together a powerful set of essays that agree on the need for compositionists—and their students—to engage with a wide range of new media texts. These chapters explore how texts of all varieties mediate and thereby contribute to the human experiences of communication, of self, the body, and composing. Sample assignments and activities exemplify how this exploration might proceed in the writing classroom.
Contributors here articulate ways to understand how writing enables the experience of our bodies as selves, and at the same time to see the work of (our) writing in mediating selves to make them accessible to institutional perceptions and constraints. These writers argue that what a body does, and can do, cannot be disentangled from the media we use, nor from the times and cultures and technologies with which we engage.
To the discipline of composition, this is an important discussion because it clarifies the impact/s of literacy on citizens, freedoms, and societies. To the classroom, it is important because it helps compositionists to support their students as they enact, learn, and reflect upon their own embodied and embodying writing.
This book provides an introduction to many aspects of computer control. It covers techniques for control algorithm design and tuning of controllers; computer communications; parallel processing; and software design and implementation. The theoretical material is supported by case studies covering power systems control, robot manipulators, liquid natural gas vaporisers, batch control of chemical processes, and active control of aircraft.
Described by the New York Times as a visionary “pioneer in computerized learning,” Patrick Suppes (1922-2014) and his many collaborators at Stanford University conducted research on the development, commercialization, and use of computers in education from 1963 to 2013. Computers in Education synthesizes this wealth of scholarship into a single succinct volume that highlights the profound interconnections of technology in education. By capturing the great breadth and depth of this research, this book offers an accessible introduction to Suppes’s striking work.
The Core and the Periphery is a collection of papers inspired by the linguistics career of Ivan A. Sag (1949-2013), written to commemorate his many contributions to the field. Sag was professor of linguistics at Stanford University from 1979 to 2013; served as the director of the Symbolic Systems Program from 2005 to 2009; authored, co-authored, or edited fifteen volumes on linguistics; and was at the forefront of non-transformational approaches to syntax. Reflecting the breadth of Sag’s theoretical interests and approaches to linguistic problems, the papers collected here tackle a range of grammar-related issues using corpora, intuitions, and laboratory experiments. They are united by their use of and commitment to rich datasets and share the perspective that the best theories of grammar attempt to account for the full diversity and complexity of language data.
Database Aesthetics examines the database as cultural and aesthetic form, explaining how artists have participated in network culture by creating data art. The essays in this collection look at how an aesthetic emerges when artists use the vast amounts of available information as their medium. Here, the ways information is ordered and organized become artistic choices, and artists have an essential role in influencing and critiquing the digitization of daily life.
Contributors: Sharon Daniel, U of California, Santa Cruz; Steve Deitz, Carleton College; Lynn Hershman Leeson, U of California, Davis; George Legrady, U of California, Santa Barbara; Eduardo Kac, School of the Art Institute of Chicago; Norman Klein, California Institute of the Arts; John Klima; Lev Manovich, U of California, San Diego; Robert F. Nideffer, U of California, Irvine; Nancy Paterson, Ontario College of Art and Design; Christiane Paul, School of Visual Arts in New York; Marko Peljhan, U of California, Santa Barbara; Warren Sack, U of California, Santa Cruz; Bill Seaman, Rhode Island School of Design; Grahame Weinbren, School of Visual Arts, New York.
Victoria Vesna is a media artist, and professor and chair of the Department of Design and Media Arts at the University of California, Los Angeles.
In recent decades, there has been a major shift in the way researchers process and understand scientific data. Digital access to data has revolutionized ways of doing science in the biological and biomedical fields, leading to a data-intensive approach to research that uses innovative methods to produce, store, distribute, and interpret huge amounts of data. In Data-Centric Biology, Sabina Leonelli probes the implications of these advancements and confronts the questions they pose. Are we witnessing the rise of an entirely new scientific epistemology? If so, how does that alter the way we study and understand life—including ourselves?
Leonelli is the first scholar to use a study of contemporary data-intensive science to provide a philosophical analysis of the epistemology of data. In analyzing the rise, internal dynamics, and potential impact of data-centric biology, she draws on scholarship across diverse fields of science and the humanities—as well as her own original empirical material—to pinpoint the conditions under which digitally available data can further our understanding of life. Bridging the divide between historians, sociologists, and philosophers of science, Data-Centric Biology offers a nuanced account of an issue that is of fundamental importance to our understanding of contemporary scientific practices.
Defense planning faces significant uncertainties. This report applies robust decision making (RDM) to the air-delivered munitions mix challenge. RDM is a quantitative decision support methodology designed to inform decisions under conditions of deep uncertainty and complexity. This proof-of-concept demonstration suggests that RDM could help defense planners make plans more robust to a wide range of hard-to-predict futures.
James T. Hamilton Harvard University Press, 2016 Library of Congress PN4888.I56H36 2016 | Dewey Decimal 071.3
Investigative journalism holds democracies and individuals accountable to the public. But important stories are going untold as news outlets shy away from the expense of watchdog reporting. Computational journalism, using digital records and data-mining algorithms, promises to lower the cost and increase demand among readers, James Hamilton shows.
Bruce McComiskey Utah State University Press, 2015 Library of Congress P301.5.P47M323 2015 | Dewey Decimal 808
In Dialectical Rhetoric, Bruce McComiskey argues that the historical conflict between rhetoric and dialectic can be overcome in ways useful to both composition theory and the composition classroom.
Historically, dialectic has taken two forms in relation to rhetoric. First, it has been the logical development of linear propositions leading to necessary conclusions, a one-dimensional form that was the counterpart of rhetorics in which philosophical, metaphysical, and scientific truths were conveyed with as little cognitive interference from language as possible. Second, dialectic has been the topical development of opposed arguments on controversial issues and the judgment of their relative strengths and weaknesses, usually in political and legal contexts, a two-dimensional form that was the counterpart of rhetorics in which verbal battles over competing probabilities in public institutions revealed distinct winners and losers.
The discipline of writing studies is on the brink of developing a new relationship between dialectic and rhetoric, one in which dialectics and rhetorics mediate and negotiate different arguments and orientations that are engaged in any rhetorical situation. This new relationship consists of a three-dimensional hybrid art called “dialectical rhetoric,” whose method is based on five topoi: deconstruction, dialogue, identification, critique, and juxtaposition. Three-dimensional dialectical rhetorics function effectively in a wide variety of discursive contexts, including digital environments, since they can invoke contrasts in stagnant contexts and promote associations in chaotic contexts. Dialectical Rhetoric argues for more attention to three-dimensional rhetorics from the rhetoric and composition community.
Digital Critical Editions
Edited by Daniel Apollon, Claire Belisle, and Philippe Regnier University of Illinois Press, 2014 Library of Congress PN171.D37D54 2014 | Dewey Decimal 808.0270285
Provocative yet sober, Digital Critical Editions examines how transitioning from print to a digital milieu deeply affects how scholars deal with the work of editing critical texts. On one hand, forces like changing technology and evolving reader expectations lead to the development of specific editorial products, while on the other hand, they threaten traditional forms of knowledge and methods of textual scholarship.
Using the experiences of philologists, text critics, text encoders, scientific editors, and media analysts, Digital Critical Editions ranges from philology in ancient Alexandria to the vision of user-supported online critical editing, from peer-directed texts distributed to a few to community-edited products shaped by the many. The authors discuss the production and accessibility of documents, the emergence of tools used in scholarly work, new editing regimes, and how readers' expectations evolve as they navigate digital texts. The goal is to explore questions such as: What kind of text is produced? Why is it produced in this particular way?
Digital Critical Editions provides digital editors, researchers, readers, and technological actors with insights for addressing disruptions that arise from the clash of traditional and digital cultures, while also offering a practical roadmap for processing traditional texts and collections with today's state-of-the-art editing and research techniques, thus addressing readers' emerging reading habits.
In this revolutionary and highly original work, poet-scholar Glazier investigates the ways in which computer technology has influenced and transformed the writing and dissemination of poetry.
In Digital Poetics, Loss Pequeño Glazier argues that the increase in computer technology and accessibility, specifically the World Wide Web, has created a new and viable place for the writing and dissemination of poetry. Glazier's work not only introduces the reader to the current state of electronic writing but also outlines the historical and technical contexts out of which electronic poetry has emerged and demonstrates some of the possibilities of the new medium.
Glazier examines three principal forms of electronic textuality: hypertext, visual/kinetic text, and works in programmable media. He considers avant-garde poetics and its relationship to the on-line age, the relationship between web "pages" and book technology, and the way in which certain kinds of web constructions are in and of themselves a type of writing. With convincing alacrity, Glazier argues that the materiality of electronic writing has changed the idea of writing itself. He concludes that electronic space is the true home of poetry and, in the 20th century, has become the ultimate "space of poesis."
Digital Poetics will attract a readership of scholars and students interested in contemporary creative writing and the potential of electronic media for imaginative expression.
What is “digital rhetoric”? This book aims to answer that question by looking at a number of interrelated histories, as well as evaluating a wide range of methods and practices from fields in the humanities, social sciences, and information sciences to determine what might constitute the work and the world of digital rhetoric. The advent of digital and networked communication technologies prompts renewed interest in basic questions such as What counts as a text? and Can traditional rhetoric operate in digital spheres or will it need to be revised? Or will we need to invent new rhetorical practices altogether?
Through examples and consideration of digital rhetoric theories, methods for both researching and making in digital rhetoric fields, and examples of digital rhetoric pedagogy, scholarship, and public performance, this book delivers a broad overview of digital rhetoric. In addition, Douglas Eyman provides historical context by investigating the histories and boundaries that arise from mapping this emerging field and by focusing on the theories that have been taken up and revised by digital rhetoric scholars and practitioners. Both traditional and new methods are examined for the tools they provide that can be used to both study digital rhetoric and to potentially make new forms that draw on digital rhetoric for their persuasive power.
Donald E. Knuth CSLI, 1998 Library of Congress Z249.3.K59 1999 | Dewey Decimal 686.22544536
In this collection, the second in the series, Knuth explores the relationship between computers and typography. The present volume, in the words of the author, is a legacy of all the work he has done on typography. When he thought he would take a few years' leave from his main work on the art of computer programming, as is well known, the short typographic detour lasted more than a decade. The visits of type designers, punch cutters, typographers, book historians, and scholars to the University during this period gave Stanford what some consider to be its golden age of digital typography. By the author's own admission, the present work is one of the most difficult books that he has prepared. This is truly a work that only Knuth himself could have produced.
Of all developments surrounding hypermedia, none has been as hotly or frequently debated as the conjunction of fiction and digital technology. J. Yellowlees Douglas considers the implications of this union. She looks at the new light that interactive narratives may shed on theories of reading and interpretation and the possibilities for hypertext novels, World Wide Web-based short stories, and cinematic, interactive narratives on CD-ROM. She confronts questions that are at the center of the current debate: Does an interactive story demand too much from readers? Does the concept of readerly choice destroy the integrity of an author's vision? Does interactivity turn reading fiction from "play" into "work"--too much work? Will hypertext fiction overtake the novel as a form of art or entertainment? And what might future interactive books look like?
The book examines criticism of interactive fiction from both proponents and skeptics, and explores similarities and differences between print and hypertext fiction. It looks closely at critically acclaimed interactive works, including Stuart Moulthrop's Victory Garden and Michael Joyce's Afternoon: A Story, to illuminate how these hypertext narratives "work." While she sees this as a still-evolving technology and medium, the author identifies possible developments for the future of storytelling from outstanding examples of Web-based fiction and CD-ROM narratives, possibilities that will enable narratives both to portray the world with greater realism and to transcend the boundaries of novels and films, character and plot alike.
This lively volume, written to be accessible to a wide range of readers, will appeal to those interested in technology and cyberculture, as well as to readers familiar with literary criticism and modern fiction.
J. Yellowlees Douglas is the Director of the William and Grace Dial Center for Written and Oral Communication, University of Florida. She is the author of numerous articles and essays on the subject of hypertext and interactive literature.
For well over a century, academic disciplines have studied human behavior using quantitative information. Until recently, however, the humanities have remained largely immune to the use of data—or vigorously resisted it. Thanks to new developments in computer science and natural language processing, literary scholars have embraced the quantitative study of literary works and have helped make Digital Humanities a rapidly growing field. But these developments raise a fundamental, and as yet unanswered question: what is the meaning of literary quantity?
In Enumerations, Andrew Piper answers that question across a variety of domains fundamental to the study of literature. He focuses on the elementary particles of literature, from the role of punctuation in poetry, the matter of plot in novels, the study of topoi, and the behavior of characters, to the nature of fictional language and the shape of a poet’s career. How does quantity affect our understanding of these categories? What happens when we look at 3,388,230 punctuation marks, 1.4 billion words, or 650,000 fictional characters? Does this change how we think about poetry, the novel, fictionality, character, the commonplace, or the writer’s career? In the course of answering such questions, Piper introduces readers to the analytical building blocks of computational text analysis and brings them to bear on fundamental concerns of literary scholarship. This book will be essential reading for anyone interested in Digital Humanities and the future of literary study.
Kenneth R. Beesley and Lauri Karttunen CSLI, 2003 Library of Congress P98.B35 2003 | Dewey Decimal 410.285
The finite-state paradigm of computer science has provided a basis for natural-language applications that are efficient, elegant, and robust. This volume is a practical guide to finite-state theory and the affiliated programming languages lexc and xfst. Readers will learn how to write tokenizers, spelling checkers, and especially morphological analyzer/generators for words in English, French, Finnish, Hungarian, and other languages.
Included are graded introductions, examples, and exercises suitable for individual study as well as formal courses. These take advantage of widely tested lexc and xfst applications that are just becoming available for noncommercial use via the Internet.
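The flavor of the finite-state paradigm can be suggested with a toy transducer in Python. This sketch is purely illustrative and is not lexc or xfst code; the states, arcs, and tag names (+N, +Pl, +Sg) are invented for the example. An analyzer of this kind maps a surface form such as "cats" to a lexical form such as "cat+N+Pl":

```python
# Toy finite-state morphological analyzer (illustrative only).
# Arcs: (state, input_symbol, output_symbol, next_state); "" is an
# epsilon input, consuming nothing while emitting a tag.
ARCS = [
    (0, "c", "c", 1), (1, "a", "a", 2), (2, "t", "t", 3),
    (3, "", "+N", 4),
    (4, "s", "+Pl", 5),   # surface "cats" -> cat+N+Pl
    (4, "", "+Sg", 5),    # surface "cat"  -> cat+N+Sg
]
FINAL = {5}

def analyze(surface):
    """Return every lexical analysis of a surface form via depth-first
    search over the transducer's arcs (no epsilon cycles here)."""
    results = []
    stack = [(0, 0, "")]          # (state, chars consumed, output so far)
    while stack:
        state, i, out = stack.pop()
        if state in FINAL and i == len(surface):
            results.append(out)
        for src, inp, emit, dst in ARCS:
            if src != state:
                continue
            if inp == "":                              # epsilon arc
                stack.append((dst, i, out + emit))
            elif i < len(surface) and surface[i] == inp:
                stack.append((dst, i + 1, out + emit))
    return results

print(analyze("cats"))  # ['cat+N+Pl']
print(analyze("cat"))   # ['cat+N+Sg']
```

Running the same machine in the other direction would generate surface forms from lexical ones, which is the symmetry that makes finite-state morphology attractive.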
Deriving the correct meaning of such colloquial expressions as "I am parked out back" requires a unique interaction of knowledge about the world with a person's natural language tools. In this volume, Markus Egg examines how natural language rules and knowledge of the world work together to produce correct understandings of expressions that cannot be fully understood through literal reading. An in-depth and exciting work on semantics and natural language, this volume will be essential reading for scholars in computational linguistics.
A Grammar Writer's Cookbook
Edited by Miriam Butt, Tracy Holloway King, María-Eugenia Niño, and Frédérique S CSLI, 1999 Library of Congress P98.G73 1999 | Dewey Decimal 415.0285
A Grammar Writer's Cookbook is an introduction to the issues involved in the writing and design of computational grammars, reporting on experiences and analyses within the ParGram parallel grammar development project. Using the Lexical Functional Grammar (LFG) framework, this project implemented grammars for German, French, and English to cover parallel corpora.
A Guide to MATLAB® Object-Oriented Programming is the first book to deliver broad coverage of the documented and undocumented object-oriented features of MATLAB®. Unlike the typical approach of other resources, this guide explains why each feature is important, demonstrates how each feature is used, and promotes an understanding of the interactions between features.
Most readers think of a written work as producing its meaning through the words it contains. But what is the significance of the detailed and beautiful illuminations on a medieval manuscript? Of the deliberately chosen typefaces in a book of poems by Yeats? Of the design and layout of text in an electronic format? How does the material form of a work shape its understanding in a particular historical moment, in a particular culture?
The material features of texts as physical artifacts--their "bibliographic codes" --have over the last decade excited increasing interest in a variety of disciplines. The Iconic Page in Manuscript, Print, and Digital Culture gathers essays by an extraordinarily distinguished group of scholars to offer the most comprehensive examination of these issues yet, drawing on examples from literature, history, the fine arts, and philosophy.
Fittingly, the volume contains over two dozen illustrations that display the iconic features of the works analyzed--from Alfred the Great's Boethius through medieval manuscripts to the philosophy of C. S. Peirce and the dustjackets on works by F. Scott Fitzgerald and William Styron.
The Iconic Page in Manuscript, Print, and Digital Culture will be groundbreaking reading for scholars in a wide range of fields.
George Bornstein is C. A. Patrides Professor of English, University of Michigan. Theresa Tinkle is Associate Professor of English, University of Michigan.
Interdisciplining Digital Humanities sorts through definitions and patterns of practice over roughly sixty-five years of work, providing an overview for specialists and a general audience alike. It is the only book that tests the widespread claim that Digital Humanities is interdisciplinary. By examining the boundary work of constructing, expanding, and sustaining a new field, it depicts both the ways this new field is being situated within individual domains and dynamic cross-fertilizations that are fostering new relationships across academic boundaries. It also accounts for digital reinvigorations of “public humanities” in cultural heritage institutions of museums, archives, libraries, and community forums.
People are very creative in their use of language. This observation was made convincingly by Chomsky in the 1950s and is generally accepted in the scientific communities concerned with the study of language. Computers, on the other hand, are neither creative, flexible, nor adaptable, despite the fact that their ability to process language is based largely on the grammars developed by linguists and computer scientists. Thus, there is a mismatch between the observed human creativity and our ability as theorists to explain it. Language at Work examines grammars and other descriptions of language by combining the scientific and the practical. The scientific motivation is to unite two distinct intellectual traditions, mathematics and descriptive social science, each of which has tried on its own, without success, to provide an adequate explanation of language and its use. This volume argues that Situation Theory, a theory of information couched in mathematics, provides a uniform framework for the investigation of the creative aspects of language use, and explores its application to the study of language use in everyday communication to improve human-computer interaction.
A growing number of archaeologists are applying Geographic Information Science (GIS) technologies to their research problems and questions. Advances in GIS and its use across disciplines allow for collaboration and enable archaeologists to ask ever more sophisticated questions and develop increasingly elaborate models on numerous aspects of past human behavior. Least cost analysis (LCA) is one such avenue of inquiry. While least cost studies are not new to the social sciences in general, LCA is relatively new to archaeology; until now, there has been no systematic exploration of its use within the field.
This edited volume presents a series of case studies illustrating the intersection of archaeology and LCA modeling at the practical, methodological, and theoretical levels. Designed to be a guidebook for archaeologists interested in using LCA in their own research, it presents a wide cross-section of practical examples for both novices and experts. The contributors to the volume showcase the richness and diversity of LCA’s application to archaeological questions, demonstrate that even simple applications can be used to explore sophisticated research questions, and highlight the challenges that come with injecting geospatial technologies into the archaeological research process.
The current trend toward machine-scoring of student work, Ericsson and Haswell argue, has created an emerging issue with implications for higher education across the disciplines, but with particular importance for those in English departments and in administration. The academic community has been silent on the issue—some would say excluded from it—while the commercial entities who develop essay-scoring software have been very active.
Machine Scoring of Student Essays is the first volume to seriously consider the educational mechanisms and consequences of this trend, and it offers important discussions from some of the leading scholars in writing assessment.
Reading and evaluating student writing is a time-consuming process, yet it is a vital part of both student placement and coursework at post-secondary institutions. In recent years, commercial computer-evaluation programs have been developed to score student essays in both of these contexts. Two-year colleges have been especially drawn to these programs, but four-year institutions are moving to them as well, because of the cost-savings they promise. Unfortunately, to a large extent, the programs have been written, and institutions are installing them, without attention to their instructional validity or adequacy.
Since the education software companies are moving so rapidly into what they perceive as a promising new market, a wider discussion of machine-scoring is vital if scholars hope to influence development and/or implementation of the programs being created. What is needed, then, is a critical resource to help teachers and administrators evaluate programs they might be considering, and to more fully envision the instructional consequences of adopting them. And this is the resource that Ericsson and Haswell are providing here.
In this volume, Matthew L. Jockers introduces readers to large-scale literary computing and the revolutionary potential of macroanalysis--a new approach to the study of the literary record designed for probing the digital-textual world as it exists today, in digital form and in large quantities. Using computational analysis to retrieve key words, phrases, and linguistic patterns across thousands of texts in digital libraries, researchers can draw conclusions based on quantifiable evidence regarding how literary trends are employed over time, across periods, within regions, or within demographic groups, as well as how cultural, historical, and societal linkages may bind individual authors, texts, and genres into an aggregate literary culture.
Moving beyond the limitations of literary interpretation based on the "close-reading" of individual works, Jockers describes how this new method of studying large collections of digital material can help us to better understand and contextualize the individual works within those collections.
This book combines the teaching of the MATLAB® programming language with the presentation and development of carefully selected electrical and computer engineering (ECE) fundamentals. This is what distinguishes it from other books concerned with MATLAB®: it is directed specifically to ECE concerns. Students will see, quite explicitly, how and why MATLAB® is well suited to solve practical ECE problems.
Just as the majority of books about computer literacy deal more with technological issues than with literacy issues, most computer literacy programs overemphasize technical skills and fail to adequately prepare students for the writing and communications tasks in a technology-driven era. Multiliteracies for a Digital Age serves as a guide for composition teachers to develop effective, full-scale computer literacy programs that are also professionally responsible by emphasizing different kinds of literacies and proposing methods for helping students move among them in strategic ways.
Defining computer literacy as a domain of writing and communication, Stuart A. Selber addresses the questions that few other computer literacy texts consider: What should a computer literate student be able to do? What is required of literacy teachers to educate such a student? How can functional computer literacy fit within the values of teaching writing and communication as a profession? Reimagining functional literacy in ways that speak to teachers of writing and communication, he builds a framework for computer literacy instruction that blends functional, critical, and rhetorical concerns in the interest of social action and change.
Multiliteracies for a Digital Age reviews the extensive literature on computer literacy and critiques it from a humanistic perspective. This approach, which will remain useful as new versions of computer hardware and software inevitably replace old versions, helps to usher students into an understanding of the biases, belief systems, and politics inherent in technological contexts. Selber redefines rhetoric at the nexus of technology and literacy and argues that students should be prepared as authors of twenty-first-century texts that defy the established purview of English departments. The result is a rich portrait of the ideal multiliterate student in a digital age and a social approach to computer literacy envisioned with the requirements for systemic change in mind.
Joseph Loscalzo Harvard University Press, 2017 Library of Congress R858.N48 2016 | Dewey Decimal 610.285
Big data, genomics, and quantitative approaches to network-based analysis are combining to advance the frontiers of medicine as never before. With contributions from leading experts, Network Medicine introduces this rapidly evolving field of research, which promises to revolutionize the diagnosis and treatment of human diseases.
This textbook teaches students to create computer codes used to engineer antennas, microwave circuits, and other critical technologies for wireless communications and other applications of electromagnetic fields and waves. Worked code examples are provided for MATLAB technical computing software. It is the only textbook on numerical methods that begins at the undergraduate engineering student level but brings students to the state-of-the-art by the end of the book. It focuses on the most important and popular numerical methods, going into depth with examples and problem sets of escalating complexity. This book requires only one core course of electromagnetics, allowing it to be useful both at the senior and beginning graduate levels. Developing and using numerical methods is a powerful tool for students to learn the principles of intermediate and advanced electromagnetics. This book fills a gap left by current textbooks, which either lack depth on key topics (particularly integral equations and the method of moments) or treat them in ways not accessible to students without an advanced theory course. Important topics include: Method of Moments; Finite Difference Time Domain Method; Finite Element Method; Finite Element Method-Boundary Element Method; Numerical Optimization; and Inverse Scattering.
Many books have indexes, but most textual media have none. Newspapers, legal transcripts, conference proceedings, correspondence, video subtitles, and web pages are increasingly accessible with computers, yet are still without indexes or other sophisticated means of finding the excerpts most relevant to a reader's question.
Better than an index, and much better than a keyword search, are the high-precision computerized question-answering systems explored in this book. Marius Pasca presents novel and robust methods for capturing the semantics of natural language questions and for finding the most relevant portions of texts. This research has led to a fully implemented and rigorously evaluated architecture that has produced experimental results showing great promise for the future of internet search technology.
Michael Joyce's new collection continues to examine the connections between the poles of art and instruction, writing and teaching in the form of what Joyce has called theoretical narratives, pieces that are both narratives of theory and texts in which theory often takes the form of narrative. His concerns include hypertext and interactive fiction, the geography of cyberspace, and interactive film, and Joyce here searches out the emergence of network culture in spaces ranging from the shifting nature of the library to MOOs and other virtual spaces to life along a river.
While in this collection Joyce continues to be one of our most lyrical, wide-ranging, and informed cultural critics and theorists of new media, his essays exhibit an evolving distrust of unconsidered claims for newness in the midst of what Joyce calls "the blizzard of the next," as well as a recurrent insistence upon grounding our experience of the emergence of network culture in the body.
Michael Joyce is Associate Professor of English, Vassar College. He is author of a number of hypertext fictions on the web and on disk, most notably Afternoon: A Story.
His previous books are Of Two Minds: Hypertext Pedagogy and Poetics and Moral Tale and Meditations: Technological Parables and Refractions.
Pascal Programming for Music Research addresses those who wish to develop the programming skills necessary for doing computer-assisted music research, particularly in the fields of music theory and musicology. Many of the programming techniques are also applicable to computer-assisted instruction (CAI), composition, and music synthesis. The programs and techniques can be implemented on personal computers or larger computer systems using standard Pascal compilers and will be valuable to anyone in the humanities creating databases.
Among its useful features are:
-complete programs, from simple illustrations to substantial applications;
-beginning programming through such advanced topics as linked data structures, recursive algorithms, DARMS translation, score processing;
-bibliographic references at the end of each chapter to pertinent sources in music theory, computer science, and computer applications in music;
-exercises which explore and extend topics discussed in the text;
-appendices which include a DARMS translator and a library of procedures for building and manipulating a linked representation of scores;
-algorithms and techniques that, though given in Pascal, translate easily to other computer languages.
Beginning, as well as advanced, programmers and anyone interested in programming music applications will find this book to be an invaluable resource.
Gail Hawisher and Cynthia Selfe created a volume that set the agenda in the field of computers and composition scholarship for a decade. The technology changes that scholars of composition studies faced as the new century opened couldn't have been more deserving of passionate study. While we have always used technologies (e.g., the pencil) to communicate with each other, the electronic technologies we now use have changed the world in ways that we have yet to identify or appreciate fully. Likewise, the study of language and literate exchange, even our understanding of terms like literacy, text, and visual, has changed beyond recognition, challenging even our capacity to articulate them.
As Hawisher, Selfe, and their contributors engage these challenges and explore their importance, they "find themselves engaged in the messy, contradictory, and fascinating work of understanding how to live in a new world and a new century." The result is a broad, deep, and rewarding anthology of work still among the standard works of computers and composition study.
The proliferation of smart devices, digital media, and network technologies has led to everyday people experiencing everyday things increasingly on and through the screen. In fact, much of the world has become so saturated by digital mediations that many individuals have adopted digitally inflected sensibilities. This gestures not simply toward posthumanism, but more fundamentally toward an altogether post-digital condition—one in which the boundaries between the “real” and the “digital” have become blurred and technology has fundamentally reconfigured how we make sense of the world.
Post-Digital Rhetoric and the New Aesthetic takes stock of these reconfigurations and their implications for rhetorical studies by taking up the New Aesthetic—a movement introduced by artist/digital futurist James Bridle that was meant to capture something of a digital way of seeing by identifying aesthetic values that could not exist without computational and digital technologies. Bringing together work in rhetoric, art, and digital media studies, Hodgson treats the New Aesthetic as a rhetorical ecology rather than simply an aesthetic movement, allowing him to provide operative guides for the knowing, doing, and making of rhetoric in a post-digital culture.
Thomas Wasow CSLI, 2002 Library of Congress PE1385.W37 2002 | Dewey Decimal 425
Compared to many languages, English has relatively fixed word order, but the ordering among phrases following the verb exhibits a good deal of variation. This monograph explores factors that influence the choice among possible orders of postverbal elements, testing hypotheses using a combination of corpus studies and psycholinguistic experiments. Wasow's final chapters explore how studies of language use bear on issues in linguistic theory, with attention to the roles of quantitative data and Chomsky's arguments against the use of statistics and probability in linguistics.
Education is in crisis—at least, so we hear. And at the center of this crisis is technology. New technologies like computer-based classroom instruction, online K–12 schools, MOOCs (massive open online courses), and automated essay scoring may be our last great hope—or the greatest threat we have ever faced.
In The Problem with Education Technology, Ben Fink and Robin Brown look behind the hype to explain the problems—and potential—of these technologies. Focusing on the case of automated essay scoring, they explain the technology, how it works, and what it does and doesn’t do. They explain its origins, its evolution (both in the classroom and in our culture), and the controversy that surrounds it. Most significantly, they expose the real problem—the complicity of teachers and curriculum-builders in creating an education system so mechanical that machines can in fact often replace humans—and how teachers, students, and other citizens can work together to solve it.
Offering a new perspective on the change that educators can hope, organize, and lobby for, The Problem with Education Technology challenges teachers and activists on “our side,” even as it provides new evidence to counter the profit-making, labor-saving logics that drive the current push for technology in the classroom.
Most computer programs that analyze spoken dialogue use a spoken command grammar, which limits what the user can say when talking to the system. To make this process simpler, more automated, and effective for command grammars even at initial stages of a project, the Regulus grammar compiler was developed by a consortium of experts—including NASA scientists. This book presents a complete description of both the practical and theoretical aspects of Regulus and will be extremely helpful for students and scholars working in computational linguistics as well as software engineering.
Paleobiology struggled for decades to influence our understanding of evolution and the history of life because it was stymied by a focus on microevolution and an incredibly patchy fossil record. But in the 1970s, the field took a radical turn, as paleobiologists began to investigate processes that could only be recognized in the fossil record across larger scales of time and space. That turn led to a new wave of macroevolutionary investigations, novel insights into the evolution of species, and a growing prominence for the field among the biological sciences.
In The Quality of the Archaeological Record, Charles Perreault shows that archaeology not only faces a parallel problem, but may also find a model in the rise of paleobiology for a shift in the science and theory of the field. To get there, he proposes a more macroscale approach to making sense of the archaeological record, an approach that reveals patterns and processes not visible within the span of a human lifetime, but rather across an observation window thousands of years long and thousands of kilometers wide. Just as with the fossil record, the archaeological record has the scope necessary to detect macroscale cultural phenomena because it can provide samples that are large enough to cancel out the noise generated by microscale events. By recalibrating their research to the quality of the archaeological record and developing a true macroarchaeology program, Perreault argues, archaeologists can finally unleash the full contributive value of their discipline.
Beyond the familiar, now-commonplace tasks that computers perform every day, what else are they capable of? Stephen Ramsay's intriguing study of computational text analysis examines how computers can be used as "reading machines" to open up entirely new possibilities for literary critics. Computer-based text analysis has been employed for the past several decades as a way of searching, collating, and indexing texts. Despite this, the digital revolution has not penetrated the core activity of literary studies: interpretive analysis of written texts.
Computers can handle vast amounts of data, allowing for the comparison of texts in ways that were previously too overwhelming for individuals, but they may also assist in enhancing the entirely necessary role of subjectivity in critical interpretation. Reading Machines discusses the importance of this new form of text analysis conducted with the assistance of computers. Ramsay suggests that the rigidity of computation can be enlisted in the project of intuition, subjectivity, and play.
How can computers distinguish the coherent from the unintelligible, recognize new information in a sentence, or draw inferences from a natural language passage? Computational semantics is an exciting new field that seeks answers to these questions, and this volume is the first textbook wholly devoted to this growing subdiscipline. The book explains the underlying theoretical issues and fundamental techniques for computing semantic representations for fragments of natural language. This volume will be an essential text for computer scientists, linguists, and anyone interested in the development of computational semantics.
Winner of the 2017 Sweetland Digital Rhetoric Collaborative Book Prize
Software developers work rhetorically to make meaning through the code they write. In some ways, writing code is like any other form of communication; in others, it proves to be new, exciting, and unique. In Rhetorical Code Studies, Kevin Brock explores how software code serves as meaningful communication through which software developers construct arguments that are made up of logical procedures and express both implicit and explicit claims as to how a given program operates.
Building on current scholarly work in digital rhetoric, software studies, and technical communication, Brock connects and continues ongoing conversations among rhetoricians, technical communicators, software studies scholars, and programming practitioners to demonstrate how software code and its surrounding discourse are highly rhetorical forms of communication. He considers examples ranging from large, well-known projects like Mozilla Firefox to small-scale programs like the “FizzBuzz” test common in many programming job interviews. Undertaking specific examinations of code texts as well as the contexts surrounding their composition, Brock illuminates the variety and depth of rhetorical activity taking place in and around code, from individual differences in style to changes in large-scale organizational and community norms.
Rhetorical Code Studies holds significant implications for digital communication, multimodal composition, and the cultural analysis of software and its creation. It will interest academics and students of writing, rhetoric, and software engineering as well as technical communicators and developers of all types of software.
According to Ben McCorkle, the rhetorical canon of delivery—traditionally seen as the aspect of oratory pertaining to vocal tone, inflection, and physical gesture—has undergone a period of renewal within the last few decades to include the array of typefaces, color palettes, graphics, and other design elements used to convey a message to a chosen audience. McCorkle posits that this redefinition, while a noteworthy moment of modern rhetorical theory, is just the latest instance in a historical pattern of interaction between rhetoric and technology. In Rhetorical Delivery as Technological Discourse: A Cross-Historical Study, McCorkle explores the symbiotic relationship between delivery and technologies of writing and communication. Aiming to enhance historical understanding by demonstrating how changes in writing technology have altered our conception of delivery, McCorkle reveals the ways in which oratory and the tools of written expression have directly affected one another throughout the ages.
To make his argument, the author examines case studies from significant historical moments in the Western rhetorical tradition. Beginning with the ancient Greeks, McCorkle illustrates how the increasingly literate Greeks developed rhetorical theories intended for oratory that incorporated “writerly” tendencies, diminishing delivery’s once-prime status in the process. Also explored are the near-eradication of rhetorical delivery in the mid-fifteenth century—the period of transition from late manuscript to early print culture—and the implications of the burgeoning print culture during the nineteenth century.
McCorkle then investigates the declining interest in delivery as technology designed to replace the human voice and gesture became prominent at the beginning of the 1900s. Situating scholarship on delivery within a broader postmodern structure, he moves on to a discussion of the characteristics of contemporary hypertextual and digital communication and its role in reviving the canon, while also anticipating the future of communication technologies, the likely shifts in attitude toward delivery, and the implications of both on the future of teaching rhetoric.
Rhetorical Delivery as Technological Discourse traces a long-view perspective of rhetorical history to present readers a productive reading of the volatile treatment of delivery alongside the parallel history of writing and communication technologies. This rereading will expand knowledge of the canon by not only offering the most thorough treatment of the history of rhetorical delivery available but also inviting conversation about the reciprocal impacts of rhetorical theory and written communication on each other throughout this history.
A landmark volume that explores the interconnected nature of technologies and rhetorical practice
Rhetorical Machines addresses new approaches to studying computational processes within the growing field of digital rhetoric. While computational code is often seen as value-neutral and mechanical, this volume explores the underlying, and often unexamined, modes of persuasion this code engages. In so doing, it argues that computation is in fact rife with the values of those who create it and thus has powerful ethical and moral implications. From Socrates’s critique of writing in Plato’s Phaedrus to emerging new media and internet culture, the scholars assembled here provide insight into how computation and rhetoric work together to produce social and cultural effects.
This multidisciplinary volume features contributions from scholar-practitioners across the fields of rhetoric, computer science, and writing studies. It is divided into four main sections: “Emergent Machines” examines how technologies and algorithms are framed and entangled in rhetorical processes; “Operational Codes” explores how computational processes are used to achieve rhetorical ends; “Ethical Decisions and Moral Protocols” considers the ethical implications involved in designing software and that software’s impact on computational culture; and the final section includes two scholars’ responses to the preceding chapters. Three of the sections are prefaced by brief conversations with chatbots (autonomous computational agents) addressing some of the primary questions raised in each section.
At the heart of these essays is a call for emerging and established scholars in a vast array of fields to reach interdisciplinary understandings of human-machine interactions. This innovative work will be valuable to scholars and students in a variety of disciplines, including but not limited to rhetoric, computer science, writing studies, and the digital humanities.
Scanner Data and Price Indexes
Edited by Robert C. Feenstra and Matthew D. Shapiro University of Chicago Press, 2003 Library of Congress HC106.3.C714 vol. 64 | Dewey Decimal 330
Every time you buy a can of tuna or a new television, its bar code is scanned to record its price and other information. These "scanner data" offer a number of attractive features for economists and statisticians, because they are collected continuously, are available quickly, and record prices for all items sold, not just a statistical sample. But scanner data also present a number of difficulties for current statistical systems.
Scanner Data and Price Indexes assesses both the promise and the challenges of using scanner data to produce economic statistics. Three papers present the results of work in progress at statistical agencies in the U.S., United Kingdom, and Canada, including a project at the U.S. Bureau of Labor Statistics to investigate the feasibility of incorporating scanner data into the monthly Consumer Price Index. Other papers demonstrate the enormous potential of using scanner data to test economic theories and estimate the parameters of economic models, and provide solutions for some of the problems that arise when using scanner data, such as dealing with missing data.
Computer simulation was first pioneered as a scientific tool in meteorology and nuclear physics in the period following World War II, but it has grown rapidly to become indispensable in a wide variety of scientific disciplines, including astrophysics, high-energy physics, climate science, engineering, ecology, and economics. Digital computer simulation helps study phenomena of great complexity, but how much do we know about the limits and possibilities of this new scientific practice? How do simulations compare to traditional experiments? And are they reliable? Eric Winsberg seeks to answer these questions in Science in the Age of Computer Simulation.
Scrutinizing these issues with a philosophical lens, Winsberg explores the impact of simulation on such issues as the nature of scientific evidence; the role of values in science; the nature and role of fictions in science; and the relationship between simulation and experiment, theories and data, and theories at different levels of description. Science in the Age of Computer Simulation will transform many of the core issues in philosophy of science, as well as our basic understanding of the role of the digital computer in the sciences.
We live in a world where seemingly everything can be measured. We rely on indicators to translate social phenomena into simple, quantified terms, which in turn can be used to guide individuals, organizations, and governments in establishing policy. Yet counting things requires finding a way to make them comparable. And in the process of translating the confusion of social life into neat categories, we inevitably strip it of context and meaning—and risk hiding or distorting as much as we reveal.
With The Seductions of Quantification, leading legal anthropologist Sally Engle Merry investigates the techniques by which information is gathered and analyzed in the production of global indicators on human rights, gender violence, and sex trafficking. Although such numbers convey an aura of objective truth and scientific validity, Merry argues persuasively that measurement systems constitute a form of power by incorporating theories about social change in their design but rarely explicitly acknowledging them. For instance, the US State Department’s Trafficking in Persons Report, which ranks countries in terms of their compliance with antitrafficking activities, assumes that prosecuting traffickers as criminals is an effective corrective strategy—overlooking cultures where women and children are frequently sold by their own families. As Merry shows, indicators are indeed seductive in their promise of providing concrete knowledge about how the world works, but they are implemented most successfully when paired with context-rich qualitative accounts grounded in local knowledge.
In the years since the Mars Exploration Rover Spirit and Opportunity first began transmitting images from the surface of Mars, we have become familiar with the harsh, rocky, rusty-red Martian landscape. But those images are much less straightforward than they may seem to a layperson: each one is the result of a complicated set of decisions and processes involving the large team behind the Rovers.
With Seeing Like a Rover, Janet Vertesi takes us behind the scenes to reveal the work that goes into creating our knowledge of Mars. Every photograph that the Rovers take, she shows, must be processed, manipulated, and interpreted—and all that comes after team members negotiate with each other about what they should even be taking photographs of in the first place. Vertesi’s account of the inspiringly successful Rover project reveals science in action, a world where digital processing uncovers scientific truths, where images are used to craft consensus, and where team members develop an uncanny intimacy with the sensory apparatus of a robot that is millions of miles away. Ultimately, Vertesi shows, every image taken by the Mars Rovers is not merely a picture of Mars—it’s a portrait of the whole Rover team, as well.
This 43rd volume of the ASLU series presents a useful GIS procedure to study settlement patterns in landscape archaeology. In several Mediterranean regions, archaeological sites have been mapped by fieldwalking surveys, producing large amounts of data. These legacy site-based survey data represent an important resource to study ancient settlement organization. Methodological procedures are necessary to cope with the limits of these data, and more importantly with the distortions of data patterns caused by biasing factors.
This book develops and applies a GIS procedure for using legacy survey data in settlement pattern analysis. It consists of two parts. One part concerns the assessment of biases that can affect the spatial patterns exhibited by survey data. The other part aims to shed light on the location preferences and settlement strategies of the ancient communities underlying site patterns. A case study shows how the method works in practice. As part of the research by the Landscapes of Early Roman Colonization project (NWO, Leiden University, KNIR), site-based datasets produced by survey projects in central-southern Italy are examined in a comparative framework to investigate settlement patterns in the early Roman colonial period (3rd century B.C.).
Nearly a decade ago, Johanna Drucker cofounded the University of Virginia’s SpecLab, a digital humanities laboratory dedicated to risky projects with serious aims. In SpecLab she explores the implications of these radical efforts to use critical practices and aesthetic principles against the authority of technology based on analytic models of knowledge.
Inspired by the imaginative frontiers of graphic arts and experimental literature and the technical possibilities of computation and information management, the projects Drucker engages range from Subjective Meteorology to Artists’ Books Online to the as yet unrealized ’Patacritical Demon, an interactive tool for exposing the structures that underlie our interpretations of text. Illuminating the kind of future such experiments could enable, SpecLab functions as more than a set of case studies at the intersection of computers and humanistic inquiry. It also exemplifies Drucker’s contention that humanists must play a role in designing models of knowledge for the digital age—models that will determine how our culture will function in years to come.
Thinking Globally, Composing Locally explores how writing and its pedagogy should adapt to the ever-expanding environment of international online communication. Communication to a global audience presents a number of new challenges; writers seeking to connect with individuals from many different cultures must rethink their concept of audience. They must also prepare to address friction that may arise from cross-cultural rhetorical situations, variation in available technology and in access between interlocutors, and disparate legal environments.
The volume offers a pedagogical framework that addresses three interconnected and overarching objectives: using online media to contact audiences from other cultures to share ideas; presenting ideas in a manner that invites audiences from other cultures to recognize, understand, and convey or act upon them; and composing ideas to connect with global audiences to engage in ongoing and meaningful exchanges via online media. Chapters explore a diverse range of pedagogical techniques, including digital notebooks designed to create a space for active dialogic and multicultural inquiry, experience mapping to identify communication disruption points in international customer service, and online forums used in global distance education.
Thinking Globally, Composing Locally will prove an invaluable resource for instructors seeking to address the many exigencies of online writing situations in global environments.
Contributors: Suzanne Blum Malley, Katherine Bridgman, Maury Elizabeth Brown, Kaitlin Clinnin, Cynthia Davidson, Susan Delagrange, Scott Lloyd Dewitt, Amber Engelson, Kay Halasek, Lavinia Hirsu, Daniel Hocutt, Vassiliki Kourbani, Tika Lamsal, Liz Lane, Ben Lauren, J. C. Lee, Ben McCorkle, Jen Michaels, Minh-Tam Nguyen, Beau S. Pihlaja, Mª Pilar Milagros, Cynthia L. Selfe, Heather Turner, Don Unger, Josephine Walwema
The development of computer science techniques has significantly enhanced computational electromagnetic methods in recent years. Multi-core CPU computers and multiple-CPU workstations are popular today for scientific research and engineering computing. How to achieve the best performance on the existing hardware platforms, however, is a major challenge. In addition to multi-core computers and multiple-CPU workstations, distributed computing has become a primary trend due to the low cost of the hardware and the high performance of network systems. In this book we introduce a general hardware acceleration technique that can significantly speed up FDTD simulations and their applications to engineering problems without requiring any additional hardware devices.
Wiring the Writing Center
Eric Hobson Utah State University Press, 1998 Library of Congress PE1404.W56 1998 | Dewey Decimal 808.0420285
Published in 1998, Wiring the Writing Center was one of the first few books to address the theory and application of electronics in the college writing center. Many of the contributors explore particular features of their own "wired" centers, discussing theoretical foundations, pragmatic choices, and practical strengths. Others review a range of centers for the approaches they represent. A strong annotated bibliography of signal work in the area is also included.
During the 19th century, throughout the Anglophone world, most fiction was first published in periodicals. In Australia, newspapers were not only the main source of periodical fiction, but the main source of fiction in general. Because of their importance as fiction publishers, and because they provided Australian readers with access to stories from around the world—from Britain, America and Australia, as well as Austria, Canada, France, Germany, New Zealand, Russia, South Africa, and beyond—Australian newspapers represent an important record of the transnational circulation and reception of fiction in this period.
Investigating almost 10,000 works of fiction in the world’s largest collection of mass-digitized historical newspapers (the National Library of Australia’s Trove database), A World of Fiction reconceptualizes how fiction traveled globally, and was received and understood locally, in the 19th century. Katherine Bode’s innovative approach to the new digital collections that are transforming research in the humanities is a model of how digital tools can reshape our understanding of such collections and our interpretation of the literatures of the past.
Writing History in the Digital Age
Jack Dougherty and Kristen Nawrotzki, editors University of Michigan Press, 2013 Library of Congress D16.12.W75 2013 | Dewey Decimal 902.85
Writing History in the Digital Age began as a “what-if” experiment by posing a question: How have Internet technologies influenced how historians think, teach, author, and publish? To illustrate their answer, the contributors agreed to share the stages of their book-in-progress as it was constructed on the public web.
To facilitate this innovative volume, editors Jack Dougherty and Kristen Nawrotzki designed a born-digital, open-access, and open peer review process to capture commentary from appointed experts and general readers. A customized WordPress plug-in allowed audiences to add page- and paragraph-level comments to the manuscript, transforming it into a socially networked text. The initial six-week proposal phase generated over 250 comments, and the subsequent eight-week public review of full drafts drew 942 additional comments from readers across different parts of the globe.
The finished product now presents 20 essays from a wide array of notable scholars, each examining (and then breaking apart and reexamining) whether and how digital and emergent technologies have changed the historical profession.
As new media mature, the changes they bring to writing in college are many and suggest implications not only for the tools of writing, but also for the contexts, personae, and conventions of writing. An especially visible change has been the increase of visual elements, from typographic flexibility to the easy use and manipulation of color and images. Another is in the scenes of writing: websites, presentation "slides," email, online conferencing and coursework, even help files, all reflect non-traditional venues that new media have brought to writing. By one logic, we must reconsider traditional views even of what counts as writing; a database, for example, could be a new form of written work.
The authors of Writing New Media bring these ideas and the changes they imply for writing instruction to the audience of rhetoric/composition scholars. Their aim is to expand the college writing teacher's understanding of new media and to help teachers prepare students to write effectively with new media beyond the classroom. Each chapter in the volume includes a lengthy discussion of rhetorical and technological background, and then follows with classroom-tested assignments from the authors' own teaching.