Against Prediction
Profiling, Policing, and Punishing in an Actuarial Age
Bernard E. Harcourt
University of Chicago Press, 2006

From random security checks at airports to the use of risk assessment in sentencing, actuarial methods are being used more than ever to determine whom law enforcement officials target and punish. And with the exception of racial profiling on our highways and streets, most people favor these methods because they believe they’re a more cost-effective way to fight crime.

In Against Prediction, Bernard E. Harcourt challenges this growing reliance on actuarial methods. These prediction tools, he demonstrates, may in fact increase the overall amount of crime in society, depending on the relative responsiveness of the profiled populations to heightened security. They may also aggravate the difficulties that minorities already have obtaining work, education, and a better quality of life—thus perpetuating the pattern of criminal behavior. Ultimately, Harcourt shows how the perceived success of actuarial methods has begun to distort our very conception of just punishment and to obscure alternate visions of social order. In place of the actuarial, he proposes instead a turn to randomization in punishment and policing. The presumption, Harcourt concludes, should be against prediction.

The Analysis of Cross-Classified Data Having Ordered Categories
Leo A. Goodman
Harvard University Press, 1984

Applied Factor Analysis
Rudolf J. Rummel
Northwestern University Press, 1970
Applied Factor Analysis was written to help researchers throughout the sciences apply factor analysis, in the conviction that factor analysis is a calculus of the social sciences. The book developed from research undertaken for a 236-variable cross-national analysis.

Beyond Grammar
An Experience-Based Theory of Language
Rens Bod
CSLI, 1998
During the last few years, a new approach to language processing has started to emerge, known as "Data Oriented Parsing" or "DOP". This approach embodies the assumption that human language comprehension and production work with representations of concrete past language experiences rather than with abstract grammatical rules. The models that instantiate this approach therefore maintain corpora of linguistic representations of previously occurring utterances. New utterance-representations are constructed by freely combining partial structures from the corpus. A probability model is used to choose, from the collection of different structures of different sizes, those that make up the most appropriate representation of an utterance.

In this book, DOP models are developed for several kinds of linguistic representations, ranging from tree representations and compositional semantic representations to attribute-value representations and dialogue representations. These models are studied from formal, linguistic, and computational perspectives and are tested with available language corpora. The main outcome of these tests suggests that the productive units of natural language cannot be defined in terms of a minimal set of rules (or constraints or principles), as is usually attempted in linguistic theory, but need to be defined in terms of a large, redundant set of previously experienced structures with virtually no restriction on their size and complexity. I will argue that this outcome has important consequences for linguistic theory, leading to a new notion of language competence. In particular, it means that the knowledge of a speaker/hearer cannot be understood as a grammar, but as a statistical ensemble of language experiences that changes slightly every time a new utterance is processed.

Big Data for Twenty-First-Century Economic Statistics
Edited by Katharine G. Abraham, Ron S. Jarmin, Brian C. Moyer, and Matthew D. Shapiro
University of Chicago Press, 2022
The papers in this volume analyze the deployment of Big Data to solve both existing and novel challenges in economic measurement. 

The existing infrastructure for the production of key economic statistics relies heavily on data collected through sample surveys and periodic censuses, together with administrative records generated in connection with tax administration. The increasing difficulty of obtaining survey and census responses threatens the viability of existing data collection approaches. The growing availability of new sources of Big Data—such as scanner data on purchases, credit card transaction records, payroll information, and prices of various goods scraped from the websites of online sellers—has changed the data landscape. These new sources of data hold the promise of allowing the statistical agencies to produce more accurate, more disaggregated, and more timely economic data to meet the needs of policymakers and other data users. This volume documents progress made toward that goal and the challenges to be overcome to realize the full potential of Big Data in the production of economic statistics. It describes the deployment of Big Data to solve both existing and novel challenges in economic measurement, and it will be of interest to statistical agency staff, academic researchers, and serious users of economic statistics.

The Cult of Statistical Significance
How the Standard Error Costs Us Jobs, Justice, and Lives
Stephen T. Ziliak and Deirdre N. McCloskey
University of Michigan Press, 2010

“McCloskey and Ziliak have been pushing this very elementary, very correct, very important argument through several articles over several years and for reasons I cannot fathom it is still resisted. If it takes a book to get it across, I hope this book will do it. It ought to.”

—Thomas Schelling, Distinguished University Professor, School of Public Policy, University of Maryland, and 2005 Nobel Prize Laureate in Economics

“With humor, insight, piercing logic and a nod to history, Ziliak and McCloskey show how economists—and other scientists—suffer from a mass delusion about statistical analysis. The quest for statistical significance that pervades science today is a deeply flawed substitute for thoughtful analysis. . . . Yet few participants in the scientific bureaucracy have been willing to admit what Ziliak and McCloskey make clear: the emperor has no clothes.”

—Kenneth Rothman, Professor of Epidemiology, Boston University School of Public Health

The Cult of Statistical Significance shows, field by field, how “statistical significance,” a technique that dominates many sciences, has been a huge mistake. The authors find that researchers in a broad spectrum of fields, from agronomy to zoology, employ “testing” that doesn’t test and “estimating” that doesn’t estimate. The facts will startle the outside reader: how could a group of brilliant scientists wander so far from scientific magnitudes? This study will encourage scientists who want to know how to get the statistical sciences back on track and fulfill their quantitative promise. The book shows for the first time how wide the disaster is, and how bad for science, and it traces the problem to its historical, sociological, and philosophical roots.
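
To make the book's central distinction concrete, here is a minimal Python sketch with invented numbers (not an example from the text): an effect of one percent of a standard deviation becomes "statistically significant" once the sample is large enough, even though its practical magnitude stays negligible, which is precisely the conflation of significance with size that Ziliak and McCloskey criticize.

```python
import math

# Illustrative numbers only (not from the book): two groups of 500,000 people,
# with the outcome measured in standard-deviation units.
n_per_group = 500_000
mean_diff = 0.01   # one percent of a standard deviation: practically negligible
sd = 1.0

se = sd * math.sqrt(2 / n_per_group)             # standard error of the difference
z = mean_diff / se                               # test statistic
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # normal-approximation p-value

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.1e}")   # z = 5.00, p on the order of 1e-6
print(f"effect size: {mean_diff / sd:.2%} of a standard deviation")
```

The point of the sketch is that the p-value shrinks with sample size while the substantive question, whether a hundredth of a standard deviation matters for jobs, justice, or lives, is left untouched.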

Stephen T. Ziliak is the author or editor of many articles and two books. He currently lives in Chicago, where he is Professor of Economics at Roosevelt University. Deirdre N. McCloskey, Distinguished Professor of Economics, History, English, and Communication at the University of Illinois at Chicago, is the author of twenty books and three hundred scholarly articles. She has held Guggenheim and National Humanities Fellowships. She is best known for How to Be Human* Though an Economist (University of Michigan Press, 2000) and her most recent book, The Bourgeois Virtues: Ethics for an Age of Commerce (2006).

Data-Oriented Parsing
Edited by Rens Bod, Remko Scha, and Khalil Sima'an
CSLI, 2003
Data-Oriented Parsing (DOP) is one of the leading paradigms in Statistical Natural Language Processing. In this volume, a collection of computational linguists offer a state-of-the-art overview of DOP, suitable for students and researchers in natural language processing, speech recognition, and computational linguistics.

This handbook begins with the theoretical background of DOP and introduces the algorithms used in DOP as well as in other probabilistic grammar models. After surveying extensions to the basic DOP model, the volume concludes with close study of the applications that use DOP as a backbone: speech understanding, machine translation, and language learning.

The Economics of Information and Uncertainty
Edited by John J. McCall
University of Chicago Press, 1982

front cover of Enumerations
Enumerations
Data and Literary Study
Andrew Piper
University of Chicago Press, 2018
For well over a century, academic disciplines have studied human behavior using quantitative information. Until recently, however, the humanities have remained largely immune to the use of data—or vigorously resisted it. Thanks to new developments in computer science and natural language processing, literary scholars have embraced the quantitative study of literary works and have helped make Digital Humanities a rapidly growing field. But these developments raise a fundamental, and as yet unanswered question: what is the meaning of literary quantity?
In Enumerations, Andrew Piper answers that question across a variety of domains fundamental to the study of literature. He focuses on the elementary particles of literature, from the role of punctuation in poetry, the matter of plot in novels, the study of topoi, and the behavior of characters, to the nature of fictional language and the shape of a poet’s career. How does quantity affect our understanding of these categories? What happens when we look at 3,388,230 punctuation marks, 1.4 billion words, or 650,000 fictional characters? Does this change how we think about poetry, the novel, fictionality, character, the commonplace, or the writer’s career? In the course of answering such questions, Piper introduces readers to the analytical building blocks of computational text analysis and brings them to bear on fundamental concerns of literary scholarship. This book will be essential reading for anyone interested in Digital Humanities and the future of literary study.

Essential Demographic Methods
Kenneth W. Wachter
Harvard University Press, 2014

Essential Demographic Methods brings to readers the full range of ideas and skills of demographic analysis that lie at the core of social sciences and public health. Classroom tested over many years, filled with fresh data and examples, this approachable text is tailored to the needs of beginners, advanced students, and researchers alike. An award-winning teacher and eminent demographer, Kenneth Wachter uses themes from the individual lifecourse, history, and global change to convey the meaning of concepts such as exponential growth, cohorts and periods, lifetables, population projection, proportional hazards, parity, marity, migration flows, and stable populations. The presentation is carefully paced and accessible to readers with knowledge of high-school algebra. Each chapter contains original problem sets and worked examples.

From the most basic concepts and measures to developments in spatial demography and hazard modeling at the research frontier, Essential Demographic Methods brings out the wider appeal of demography in its connections across the sciences and humanities. It is a lively, compact guide for understanding quantitative population analysis in the social and biological world.
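
As a small taste of the kind of calculation the book teaches, the Python sketch below builds a toy period lifetable from invented single-year death probabilities and computes life expectancy at birth; the numbers, the five-age span, and the linear person-years assumption are illustrative choices, not Wachter's.

```python
import numpy as np

# Invented death probabilities q_x for ages 0-4; the final value closes the
# toy table by assuming no one survives past the last age shown.
qx = np.array([0.010, 0.002, 0.001, 0.001, 1.000])

lx = np.empty_like(qx)      # survivors to exact age x, with a radix of 1.0
lx[0] = 1.0
for i in range(1, len(qx)):
    lx[i] = lx[i - 1] * (1 - qx[i - 1])

dx = lx * qx                # deaths between ages x and x+1
Lx = lx - 0.5 * dx          # person-years lived in each interval (linear assumption)
e0 = Lx.sum() / lx[0]       # life expectancy at birth for this toy population

print(f"life expectancy at birth: {e0:.2f} years")
```

Real applications use full age ranges and observed mortality rates, but the bookkeeping of survivorship, deaths, and person-years is the same.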

Fertility Change in Contemporary Japan
Robert W. Hodge and Naohiro Ogawa
University of Chicago Press, 1992
The authors examine the striking decline in Japan's birthrate in light of the rapid urbanization, industrialization, and socioeconomic development experienced by the nation since World War II.

The Handbook of Research Synthesis
Harris Cooper
Russell Sage Foundation, 1994
"The Handbook is a comprehensive treatment of literature synthesis and provides practical advice for anyone deep in the throes of, just teetering on the brink of, or attempting to decipher a meta-analysis. Given the expanding application and importance of literature synthesis, understanding both its strengths and weaknesses is essential for its practitioners and consumers. This volume is a good beginning for those who wish to gain that understanding." —Chance "Meta-analysis, as the statistical analysis of a large collection of results from individual studies is called, has now achieved a status of respectability in medicine. This respectability, when combined with the slight hint of mystique that sometimes surrounds meta-analysis, ensures that results of studies that use it are treated with the respect they deserve….The Handbook of Research Synthesis is one of the most important publications in this subject both as a definitive reference book and a practical manual."—British Medical Journal The Handbook of Research Synthesis is the definitive reference and how-to manual for behavioral and medical scientists applying the craft of research synthesis. It draws upon twenty years of ground-breaking advances that have transformed the practice of synthesizing research literature from an art into a scientific process in its own right. Editors Harris Cooper and Larry V. Hedges have brought together leading authorities to guide the reader through every stage of the research synthesis process—problem formulation, literature search and evaluation, statistical integration, and report preparation. The Handbook of Research Synthesis incorporates in a single volume state-of-the-art techniques from all quantitative synthesis traditions, including Bayesian inference and the meta-analytic approaches. Distilling a vast technical literature and many informal sources, the Handbook provides a portfolio of the most effective solutions to problems of quantitative data integration. The Handbook of Research Synthesis also provides a rich treatment of the non-statistical aspects of research synthesis. Topics include searching the literature, managing reference databases and registries, and developing coding schemes. Those engaged in research synthesis will also find useful advice on how tables, graphs, and narration can be deployed to provide the most meaningful communication of the results of research synthesis. The Handbook of Research Synthesis is an illuminating compilation of practical instruction, theory, and problem solving. It provides an accumulation of knowledge about the craft of reviewing a scientific literature that can be found in no other single source. The Handbook offers the reader thorough instruction in the skills necessary to conduct powerful research syntheses meeting the highest standards of objectivity, systematicity, and rigor demanded of scientific enquiry. This definitive work will represent the state of the art in research synthesis for years to come.

The Handbook of Research Synthesis and Meta-Analysis
Harris Cooper
Russell Sage Foundation, 2019
Research synthesis is the practice of systematically distilling and integrating data from many studies in order to draw more reliable conclusions about a given research issue. When the first edition of The Handbook of Research Synthesis and Meta-Analysis was published in 1994, it quickly became the definitive reference for conducting meta-analyses in both the social and behavioral sciences. In the third edition, editors Harris Cooper, Larry Hedges, and Jeff Valentine present updated versions of classic chapters and add new sections that evaluate cutting-edge developments in the field.
 
The Handbook of Research Synthesis and Meta-Analysis draws upon groundbreaking advances that have transformed research synthesis from a narrative craft into an important scientific process in its own right. The editors and leading scholars guide the reader through every stage of the research synthesis process—problem formulation, literature search and evaluation, statistical integration, and report preparation. The Handbook incorporates state-of-the-art techniques from all quantitative synthesis traditions and distills a vast literature to explain the most effective solutions to the problems of quantitative data integration. Among the statistical issues addressed are the synthesis of non-independent data sets, fixed and random effects methods, the performance of sensitivity analyses and model assessments, the development of machine-based abstract screening, the increased use of meta-regression and the problems of missing data. The Handbook also addresses the non-statistical aspects of research synthesis, including searching the literature and developing schemes for gathering information from study reports. Those engaged in research synthesis will find useful advice on how tables, graphs, and narration can foster communication of the results of research syntheses.
 
The third edition of the Handbook provides comprehensive instruction in the skills necessary to conduct research syntheses and represents the premier text on research synthesis.


The Handbook of Research Synthesis and Meta-Analysis
Harris Cooper
Russell Sage Foundation, 2009
Praise for the first edition:

"The Handbook is a comprehensive treatment of literature synthesis and provides practical advice for anyone deep in the throes of, just teetering on the brink of, or attempting to decipher a meta-analysis. Given the expanding application and importance of literature synthesis, understanding both its strengths and weaknesses is essential for its practitioners and consumers. This volume is a good beginning for those who wish to gain that understanding." —Chance

"Meta-analysis, as the statistical analysis of a large collection of results from individual studies is called, has now achieved a status of respectability in medicine. This respectability, when combined with the slight hint of mystique that sometimes surrounds meta-analysis, ensures that results of studies that use it are treated with the respect they deserve….The Handbook of Research Synthesis is one of the most important publications in this subject both as a definitive reference book and a practical manual." —British Medical Journal

When the first edition of The Handbook of Research Synthesis was published in 1994, it quickly became the definitive reference for researchers conducting meta-analyses of existing research in both the social and biological sciences. In this fully revised second edition, editors Harris Cooper, Larry Hedges, and Jeff Valentine present updated versions of the Handbook's classic chapters, as well as entirely new sections reporting on the most recent, cutting-edge developments in the field.

Research synthesis is the practice of systematically distilling and integrating data from a variety of sources in order to draw more reliable conclusions about a given question or topic. The Handbook of Research Synthesis and Meta-Analysis draws upon years of groundbreaking advances that have transformed research synthesis from a narrative craft into an important scientific process in its own right. Cooper, Hedges, and Valentine have assembled leading authorities in the field to guide the reader through every stage of the research synthesis process—problem formulation, literature search and evaluation, statistical integration, and report preparation.

The Handbook of Research Synthesis and Meta-Analysis incorporates state-of-the-art techniques from all quantitative synthesis traditions. Distilling a vast technical literature and many informal sources, the Handbook provides a portfolio of the most effective solutions to the problems of quantitative data integration. Among the statistical issues addressed by the authors are the synthesis of non-independent data sets, fixed and random effects methods, the performance of sensitivity analyses and model assessments, and the problem of missing data.

The Handbook of Research Synthesis and Meta-Analysis also provides a rich treatment of the non-statistical aspects of research synthesis. Topics include searching the literature, and developing schemes for gathering information from study reports. Those engaged in research synthesis will also find useful advice on how tables, graphs, and narration can be used to provide the most meaningful communication of the results of research synthesis. In addition, the editors address the potentials and limitations of research synthesis, and its future directions.

The past decade has been a period of enormous growth in the field of research synthesis. The second edition Handbook thoroughly revises original chapters to assure that the volume remains the most authoritative source of information for researchers undertaking meta-analysis today. In response to the increasing use of research synthesis in the formation of public policy, the second edition includes a new chapter on both the strengths and limitations of research synthesis in policy debates.

The Hidden Game of Football
A Revolutionary Approach to the Game and Its Statistics
Bob Carroll, Pete Palmer, and John Thorn
University of Chicago Press, 2023
The 1988 cult classic behind football’s data analytics revolution, now back in print with a new foreword and preface.

Data analytics have revolutionized football. With play sheets informed by advanced statistical analysis, today’s coaches pass more, kick less, and go for more two-point or fourth-down conversions than ever before. In 1988, sportswriters Bob Carroll, Pete Palmer, and John Thorn proposed just this style of play in The Hidden Game of Football, but at the time baffled readers scoffed at such a heartless approach to the game. Football was the ultimate team sport and unlike baseball could not be reduced to pure probabilities. Nevertheless, the book developed a cult following among analysts who, inspired by its unorthodox methods, went on to develop the core metrics of football analytics used today: win probability, expected points, QBR, and more. With a new preface by Thorn and Palmer and a new foreword by Football Outsiders’s Aaron Schatz, The Hidden Game of Football remains an essential resource for armchair coaches, fantasy managers, and fans of all stripes.

A History of the Modern Fact
Problems of Knowledge in the Sciences of Wealth and Society
Mary Poovey
University of Chicago Press, 1998
How did the fact become modernity's most favored unit of knowledge? How did description come to seem separable from theory in the precursors of economics and the social sciences?

Mary Poovey explores these questions in A History of the Modern Fact, ranging across an astonishing array of texts and ideas from the publication of the first British manual on double-entry bookkeeping in 1588 to the institutionalization of statistics in the 1830s. She shows how the production of systematic knowledge from descriptions of observed particulars influenced government, how numerical representation became the privileged vehicle for generating useful facts, and how belief—whether figured as credit, credibility, or credulity—remained essential to the production of knowledge.

Illuminating the epistemological conditions that have made modern social and economic knowledge possible, A History of the Modern Fact provides important contributions to the history of political thought, economics, science, and philosophy, as well as to literary and cultural criticism.

How Our Days Became Numbered
Risk and the Rise of the Statistical Individual
Dan Bouk
University of Chicago Press, 2015
Long before the age of "Big Data" or the rise of today's "self-quantifiers," American capitalism embraced "risk"--and proceeded to number our days. Life insurers led the way, developing numerical practices for measuring individuals and groups, predicting their fates, and intervening in their futures. Emanating from the gilded boardrooms of Lower Manhattan and making their way into drawing rooms and tenement apartments across the nation, these practices soon came to change the futures they purported to divine.

How Our Days Became Numbered tells a story of corporate culture remaking American culture--a story of intellectuals and professionals in and around insurance companies who reimagined Americans' lives through numbers and taught ordinary Americans to do the same. Making individuals statistical did not happen easily. Legislative battles raged over the propriety of discriminating by race or of smoothing away the effects of capitalism's fluctuations on individuals. Meanwhile, debates within companies set doctors against actuaries and agents, resulting in elaborate, secretive systems of surveillance and calculation.

Dan Bouk reveals how, in a little over half a century, insurers laid the groundwork for the much-quantified, risk-infused world that we live in today. To understand how the financial world shapes modern bodies, how risk assessments can perpetuate inequalities of race or sex, and how the quantification and claims of risk on each of us continue to grow, we must take seriously the history of those who view our lives as a series of probabilities to be managed.

Identification Problems in the Social Sciences
Charles F. Manski
Harvard University Press, 1999

This book provides a language and a set of tools for finding bounds on the predictions that social and behavioral scientists can logically make from nonexperimental and experimental data. The economist Charles Manski draws on examples from criminology, demography, epidemiology, social psychology, and sociology as well as economics to illustrate this language and to demonstrate the broad usefulness of the tools.

There are many traditional ways to present identification problems in econometrics, sociology, and psychometrics. Some of these are primarily statistical in nature, using concepts such as flat likelihood functions and nondistinct parameter estimates. Manski's strategy is to divorce identification from purely statistical concepts and to present the logic of identification analysis in ways that are accessible to a wide audience in the social and behavioral sciences. In each case, problems are motivated by real examples with real policy importance, the mathematics is kept to a minimum, and the deductions on identifiability are derived giving fresh insights.

Manski begins with the conceptual problem of extrapolating predictions from one population to some new population or to the future. He then analyzes in depth the fundamental selection problem that arises whenever a scientist tries to predict the effects of treatments on outcomes. He carefully specifies assumptions and develops his nonparametric methods of bounding predictions. Manski shows how these tools should be used to investigate common problems such as predicting the effect of family structure on children's outcomes and the effect of policing on crime rates.
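
The flavor of these bounds can be conveyed in a few lines. The Python sketch below, with invented data and a binary outcome, computes the familiar no-assumptions (worst-case) bound on an average treatment effect; it illustrates the general logic of bounding rather than reproducing anything from the book.

```python
import numpy as np

# Hypothetical observational data: d = 1 if treated, y = binary outcome.
rng = np.random.default_rng(0)
d = rng.integers(0, 2, size=10_000)
y = rng.integers(0, 2, size=10_000)

p_treated = d.mean()
mean_y_treated = y[d == 1].mean()
mean_y_control = y[d == 0].mean()

# E[Y(1)] is observed only for the treated; for everyone else it could lie
# anywhere in [0, 1], so the data identify an interval rather than a point.
ey1_lo = mean_y_treated * p_treated + 0.0 * (1 - p_treated)
ey1_hi = mean_y_treated * p_treated + 1.0 * (1 - p_treated)

# Symmetrically, E[Y(0)] is unobserved for the treated.
ey0_lo = mean_y_control * (1 - p_treated) + 0.0 * p_treated
ey0_hi = mean_y_control * (1 - p_treated) + 1.0 * p_treated

ate_lo, ate_hi = ey1_lo - ey0_hi, ey1_hi - ey0_lo
print(f"ATE lies in [{ate_lo:.3f}, {ate_hi:.3f}] with no further assumptions")
```

For a binary outcome this interval always has width one, which is exactly why the carefully specified assumptions Manski discusses matter: they are what narrow the bounds.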

Successive chapters deal with topics ranging from the use of experiments to evaluate social programs, to the use of case-control sampling by epidemiologists studying the association of risk factors and disease, to the use of intentions data by demographers seeking to predict future fertility. The book closes by examining two central identification problems in the analysis of social interactions: the classical simultaneity problem of econometrics and the reflection problem faced in analyses of neighborhood and contextual effects.

Improving the Visibility and Use of Digital Repositories
Kenneth Arlitsch
American Library Association, 2013

International Economic Transactions
Issues in Measurement and Empirical Research
Edited by Peter Hooper and J. David Richardson
University of Chicago Press, 1991
How the government arrives at its official economic statistics deeply influences the lives of every American. Social Security payments and even some wages are linked to import prices through official inflation rates; special measures of national product are necessary for valid comparisons of vital social indicators such as relative standards of living and relative poverty. Poor information can result in poor policies. And yet, federal statistics agencies have been crippled by serious budget cuts—and more cuts may lie ahead.

Questioning the quality of current data and analytical procedures, this ambitious volume proposes innovative research designs and methods for data enhancement, and offers new data on trade prices and service transactions for future studies. Leading researchers address the measurement of international trade flows and prices, including the debate over measurement of computer prices and national productivity; compare international levels of manufacturing output; and assess the extent to which the United States has fallen into debt to the rest of the world.

Labor Statistics Measurement Issues
Edited by John Haltiwanger, Marilyn E. Manser, and Robert H. Topel
University of Chicago Press, 1998
Rapidly changing technology, the globalization of markets, and the declining role of unions are just some of the factors that have led to dramatic changes in working conditions in the United States. Little attention has been paid to the difficult measurement problems underlying analysis of the labor market. Labor Statistics Measurement Issues helps to fill this gap by exploring key theoretical and practical issues in the measurement of employment, wages, and workplace practices.

Some of the chapters in this volume explore the conceptual issues of what is needed, what is known, or what can be learned from existing data, and what needs have not been met by available data sources. Others make innovative uses of existing data to analyze these topics. Also included are papers examining how answers to important questions are affected by alternative measures used and how these can be reconciled. This important and useful book will find a large audience among labor economists and consumers of labor statistics.

Library Improvement through Data Analytics
Lesley S. J. Farmer
American Library Association, 2016

Managing with Data
Using ACRLMetrics and PLAmetrics
Peter Hernon
American Library Association, 2015

Meaningful Metrics
A 21st Century Librarian's Guide to Bibliometrics, Altmetrics, and Research Impact
Robin Chin Roemer
Assoc of College & Research Libraries, 2015

Measured Language
Quantitative Studies of Acquisition, Assessment, and Variation
Jeffrey Connor-Linton and Luke Wander Amoroso, Editors
Georgetown University Press, 2014

Measured Language: Quantitative Studies of Acquisition, Assessment, and Variation focuses on ways in which various aspects of language can be quantified and how measurement informs and advances our understanding of language. The metaphors and operationalizations of quantification serve as an important lingua franca for seemingly disparate areas of linguistic research, allowing methods and constructs to be translated from one area of linguistic investigation to another.

Measured Language includes forms of measurement and quantitative analysis current in diverse areas of linguistic research from language assessment to language change, from generative linguistics to experimental psycholinguistics, and from longitudinal studies to classroom research. Contributors demonstrate how to operationalize a construct, develop a reliable way to measure it, and finally validate that measurement—and share the relevance of their perspectives and findings to other areas of linguistic inquiry. The range and clarity of the research collected here ensures that even linguists who would not traditionally use quantitative methods will find this volume useful.

The Measurement of Capital
Edited by Dan Usher
University of Chicago Press, 1980
How is real capital measured by government statistical agencies? How could this measure be improved to correspond more closely to an economist’s ideal measure of capital in economic analysis and prediction? Is it possible to construct a single, reliable time series for all capital goods, regardless of differences in vintage, technological complexity, and rates of depreciation? These questions represent the common themes of this collection of papers, originally presented at a 1976 meeting of the Conference on Income and Wealth.

Measurement of Nontariff Barriers
Alan V. Deardorff and Robert M. Stern, Editors
University of Michigan Press, 1998
As tariffs on imports of manufactures have been reduced as a result of multi-lateral trade negotiations, interest in the extent to which existing nontariff barriers may distort and restrict international trade is growing. Accurate and reliable measures are needed in order to address the issues involving the use and impacts of nontariff barriers. This study assesses currently available methods for quantifying such barriers and makes recommendations as to those methods that can be most effectively employed. The authors focus both on the conceptual issues arising in the measurement of the different types of nontariff barriers and on the applied research that has been carried out in studies prepared by country members of the OECD Pilot Group and others seeking to quantify the barriers.
Nontariff barriers include quotas, variable levies, voluntary export restraints, government procurement regulations, domestic subsidies, and antidumping and countervailing duty measures. The authors discuss the many different methods available for measuring the effects of these and other nontariff barriers. Illustrative results are presented for industrial OECD countries, including Australia, Canada, Germany, Norway, the European Union, the United Kingdom, and the United States. Finally, the authors offer guideline principles and recommend procedures for measuring different types of nontariff barriers.
Economists, political scientists, government officials, and lawyers involved in international trade will find this an invaluable resource for understanding and measuring NTBs.
Alan V. Deardorff and Robert M. Stern are Professors of Economics and Public Policy, University of Michigan.

Measuring Capital in the New Economy
Edited by Carol Corrado, John Haltiwanger, and Daniel Sichel
University of Chicago Press, 2005
As the accelerated technological advances of the past two decades continue to reshape the United States' economy, intangible assets and high-technology investments are taking larger roles. These developments have raised a number of concerns, such as: how do we measure intangible assets? Are we accurately appraising newer, high-technology capital? The answers to these questions have broad implications for the assessment of the economy's growth over the long term, for the pace of technological advancement in the economy, and for estimates of the nation's wealth.

In Measuring Capital in the New Economy, Carol Corrado, John Haltiwanger, Daniel Sichel, and a host of distinguished collaborators offer new approaches for measuring capital in an economy that is increasingly dominated by high-technology capital and intangible assets. As the contributors show, high-tech capital and intangible assets affect the economy in ways that are notoriously difficult to appraise. In this detailed and thorough analysis of the problem and its solutions, the contributors study the nature of these relationships and provide guidance as to what factors should be included in calculations of different types of capital for economists, policymakers, and the financial and accounting communities alike.

Measuring Economic Sustainability and Progress
Edited by Dale W. Jorgenson, J. Steven Landefeld, and Paul Schreyer
University of Chicago Press, 2014
Since the Great Depression, researchers and statisticians have recognized the need for more extensive methods for measuring economic growth and sustainability. The recent recession renewed commitments to closing long-standing gaps in economic measurement, including those related to sustainability and well-being.

The latest in the NBER’s influential Studies in Income and Wealth series, which has played a key role in the development of national account statistics in the United States and other nations, this volume explores collaborative solutions between academics, policy researchers, and official statisticians to some of today’s most important economic measurement challenges. Contributors to this volume extend past research on the integration and extension of national accounts to establish an even more comprehensive understanding of the distribution of economic growth and its impact on well-being, including health, human capital, and the environment. The research contributions assess, among other topics, specific conceptual and empirical proposals for extending national accounts.

Measuring Functioning and Well-Being
The Medical Outcomes Study Approach
Anita L. Stewart and John E. Ware Jr., eds.
Duke University Press, 1992
Measuring Functioning and Well-Being is a comprehensive account of a broad range of self-reported functioning and well-being measures developed for the Medical Outcomes Study, a large-scale study of how patients fare with health care in the United States. This book provides a set of ready-to-use generic measures that are applicable to all adults, including those well and chronically ill, as well as a methodological guide to collecting health data and constructing health measures. As demand increases for more practical methods to monitor the outcomes of health care, this volume offers a timely and valuable contribution to the field.
The contributors address conceptual and methodological issues involved in measuring such important health status concepts as: physical, social, and role functioning; psychological distress and well-being; general health perceptions; energy and fatigue; sleep; and pain. The authors present psychometric results and explain how to administer, score, and interpret the measures.
Comprising the work of a number of highly respected scholars in the field of health assessment, Measuring Functioning and Well-Being will be of great interest and value to the growing number of researchers, policymakers, and clinicians concerned with the management and evaluation of health care.

Modeling and Interpreting Interactive Hypotheses in Regression Analysis
Cindy D. Kam and Robert J. Franzese, Jr.
University of Michigan Press, 2007

Social scientists study complex phenomena about which they often propose intricate hypotheses tested with linear-interactive or multiplicative terms. While interaction terms are hardly new to social science research, researchers have yet to develop a common methodology for using and interpreting them. Modeling and Interpreting Interactive Hypotheses in Regression Analysis provides step-by-step guidance on how to connect substantive theories to statistical models and how to interpret and present the results.
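
A compact way to see why interactive hypotheses demand conditional interpretation is to fit a multiplicative specification and then evaluate the effect of one variable at several values of the other. The Python sketch below uses invented data and plain least squares; it illustrates the general logic rather than the authors' own examples or software.

```python
import numpy as np

# Invented data in which the effect of x on y depends on z.
rng = np.random.default_rng(1)
n = 1_000
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.5 + 1.0 * x + 0.3 * z + 0.8 * x * z + rng.normal(scale=0.5, size=n)

# Design matrix: constant, both constitutive terms, and the product term.
X = np.column_stack([np.ones(n), x, z, x * z])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_x, b_z, b_xz = b

# Under this specification dy/dx = b_x + b_xz * z, so the "effect of x"
# has to be reported at particular values of the moderator z.
for z_val in (-1.0, 0.0, 1.0):
    print(f"marginal effect of x at z = {z_val:+.1f}: {b_x + b_xz * z_val:.2f}")
```

The coefficient on x alone is the effect only where z equals zero, one of the interpretive traps the book addresses; standard errors for these conditional effects likewise have to combine the relevant variances and covariances.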

"Kam and Franzese is a must-have for all empirical social scientists interested in teasing out the complexities of their data."
---Janet M. Box-Steffensmeier, Ohio State University

"Kam and Franzese have written what will become the definitive source on dealing with interaction terms and testing interactive hypotheses. It will serve as the standard reference for political scientists and will be one of those books that everyone will turn to when helping our students or doing our work. But more than that, this book is the best text I have seen for getting students to really think about the importance of careful specification and testing of their hypotheses."
---David A. M. Peterson, Texas A&M University

"Kam and Franzese have given scholars and teachers of regression models something they've needed for years: a clear, concise guide to understanding multiplicative interactions. Motivated by real substantive examples and packed with valuable examples and graphs, their book belongs on the shelf of every working social scientist."
---Christopher Zorn, University of South Carolina

"Kam and Franzese make it easy to model what good researchers have known for a long time: many important and interesting causal effects depend on the presence of other conditions. Their book shows how to explore interactive hypotheses in your own research and how to present your results. The book is straightforward yet technically sophisticated. There are no more excuses for misunderstanding, misrepresenting, or simply missing out on interaction effects!"
---Andrew Gould, University of Notre Dame

Cindy D. Kam is Assistant Professor, Department of Political Science, University of California, Davis.

Robert J. Franzese Jr. is Associate Professor, Department of Political Science, University of Michigan, and Research Associate Professor, Center for Political Studies, Institute for Social Research, University of Michigan.

For datasets, syntax, and worksheets to help readers work through the examples covered in the book, visit: www.press.umich.edu/KamFranzese/Interactions.html

The Nature of Scientific Evidence
Statistical, Philosophical, and Empirical Considerations
Edited by Mark L. Taper and Subhash R. Lele
University of Chicago Press, 2004
An exploration of the statistical foundations of scientific inference, The Nature of Scientific Evidence asks what constitutes scientific evidence and whether scientific evidence can be quantified statistically. Mark Taper, Subhash Lele, and an esteemed group of contributors explore the relationships among hypotheses, models, data, and inference on which scientific progress rests in an attempt to develop a new quantitative framework for evidence. Informed by interdisciplinary discussions among scientists, philosophers, and statisticians, they propose a new "evidential" approach, which may be more in keeping with the scientific method. The Nature of Scientific Evidence persuasively argues that all scientists should care more about the fine points of statistical philosophy because therein lies the connection between theory and data.

Though the book uses ecology as an exemplary science, the interdisciplinary evaluation of the use of statistics in empirical research will be of interest to any reader engaged in the quantification and evaluation of data.

Postverbal Behavior
Thomas Wasow
CSLI, 2002
Compared to many languages, English has relatively fixed word order, but the ordering among phrases following the verb exhibits a good deal of variation. This monograph explores factors that influence the choice among possible orders of postverbal elements, testing hypotheses using a combination of corpus studies and psycholinguistic experiments. Wasow's final chapters explore how studies of language use bear on issues in linguistic theory, with attention to the roles of quantitative data and Chomsky's arguments against the use of statistics and probability in linguistics.

Prospective Longevity
A New Vision of Population Aging
Warren C. Sanderson and Sergei Scherbov
Harvard University Press, 2019

From two leading experts, a revolutionary new way to think about and measure aging.

Aging is a complex phenomenon. We usually think of chronological age as a benchmark, but it is actually a backward way of defining lifespan. It tells us how long we’ve lived so far, but what about the rest of our lives?

In this pathbreaking book, Warren C. Sanderson and Sergei Scherbov provide a new way to measure individual and population aging. Instead of counting how many years we’ve lived, we should think about the number of years we have left, our “prospective age.” Two people who share the same chronological age probably have different prospective ages, because one will outlive the other. Combining their forward-thinking measure of our remaining years with other health metrics, Sanderson and Scherbov show how we can generate better demographic estimates, which inform better policies. Measuring prospective age helps make sense of observed patterns of survival, reorients understanding of health in old age, and clarifies the burden of old-age dependency. The metric also brings valuable data to debates over equitable intergenerational pensions.
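
A rough numerical sketch of the idea, with invented life-expectancy schedules rather than the authors' data, is given below: a person's prospective age is found by asking at what age people in a reference year had the same remaining life expectancy that the person has today.

```python
import numpy as np

# Invented remaining-life-expectancy (RLE) schedules by age for a reference
# year and a later year; real schedules would come from period lifetables.
ages = np.arange(40, 91, 10)                               # 40, 50, ..., 90
rle_2000 = np.array([38.0, 29.0, 21.0, 14.0, 8.0, 4.0])   # reference year
rle_2020 = np.array([41.0, 32.0, 24.0, 16.0, 10.0, 5.0])  # later year

def prospective_age(age, rle_now, rle_standard):
    """Age in the reference year carrying the same remaining life expectancy."""
    target = np.interp(age, ages, rle_now)                 # RLE at this age today
    # RLE falls with age, so interpolate over the reversed (increasing) arrays.
    return float(np.interp(target, rle_standard[::-1], ages[::-1]))

# With these made-up schedules, a 60-year-old in 2020 has the remaining life
# expectancy of a mid-50s person in 2000, so their prospective age is below 60.
print(f"prospective age of a 60-year-old in 2020: "
      f"{prospective_age(60, rle_2020, rle_2000):.1f}")
```

The point of the sketch is that the same chronological age maps to different prospective ages as survival improves.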

Sanderson and Scherbov’s pioneering model has already been adopted by the United Nations. Prospective Longevity offers us all an opportunity to rethink aging, so that we can make the right choices for our societal and economic health.

Qualitative Comparative Analysis
An Introduction to Research Design and Application
Patrick A. Mello
Georgetown University Press, 2022

A comprehensive and accessible guide to learning and successfully applying QCA

Social phenomena can rarely be attributed to single causes—instead, they typically stem from a myriad of interwoven factors that are often difficult to untangle. Drawing on set theory and the language of necessary and sufficient conditions, qualitative comparative analysis (QCA) is ideally suited to capturing this causal complexity. A case-based research method, QCA regards cases as combinations of conditions and compares the conditions of each case in a structured way to identify the necessary and sufficient conditions for an outcome.
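
To give a feel for the set-theoretic measures involved, here is a minimal Python sketch that scores one hypothetical condition against one outcome using the standard crisp-set consistency formulas; the cases are invented, and the book itself works through the full truth-table procedure with an accompanying R manual, so this fragment is only an informal illustration.

```python
import numpy as np

# Hypothetical crisp-set data: each position is a case, coded 1 (present) or 0 (absent).
condition = np.array([1, 1, 1, 0, 0, 1, 0, 1])   # e.g., a hypothesized condition
outcome   = np.array([1, 1, 1, 0, 1, 1, 0, 0])   # e.g., the outcome of interest

# Sufficiency consistency: among cases where the condition is present,
# the share that also show the outcome  (|X and Y| / |X|).
sufficiency = (condition & outcome).sum() / condition.sum()

# Necessity consistency: among cases where the outcome is present,
# the share that also show the condition  (|X and Y| / |Y|).
necessity = (condition & outcome).sum() / outcome.sum()

print(f"consistency as a sufficient condition: {sufficiency:.2f}")
print(f"consistency as a necessary condition:  {necessity:.2f}")
```

In practice QCA evaluates combinations of conditions and minimizes a truth table, but every step rests on subset relations of this kind.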

Qualitative Comparative Analysis: An Introduction to Research Design and Application is a comprehensive guide to QCA. As QCA becomes increasingly popular across the social sciences, this textbook teaches students, scholars, and self-learners the fundamentals of the method, research design, interpretation of results, and how to communicate findings.

Following an ideal typical research cycle, the book’s ten chapters cover the methodological basis and analytical routine of QCA, as well as matters of research design, causation and causal complexity, QCA variants, and the method’s reception in the social sciences. A comprehensive glossary helps to clarify the meaning of frequently used terms. The book is complemented by an accessible online R manual that helps new users practice QCA’s analytical steps on sample data and then implement them with their own findings. This hands-on textbook is an essential resource for students and researchers looking for a complete and up-to-date introduction to QCA.

Say It with Data
A Concise Guide to Making Your Case and Getting Results
Priscille Dando
American Library Association, 2014

Scanner Data and Price Indexes
Edited by Robert C. Feenstra and Matthew D. Shapiro
University of Chicago Press, 2003
Every time you buy a can of tuna or a new television, its bar code is scanned to record its price and other information. These "scanner data" offer a number of attractive features for economists and statisticians, because they are collected continuously, are available quickly, and record prices for all items sold, not just a statistical sample. But scanner data also present a number of difficulties for current statistical systems.

Scanner Data and Price Indexes assesses both the promise and the challenges of using scanner data to produce economic statistics. Three papers present the results of work in progress at statistical agencies in the U.S., United Kingdom, and Canada, including a project at the U.S. Bureau of Labor Statistics to investigate the feasibility of incorporating scanner data into the monthly Consumer Price Index. Other papers demonstrate the enormous potential of using scanner data to test economic theories and estimate the parameters of economic models, and provide solutions for some of the problems that arise when using scanner data, such as dealing with missing data.
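
As one concrete example of what item-level prices and quantities make possible, the Python sketch below computes a Törnqvist price index from a toy set of scanned transactions. The items and numbers are invented, and the Törnqvist formula is simply one standard superlative index, not a method attributed to any particular chapter.

```python
import numpy as np

# Toy scanner data for three items in a base period (0) and a later period (1).
p0 = np.array([2.00, 5.00, 1.50])   # prices
q0 = np.array([100,   40,  200])    # quantities sold
p1 = np.array([2.20, 4.80, 1.65])
q1 = np.array([ 90,   50,  210])

# Expenditure shares in each period.
s0 = (p0 * q0) / (p0 * q0).sum()
s1 = (p1 * q1) / (p1 * q1).sum()

# Tornqvist index: average the two periods' shares and weight the log price changes.
log_index = (0.5 * (s0 + s1) * np.log(p1 / p0)).sum()
tornqvist = np.exp(log_index)

print(f"Tornqvist price index: {tornqvist:.4f}")   # values above 1 mean prices rose
```

Because scanner files record quantities as well as prices, weights like these can be updated continuously, which is part of the promise the volume examines.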

Selected Papers, Volume 3
Stochastic, Statistical, and Hydromagnetic Problems in Physics and Astronomy
S. Chandrasekhar
University of Chicago Press, 1989
This is the third of six volumes collecting significant papers of the distinguished astrophysicist and Nobel laureate S. Chandrasekhar. His work is notable for its breadth as well as for its brilliance; his practice has been to change his focus from time to time to pursue new areas of research. The result has been a prolific career full of discoveries and insights, some of which are only now being fully appreciated.

Chandrasekhar has selected papers that trace the development of his ideas and that present aspects of his work not fully covered in the books he has periodically published to summarize his research in each area.

This volume is divided into four sections. The first, on dynamical friction and Brownian motion, includes papers written after Chandrasekhar published his 1942 monograph Principles of Stellar Dynamics. Also in this section is "Stochastic Problems in Physics and Astronomy," one of the most cited papers in the physics literature, as well as papers written jointly with John von Neumann that have given impetus to recent research. As Chandrasekhar notes, the papers in the second section, on statistical problems in astronomy, were influenced by Ambartsumian's analysis of brightness in the Milky Way. A third section on the statistical theory of turbulence addresses issues still unresolved in fluid dynamics, and the last section is devoted to hydromagnetic problems in astrophysics that are not discussed in Chandrasekhar's monographs.

Statistical Explanation and Statistical Relevance
Wesley C. Salmon
University of Pittsburgh Press, 1971
According to modern physics, many objectively improbable events actually occur, such as the spontaneous disintegration of radioactive atoms. Because of high levels of improbability, scientists are often at a loss to explain such phenomena. In the main essay of this book, Wesley Salmon offers a solution to the problem of scientific explanation based on the concept of statistical relevance (the S-R model). In this vein, the other two essays herein discuss “Statistical Relevance vs. Statistical Inference” and “Explanation and Information.”

Statistical Techniques for High-Voltage Engineering
W. Hauschild
The Institution of Engineering and Technology, 1992
This book sets out statistical methods that can be used in the preparation, execution, evaluation and interpretation of experiments of a random nature. It also includes the assessment of test methods used in high-voltage engineering from a statistical standpoint, and contains detailed sections on breakdown statistics of typical electrical insulating arrangements. Separate special areas of mathematical statistics - such as statistical trial planning, questions of reliability, and stochastic processes - are mentioned briefly. The extensive bibliography points the way to more advanced work.

Statistics for Public Policy
A Practical Guide to Being Mostly Right (or at Least Respectably Wrong)
Jeremy G. Weber
University of Chicago Press, 2024

A long-overdue guide on how to use statistics to bring clarity, not confusion, to policy work.

Statistics are an essential tool for making, evaluating, and improving public policy. Statistics for Public Policy is a crash course in wielding these unruly tools to bring maximum clarity to policy work. Former White House economist Jeremy G. Weber offers an accessible voice of experience for the challenges of this work, focusing on seven core practices: 

  • Thinking big-picture about the role of data in decisions
  • Critically engaging with data by focusing on its origins, purpose, and generalizability
  • Understanding the strengths and limits of the simple statistics that dominate most policy discussions
  • Developing reasons for considering a number to be practically small or large  
  • Distinguishing correlation from causation and minor causes from major causes
  • Communicating statistics so that they are seen, understood, and believed
  • Maintaining credibility by being right (or at least respectably wrong) in every setting

Statistics for Public Policy dispenses with the opacity and technical language that have long made this space impenetrable; instead, Weber offers an essential resource for all students and professionals working at the intersections of data and policy interventions. This book is all signal, no noise.
[more]

front cover of Thicker Than Blood
Thicker Than Blood
How Racial Statistics Lie
Tukufu Zuberi
University of Minnesota Press, 2003
A clear explanation and provocative look at how racial statistics are produced, interpreted, and misused. In our complex and multicultural society, racial identity is often as much a matter of family background, economic opportunity, and geographic location as it is determined by skin color or hair texture. And yet study after study is released and reported in the media regarding African American test scores, Asian American social mobility, and the white domination of our political institutions. In short, there is a fundamental disconnect between the nuanced understanding many people have of race and the ways it is studied and quantified by researchers.

In this timely and hard-hitting volume, Tukufu Zuberi offers a concise account of the historical connections between the development of the idea of race and the birth of social statistics. Zuberi describes the ways race-differentiated data are misinterpreted in the social sciences and asks essential questions about the ways racial statistics are used: What is the value of knowing the differences between racial groups in income, crime and incarceration rates, test scores, infant mortality rates, abortion frequencies, or choices of sexual partner? When these data are available, what principles should guide their dissemination, interpretation, and analysis? How does the availability of this information shape public discourse, alter scientific research agendas, inform political decision making, and ultimately influence the very social meaning of racial difference?

When statistics are interpreted in a racist manner, no matter how inadvertent the racism may be, the public is exposed to seemingly neutral information that in its effect is anything but neutral. Zuberi argues that statistical analysis can and must be deracialized, and that this deracialization is essential to the goal of achieving social justice for all. He concludes by putting forward a principle of racially conscious social justice, offering an incendiary and necessary correction to the inaccuracies that have plagued this topic at the center of American life.

“Zuberi, who was named one of Philadelphia's 76 smartest people by Philadelphia Magazine, has written a brilliant new book, Thicker Than Blood. One of the most powerful claims of the book is that instead of being a fixed biological reality, race is instead a socially produced phenomenon. His point is to show just how vicious the notion of race has been, especially through the use of statistics, when it has been employed to protect the interest of those in power (whites), especially those who say that because race does not exist, racism is not real.” (Michael Eric Dyson, The Chicago Sun-Times)

“A call to action and, Zuberi hopes, a precursor to a conversation about the real meaning of race, ethnicity, and political power in America.” (Time Magazine)

“Tukufu Zuberi's critical assessment of the analysis of racial data in Thicker Than Blood is a tour de force. His discussion and evaluation of the use of racial statistics in historical and cross-cultural contexts is original and important. I strongly feel that all students and scholars in the social sciences should read this thoughtful book.” (William Julius Wilson)

Tukufu Zuberi is professor of sociology and director of the African Census Analysis Project at the University of Pennsylvania. He is the author of Swing Low, Sweet Chariot (1995).
[more]

front cover of Understanding Crime and Place
Understanding Crime and Place
A Methods Handbook
Edited by Elizabeth R. Groff and Cory P. Haberman
Temple University Press, 2023

Place has become both a major field of criminological study and an important area for policy development. Capturing state-of-the-art methods and analysis for crime and place research, Understanding Crime and Place is a comprehensive handbook focused on the specific skills researchers need.

The editors and contributors are scholars who have been fundamental in introducing or developing particular methods for crime and place research. Understanding Crime and Place is organized around the scientific process: it introduces major crime and place theories and concepts, discussions of data and data collection, core spatial data concepts, and statistical and computational techniques for analyzing spatial data and conducting place-based evaluation. The lessons in the book are supplemented by additional instructions, examples, problems, and datasets available for download.

Place-based research is an emerging field that requires a wide range of cutting-edge methods and analysis techniques, many of which are only beginning to be widely taught in criminology. Understanding Crime and Place bridges that gap, formalizes the discipline, and promotes even greater use of place-based research.

Contributors: Martin A. Andresen, Matthew P J Ashby, Eric Beauregard, Wim Bernasco, Daniel Birks, Hervé Borrion, Kate Bowers, Anthony A. Braga, Tom Brenneman, David Buil-Gil, Meagan Cahill, Stefano Caneppele, Julien Chopin, Jeffrey E. Clutter, Toby Davies, Hashem Dehghanniri, Jillian Shafer Desmond, Beidi Dong, John E. Eck, Miriam Esteve, Timothy C. Hart, Georgia Hassall, David N. Hatten, Julie Hibdon, James Hunter, Shane D. Johnson, Samuel Langton, YongJei Lee, Ned Levine, Brian Lockwood, Dominique Lord, Nick Malleson, Dennis Mares, David Mazeika, Lorraine Mazerolle, Asier Moneva, Andrew Newton, Bradley J. O’Guinn, Ajima Olaghere, Graham C. Ousey, Ken Pease, Eric L. Piza, Jerry Ratcliffe, Caterina G. Roman, Stijn Ruiter, Reka Solymosi, Evan T. Sorg, Wouter Steenbeek, Hannah Steinman, Ralph B. Taylor, Marie Skubak Tillyer, Lisa Tompson, Brandon Turchan, David Weisburd, Brandon C. Welsh, Clair White, Douglas J. Wiebe, Pamela Wilcox, David B. Wilson, Alese Wooditch, Kathryn Wuschke, Sue-Ming Yang, and the editors.

[more]

logo for American Library Association
Using Digital Analytics for Smart Assessment
Tabatha Farney
American Library Association, 2017

front cover of The Vegetation of Wisconsin
The Vegetation of Wisconsin
An Ordination of Plant Communities
John T. Curtis
University of Wisconsin Press, 1959
One of the most important contributions to the field of plant ecology during the twentieth century, this definitive survey established the geographical limits, the species compositions, and, as far as possible, the environmental relations of the communities composing the vegetation of Wisconsin.
[more]

logo for American Library Association
Web Analytics Strategies for Information Professionals
A LITA Guide
Tabatha Farney
American Library Association, 2013

