AC-DC Power System Analysis
Jos Arrillaga The Institution of Engineering and Technology, 1998 Library of Congress TK1005.A758 1998 | Dewey Decimal 621.31
With the expansion of HV DC transmission throughout the world, and the increasing numbers of international interconnections, few power systems can continue to escape the effect of this technology in their planning and operation. The primary subject of this book is the incorporation of AC-DC converters and DC transmission in power system analysis. However, the concepts and methods described are also applicable to the FACTS (flexible AC transmission systems) technology.
As well as conventional power flows, faults and stability, the book describes the simulation of power system steady-state waveform distortion and transient behaviour. The modelling of power electronic devices in electromagnetic transient programs is given prominence, as these programs have become the 'workhorse' of power system design.
This graduate-level text should be of interest to practising engineers and researchers involved in the solution of modern power system problems. It will also be a useful reference for advanced electrical engineering students.
Historically, electromagnetics and complex circuit modelling existed as separate disciplines, each with their own tools, models and even languages. More recently, however, the emergence of very high-speed digital circuits and pressure on the telecommunications market to move towards microwave and millimetre wave bands are increasing the need to find ways to combine the two fields.
The consumer market demands low cost mass production devices operating at higher frequencies where the finite dimensions of the circuits with respect to wavelength can no longer be ignored. Similarly, integrated planar microwave circuits pose new modelling challenges, as neither conductors nor dielectrics can be considered as ideal at these frequencies and, consequently, most techniques and formulas developed over the past twenty years for dealing with ideal thin conductors no longer model the physical reality.
These challenges are the main subject of this book, which investigates analytical techniques encompassing the linear modelling of passive and active - in particular FET - structures. This timely book was primarily conceived as a bridge between the mathematical abilities of the pure EM theorist and those of the FET circuit modeller. However, the resulting text will be of equal benefit to researchers in microwave and millimetric components and as a textbook for specialised courses.
New political parties have regularly appeared in developed democracies around the world. In some countries issues focusing on the environment, immigration, economic decline, and regional concerns have been brought to the forefront by new political parties. In other countries these issues have been addressed by established parties, and new issue-driven parties have failed to form. Most current research is unable to explain why under certain circumstances new issues or neglected old ones lead to the formation of new parties. Based on a novel theoretical framework, this study demonstrates the crucial interplay between established parties and possible newcomers to explain the emergence of new political parties.
Deriving stable hypotheses from a simple theoretical model, the book proceeds to a study of party formation in twenty-two developed democracies. New or neglected issues still appear as a driving force in explaining the emergence of new parties, but their effect is partially mediated by institutional factors, such as access to the ballot, public support for parties, and the electoral system. The hypotheses in part support existing theoretical work, but in part present new insights. The theoretical model also pinpoints problems of research design that are hardly addressed in the comparative literature on new political parties. These insights from the theoretical model lead to empirical tests that improve on those employed in the literature and allow for a much-enhanced understanding of the formation and the success of new parties.
Simon Hug is Lecturer in Political Science, University of Geneva.
Most antennas are assembled from conducting surfaces and wires. The usual approach to numerical analysis of such structures is to approximate them by small surface or wire elements, with simple current approximation over the elements (the so-called subdomain approach), which requires a large amount of computer storage. This book describes a novel general entire-domain method for the analysis of metallic antennas and scatterers which enables the solution of a very wide class of problems to be obtained using computers of relatively modest capability.
Antenna-design engineers, scientists and graduate students interested in the analysis and design of electrically small and medium-sized metallic antennas and scatterers will find in this book a self-contained and extremely powerful tool that circumvents most difficulties encountered in other available methods.
Animals lead rich social lives. They care for one another, compete for resources, and mate. Within a society, social relationships may be simple or complex and usually vary considerably, both between different groups of individuals and over time. These social systems are fundamental to biological organization, and animal societies are central to studies of behavioral and evolutionary biology. But how do we study animal societies? How do we take observations of animals fighting, grooming, or forming groups and produce a realistic description or model of their societies?
Analyzing Animal Societies presents a conceptual framework for analyzing social behavior and demonstrates how to put this framework into practice by collecting suitable data on the interactions and associations of individuals so that relationships can be described, and, from these, models can be derived. In addition to presenting the tools, Hal Whitehead illustrates their applicability using a wide range of real data on a variety of animal species—from bats and chimps to dolphins and birds. The techniques that Whitehead describes will be profitably adopted by scientists working with primates, cetaceans, birds, and ungulates, but the tools can be used to study societies of invertebrates, amphibians, and even humans. Analyzing Animal Societies will become a standard reference for those studying vertebrate social behavior and will give to these studies the kind of quality standard already in use in other areas of the life sciences.
Non-metallic materials and composites are now commonplace in modern vehicle construction, and the need to compute scattering and other electromagnetic phenomena in the presence of material structures has led to the development of new simulation techniques.
This book describes a variety of methods for the approximate simulation of material surfaces, and provides the first comprehensive treatment of boundary conditions in electromagnetics. The genesis and properties of impedance, resistive sheet, conductive sheet, generalised (or higher order) and absorbing (or non-reflecting) boundary conditions are discussed. Applications to diffraction by numerous canonical geometries and impedance (coated) structures are presented, and accuracy and uniqueness issues are also addressed. High-frequency techniques such as the physical and geometrical theories of diffraction are introduced, and more than 130 figures illustrate the results, many of which have not appeared previously in the literature.
Written by two of the authorities in the field, this graduate-level text should be of interest to all scientists and engineers concerned with the analytical and numerical solution of electromagnetic problems.
Fort Polk Military Reservation encompasses approximately 139,000 acres in western Louisiana 40 miles southwest of Alexandria. As a result of federal mandates for cultural resource investigation, more archaeological work has been undertaken there, beginning in the 1970s, than has occurred at any other comparably sized area in Louisiana or at most other localities in the southeastern United States. The extensive program of survey, excavation, testing, and large-scale data and artifact recovery, as well as historic and archival research, has yielded a massive amount of information. While superbly curated by the U.S. Army, the material has been difficult to examine and comprehend in its totality.
With this volume, Anderson and Smith collate and synthesize all the information into a comprehensive whole. Included are previous investigations, an overview of local environmental conditions, base military history and architecture, and the prehistoric and historic cultural sequence. An analysis of location, environmental, and assemblage data employing a sample of more than 2,800 sites and isolated finds was used to develop a predictive model that identifies areas where significant cultural resources are likely to occur. Developed in 1995, this model has already proven to be highly accurate and easy to use.
Archaeology, History, and Predictive Modeling will allow scholars to more easily examine the record of human activity over the past 13,000 or more years in this part of western Louisiana and adjacent portions of east Texas. It will be useful to southeastern archaeologists and anthropologists, both professional and amateur.
David G. Anderson is an archaeologist with the National Park Service's Southeast Archeological Center in Tallahassee, Florida, and coeditor of The Woodland Southeast. Steven D. Smith is with SCIAA in Columbia, South Carolina. J.W. Joseph and Mary Beth Reed are with New South Associates in Stone Mountain, Georgia.
Why do consumer prices and wages adjust so slowly to changes in market conditions? The rigidity or stickiness of price setting in business is central to Keynesian economic theory and a key to understanding how monetary policy works, yet economists have made little headway in determining why it occurs. Asking About Prices offers a groundbreaking empirical approach to a puzzle for which theories abound but facts are scarce. Leading economist Alan Blinder, along with co-authors Elie Canetti, David Lebow, and Jeremy B. Rudd, interviewed a national, multi-industry sample of 200 CEOs, company heads, and other corporate price setters to test the validity of twelve prominent theories of price stickiness. Using everyday language and pertinent scenarios, the carefully designed survey asked decisionmakers how prominently these theoretical concerns entered into their own attitudes and thought processes. Do businesses tend to view the costs of changing prices as prohibitive? Do they worry that lower prices will be equated with poorer quality goods? Are firms more likely to try alternate strategies to changing prices, such as warehousing excess inventory or improving their quality of service? To what extent are prices held in place by contractual agreements, or by invisible handshakes? Asking About Prices offers a gold mine of previously unavailable information. It affirms the widespread presence of price stickiness in American industry, and offers the only available guide to such business details as what fraction of goods are sold by fixed price contract, how often transactions involve repeat customers, and how and when firms review their prices. Some results are surprising: contrary to popular wisdom, prices do not increase more easily than they decrease, and firms do not appear to practice anticipatory pricing, even when they can foresee cost increases. 
Asking About Prices also offers a chapter-by-chapter review of the survey findings for each of the twelve theories of price stickiness. The authors determine which theories are most popular with actual price setters, how practices vary within different business sectors, across firms of different sizes, and so on. They also direct economists' attention toward a rationale for price stickiness that does not stem from conventional theory, namely a strong reluctance by firms to antagonize or inconvenience their customers. By illuminating how company executives actually think about price setting, Asking About Prices provides an elegant model of a valuable new approach to conducting economic research.
In this volume, specialists from traditionally separate areas in economics and finance investigate issues at the conjunction of their fields. They argue that financial decisions of the firm can affect real economic activity—and this is true for enough firms and consumers to have significant aggregate economic effects. They demonstrate that important differences—asymmetries—in access to information between "borrowers" and "lenders" ("insiders" and "outsiders") in financial transactions affect investment decisions of firms and the organization of financial markets. The original research emphasizes the role of information problems in explaining empirically important links between internal finance and investment, as well as their role in accounting for observed variations in mechanisms for corporate control.
There have been significant developments in the field of numerical methods for diffraction problems in recent years, and as a result, it is now possible to perform computations with more than ten million unknowns. However, the importance of asymptotic methods should not be overlooked. Not only do they provide considerable physical insight into diffraction mechanisms, and can therefore aid the design of electromagnetic devices such as radar targets and antennas, but some objects are still too large in terms of wavelengths to fall within the realm of numerical methods. Furthermore, very low Radar Cross Section objects are often difficult to compute using numerical methods. Finally, objects that are very large in terms of wavelength, but with complicated details, are still a challenge for both asymptotic and numerical methods. The most promising solution for these problems, now widely explored, is to combine various methods in so-called hybrid methods.
Asymptotic and Hybrid Methods in Electromagnetics is based on a short course and presents recent developments in the field.
By relating grammar to cognitive architecture, John T. Hale shows step-by-step how incremental parsing works in models of perceptual processing and how specific learning rules might lead to frequency-sensitive preferences. Along the way, Hale reconsiders garden-pathing, the parallel/serial distinction, and information-theoretical complexity metrics, such as surprisal. This book is a must for cognitive scientists of language.
In every generation, Americans have worried about the solidarity of the nation. Since the days of the Mayflower, those already settled here have wondered how newcomers with different cultures, values, and (frequently) skin color would influence America. Would the new groups create polarization and disharmony? Thus far, the United States has a remarkable track record of incorporating new people into American society, but acceptance and assimilation have never meant equality. In Century of Difference, Claude Fischer and Michael Hout provide a compelling—and often surprising—new take on the divisions and commonalities among the American public over the tumultuous course of the twentieth century. Using a hundred years' worth of census and opinion poll data, Century of Difference shows how the social, cultural, and economic fault lines in American life shifted in the last century. It demonstrates how distinctions that once loomed large later dissipated, only to be replaced by new ones. Fischer and Hout find that differences among groups by education, age, and income expanded, while those by gender, region, national origin, and, even in some ways, race narrowed. As the twentieth century opened, a person's national origin was of paramount importance, with hostilities running high against Africans, Chinese, and southern and eastern Europeans. Today, diverse ancestries are celebrated with parades. More important than ancestry for today's Americans is their level of schooling. Americans with advanced degrees are increasingly putting distance between themselves and the rest of society—in both a literal and a figurative sense. Differences in educational attainment are tied to expanding inequalities in earnings, job quality, and neighborhoods. Still, there is much that ties all Americans together. Century of Difference knocks down myths about a growing culture war.
Using seventy years of survey data, Fischer and Hout show that Americans did not become more fragmented over values in the late-twentieth century, but rather were united over shared ideals of self-reliance, family, and even religion. As public debate has flared up over such matters as immigration restrictions, the role of government in redistributing resources to the poor, and the role of religion in public life, it is important to take stock of the divisions and linkages that have typified the U.S. population over time. Century of Difference lucidly profiles the evolution of American social and cultural differences over the last century, examining the shifting importance of education, marital status, race, ancestry, gender, and other factors on the lives of Americans past and present. A Volume in the Russell Sage Foundation Census Series
Chaos Theory in the Social Sciences: Foundations and Applications offers the most recent thinking in applying the chaos paradigm to the social sciences. The book explores the methodological techniques--and their difficulties--for determining whether chaotic processes may in fact exist in a particular instance and examines implications of chaos theory when applied specifically to political science, economics, and sociology. The contributors to the book show that no single technique can be used to diagnose and describe all chaotic processes and identify the strengths and limitations of a variety of approaches.
The essays in this volume consider the application of chaos theory to such diverse phenomena as public opinion, the behavior of states in the international arena, the development of rational economic expectations, and long waves.
Contributors include Brian J. L. Berry, Thad Brown, Kenyon B. DeGreene, Dimitrios Dendrinos, Euel Elliott, David Harvey, L. Ted Jaditz, Douglas Kiel, Heja Kim, Michael McBurnett, Michael Reed, Diana Richards, J. Barkley Rosser, Jr., and Alvin M. Saperstein.
L. Douglas Kiel and Euel W. Elliott are both Associate Professors of Government, Politics, and Political Economy, University of Texas at Dallas.
In this insightful work, Martin H. Krieger shows what physicists are really doing when they employ mathematical models as research tools. He argues that the technical details of these complex calculations serve not only as a means to an end, but also reveal key aspects of the physical properties they model.
Krieger's lucid discussions will help readers to appreciate the larger physical issues behind the mathematical detail of modern physics and gain deeper insights into how theoretical physicists work. Constitutions of Matter is a rare, behind-the-scenes glimpse into the world of modern physics.
"[Krieger] provides students of physics and applied mathematics with a view of the physical forest behind the mathematical trees, historians and philosophers of science with insights into how theoretical physicists go about their work, and technically advanced general readers with a glimpse into the discipline."—Scitech Book News
This comprehensive book covers the state-of-the-art in control-oriented modelling and identification techniques. With contributions from leading researchers in the subject, Control-oriented Modelling and Identification: Theory and practice covers the main methods and tools available to develop advanced mathematical models suitable for control system design, including: object-oriented modelling and simulation; projection-based model reduction techniques; integrated modelling and parameter estimation; identification for robust control of complex systems; subspace-based multi-step predictors for predictive control; closed-loop subspace predictive control; structured nonlinear system identification; and linear fractional LPV model identification from local experiments using an H∞-based glocal approach.
This book also takes a practical look at a variety of applications of advanced modelling and identification techniques covering spacecraft dynamics, vibration control, rotorcrafts, models of anaerobic digestion, a brake-by-wire racing motorcycle actuator, and robotic arms.
Albert Einstein’s theory of general relativity describes the effect of gravitation on the shape of space and the flow of time. But for more than four decades after its publication, the theory remained largely a curiosity for scientists; however accurate it seemed, Einstein’s mathematical code—represented by six interlocking equations—was one of the most difficult to crack in all of science. That is, until a twenty-nine-year-old Cambridge graduate solved the great riddle in 1963. Roy Kerr’s solution emerged coincidentally with the discovery of black holes that same year and provided fertile testing ground—at long last—for general relativity. Today, scientists routinely cite the Kerr solution, but even among specialists, few know the story of how Kerr cracked Einstein’s code.
Fulvio Melia here offers an eyewitness account of the events leading up to Kerr’s great discovery. Cracking the Einstein Code vividly describes how luminaries such as Karl Schwarzschild, David Hilbert, and Emmy Noether set the stage for the Kerr solution; how Kerr came to make his breakthrough; and how scientists such as Roger Penrose, Kip Thorne, and Stephen Hawking used the accomplishment to refine and expand modern astronomy and physics. Today more than 300 million supermassive black holes are suspected of anchoring their host galaxies across the cosmos, and the Kerr solution is what astronomers and astrophysicists use to describe much of their behavior.
By unmasking the history behind the search for a real world solution to Einstein’s field equations, Melia offers a first-hand account of an important but untold story. Sometimes dramatic, often exhilarating, but always attuned to the human element, Cracking the Einstein Code is ultimately a showcase of how important science gets done.
Since the 1970s, two major trends have emerged among developing countries: the rise of new democracies and the rush to free trade. For some, the confluence of these events suggests that a free-market economy complements a fledgling democracy. Others argue that the two are inherently incompatible and that exposure to economic globalization actually jeopardizes new democracies. Which view is correct? Bumba Mukherjee argues that the reality of how democracy and trade policy unravel in developing countries is more nuanced than either account.
Mukherjee offers the first comprehensive cross-national framework for identifying the specific economic conditions that influence trade policy in developing countries. Laying out the causes of variation in trade policy in four developing or recently developed countries—Brazil, India, Indonesia, and South Africa—he argues persuasively that changing political interactions among parties, party leaders, and the labor market are often key to trade policy outcome. For instance, if workers are in a position to benefit from opening up to trade, party leaders in turn support trade reforms by decreasing tariffs and other trade barriers.
At a time when discussions about the stability of new democracies are at the forefront, Democracy and Trade Policy in Developing Countries provides invaluable insight into the conditions needed for a democracy to survive in the developing world in the context of globalization.
Derivatives were responsible for one of the worst financial meltdowns in history, one from which we have not yet fully recovered. However, they are likewise capable of generating some of the most incredible wealth we have ever seen. This book asks how we might ensure the latter while avoiding the former. Looking past the usual arguments for the regulation or abolition of derivative finance, it asks a more probing question: what kinds of social institutions and policies would we need to put in place to both avail ourselves of the derivative’s wealth production and make sure that production benefits all of us?
To answer that question, the contributors to this book draw upon their deep backgrounds in finance, social science, art, and the humanities to create a new way of understanding derivative finance that does justice to its social and cultural dimensions. They offer a two-pronged analysis. First, they develop a social understanding of the derivative that casts it in the light of anthropological concepts such as the gift, ritual, play, dividuality, and performativity. Second, they develop a derivative understanding of the social, using financial concepts such as risk, hedging, optionality, and arbitrage to uncover new dimensions of contemporary social reality. In doing so, they construct a necessary, renewed vision of derivative finance as a deeply embedded aspect not just of our economics but our culture.
Glocal control, a term coined by Professor Shinji Hara at The University of Tokyo, represents a new framework for studying behaviour of complex dynamical systems from a feedback control perspective. A large number of dynamical components can be interconnected and interact with each other to form an integrated system with certain functionalities. Such complex systems are found in nature and have been created by man, including gene regulatory networks, neuronal circuits for memory, decision making, and motor control, bird flocking, global climate dynamics, central processing units for computers, electrical power grids, the World Wide Web, and financial markets. A common feature of these systems is that a global property or function emerges as a result of local, distributed, dynamical interactions of components. The objective of 'glocal' (global + local) control is to understand the mechanisms underlying this feature, analyze existing complex systems, and to design and create innovative systems with new functionalities. This book is dedicated to Professor Shinji Hara on the occasion of his 60th birthday, collecting the latest results by leading experts in control theories to lay a solid foundation towards the establishment of glocal control theory in the coming decades.
Distributed feedback (DFB) semiconductor lasers emit light in a single mode, which is essential for providing the carrier in long-haul, high bit-rate optical communication systems. This comprehensive research monograph provides:
thorough analysis of the operation and design of DFB lasers
a high level of tutorial discussion with many valuable appendices
the first full account of time-domain numerical modelling techniques applicable to future optical systems as well as present devices
Web access to a suite of MATLAB programs (student version MATLAB 4 or higher).
It is essential reading for those studying optical communications at graduate and advanced undergraduate level, and a key book for industrial designers of opto-electronic devices.
Edgar Allan Poe - American Writers 89 was first published in 1970. Minnesota Archive Editions uses digital technology to make long-unavailable books once again accessible; these books are published unaltered from the original University of Minnesota Press editions.
Understanding ion channel gating has been a goal of researchers since Hodgkin and Huxley's classic publication in 1952, but the gating mechanism has remained elusive. In this book it is shown how electrons can control gating. Introducing the electron as a gating agent requires amplification, but until now there has been no appropriate mechanism for amplification.
It was discovered that NH3 groups, at the end of arginine and lysine side chains, act as amplifiers for electron tunneling. The amplification is about 25-fold, due to the inversion of NH3. The inversion frequency for NH3 attached to the side chain was calculated from an electron-gating model, calibrated to the Hodgkin-Huxley model. It was also determined experimentally in Blue Fluorescent Protein, using a new microwave spectroscopy technique developed specifically for this purpose. The inversion frequency of NH3 in the gas phase occurs at about 24 GHz and is used by the ammonia clock and for amplification in the ammonia maser. This frequency, reduced by the attachment of NH3 to the side chain, is the basis for amplification of electron tunneling at arginine and lysine sites. Detailed models for electron gating in sodium and potassium ion channels are described and an electron-gating model for calcium oscillators is presented.
The book is a documentation of the author's research and is oriented towards research workers in the biological sciences. The new quantum-mechanical approach to ion channel gating elucidates mechanisms important to cellular function and signaling.
Equality of Opportunity
John E. ROEMER Harvard University Press, 1998 Library of Congress HB846.R63 1998 | Dewey Decimal 330.126
John Roemer points out that there are two views of equality of opportunity that are widely held today. The first, which he calls the nondiscrimination principle, states that in the competition for positions in society, individuals should be judged only on attributes relevant to the performance of the duties of the position in question. Attributes such as race or sex should not be taken into account. The second states that society should do what it can to level the playing field among persons who compete for positions, especially during their formative years, so that all those who have the relevant potential attributes can be considered.
Common to both positions is that at some point the principle of equal opportunity holds individuals accountable for achievements of particular objectives, whether they be education, employment, health, or income. Roemer argues that there is consequently a "before" and an "after" in the notion of equality of opportunity: before the competition starts, opportunities must be equalized, by social intervention if need be; but after it begins, individuals are on their own. The different views of equal opportunity should be judged according to where they place the starting gate which separates "before" from "after." Roemer works out in a precise way how to determine the location of the starting gate in the different views.
Classroom-tested over many years and filled with fresh examples, Essential Demographic Methods is tailored to beginners, advanced students, and researchers. Award-winning teacher and eminent demographer Kenneth Wachter draws on themes from the individual lifecourse, history, and global change to bring out the wider appeal of demography.
At a time of unprecedented expansion in the life sciences, evolution is the one theory that transcends all of biology. Any observation of a living system must ultimately be interpreted in the context of its evolution. Evolutionary change is the consequence of mutation and natural selection, which are two concepts that can be described by mathematical equations. Evolutionary Dynamics is concerned with these equations of life. In this book, Martin A. Nowak draws on the languages of biology and mathematics to outline the mathematical principles according to which life evolves. His work introduces readers to the powerful yet simple laws that govern the evolution of living systems, no matter how complicated they might seem.
Evolution has become a mathematical theory, Nowak suggests, and any idea of an evolutionary process or mechanism should be studied in the context of the mathematical equations of evolutionary dynamics. His book presents a range of analytical tools that can be used to this end: fitness landscapes, mutation matrices, genomic sequence space, random drift, quasispecies, replicators, the Prisoner’s Dilemma, games in finite and infinite populations, evolutionary graph theory, games on grids, evolutionary kaleidoscopes, fractals, and spatial chaos. Nowak then shows how evolutionary dynamics applies to critical real-world problems, including the progression of viral diseases such as AIDS, the virulence of infectious agents, the unpredictable mutations that lead to cancer, the evolution of altruism, and even the evolution of human language. His book makes a clear and compelling case for understanding every living system—and everything that arises as a consequence of living systems—in terms of evolutionary dynamics.
John G. Cragg and Burton G. Malkiel collected detailed forecasts by professional investors concerning the growth of 175 companies and used this information to examine the impact of such forecasts on the market evaluations of the companies and to test and extend traditional models of how stock market values are determined.

The United States is distinctive among Western countries in its reliance on nonprofit institutions to perform major social functions. This reliance is rooted in American history and is fostered by federal tax provisions for charitable giving. In this study, Charles T. Clotfelter demonstrates that changes in tax policy—effected through legislation or inflation—can have a significant impact on the level and composition of giving.
Clotfelter focuses on empirical analysis of the effects of tax policy on charitable giving in four major areas: individual contributions, volunteering, corporate giving, and charitable bequests. For each area, discussions of economic theory and relevant tax law precede a review of the data and methodology used in econometric studies of charitable giving. In addition, new econometric analyses are presented, as well as empirical data on the effect of taxes on foundations.
While taxes are not the most important determinant of contributions, the results of the analyses presented here suggest that charitable deductions, as well as tax rates and other aspects of the tax system, are significant factors in determining the size and distribution of charitable giving. This work is a model for policy-oriented research efforts, but it also supplies a major (and very timely) addition to the evidence that must inform future proposals for tax reform.
Free to Lose
John E. Roemer Harvard University Press, 1988 Library of Congress HB97.5.R6162 1988 | Dewey Decimal 335.412
John Roemer challenges the morality of an economic system based on the private ownership of the means of production. Unless you start with a certain amount of wealth in such a society, you are only "free to lose." This book addresses crucial questions of political philosophy and normative economics in terms understandable by readers with a minimal knowledge of economics.
Game theory provides a way to study the strategic interaction of individuals. Despite the long history shared by game theory and political science, many political scientists remain unaware of the game theoretic techniques that have been developed over the years. As a result, they use overly simple games to illustrate complex processes. Games, Information, and Politics is written for political scientists who have an interest in game theory but do not yet understand how it can be used to improve our understanding of politics. To address this problem, Gates and Humes write for scholars who have little or no training in formal theory and demonstrate how game theoretic analysis can be applied to politics. They apply game theoretic models to three subfields of political science: American politics, comparative politics, and international relations, drawing on three distinct pieces of research to demonstrate how game theory can be applied to each. By using real research problems from current projects--not hypothetical questions--the authors develop their discussion of various techniques and demonstrate how to apply game theoretic models to help answer important political questions. Emphasizing the process of applying game theory, Gates and Humes clear up some common misperceptions about game theory and show how it can be used to improve our understanding of politics.
Games, Information, and Politics is written for scholars interested in understanding how game theory is used to model strategic interactions. It will appeal to sociologists and economists as well as political scientists.
Scott Gates is Assistant Professor of Political Science, Michigan State University. Brian D. Humes is Associate Professor of Political Science, University of Nebraska-Lincoln.
This book reports the authors' research on one of the most sophisticated general equilibrium models designed for tax policy analysis. Significantly disaggregated and incorporating the complete array of federal, state, and local taxes, the model represents the U.S. economy and tax system in a large computer package. The authors consider modifications of the tax system, including those being raised in current policy debates, such as consumption-based taxes and integration of the corporate and personal income tax systems. A counterfactual economy associated with each of these alternatives is generated, and the possible outcomes are compared.
This book presents an exposition of general equilibrium theory for advanced undergraduate and graduate-level students of economics. It contains discussions of economic efficiency, competitive equilibrium, the welfare theorems, the Kuhn-Tucker approach to general equilibrium, the Arrow-Debreu model, and rational expectations equilibrium and the permanent income hypothesis. It presents a unified approach to portions of macro- as well as microeconomic theory and contains problems sets for most chapters.
Nonclassical logics have played an increasing role in recent years in disciplines ranging from mathematics and computer science to linguistics and philosophy. Generalized Galois Logics develops a uniform framework of relational semantics to mediate between logical calculi and their semantics through algebra. This volume addresses normal modal logics such as K and S5, and substructural logics, including relevance logics, linear logic, and Lambek calculi. The authors also treat less-familiar and new logical systems with equal deftness.
A book for engineers who design and build filters of all types, including lumped element, coaxial, helical, dielectric resonator, stripline and microstrip types. A thorough review of classic and modern filter design techniques, containing extensive practical design information of passband characteristics, topologies and transformations, component effects and matching. An excellent text for the design and construction of microstrip filters.
Although complexity surrounds us, its inherent uncertainty, ambiguity, and contradiction can at first make complex systems appear inscrutable. Ecosystems, for instance, are nonlinear, self-organizing, seemingly chaotic structures in which individuals interact both with each other and with the myriad biotic and abiotic components of their surroundings across geographies as well as spatial and temporal scales. In the face of such complexity, ecologists have long sought tools to streamline and aggregate information. Among them, in the 1980s, T. F. H. Allen and Thomas B. Starr implemented a burgeoning concept from business administration: hierarchy theory. Cutting-edge when Hierarchy was first published, their approach to unraveling complexity is now integrated into mainstream ecological thought.
This thoroughly revised and expanded second edition of Hierarchy reflects the assimilation of hierarchy theory into ecological research, its successful application to the understanding of complex systems, and the many developments in thought since. Because hierarchies and levels are habitual parts of human thinking, hierarchy theory has proven to be the most intuitive and tractable vehicle for addressing complexity. By allowing researchers to look explicitly at only the entities and interconnections that are relevant to a specific research question, hierarchically informed data analysis has enabled a revolution in ecological understanding. With this new edition of Hierarchy, that revolution continues.
Henri Leridon University of Chicago Press, 1977 Library of Congress QP251.L4213 | Dewey Decimal 612.6
In this innovative and comprehensive work, expanded by one-third for the English-language edition, Henri Leridon integrates biology and demography to investigate human fertility, both natural and controlled. Traditionally, demographers have been concerned with birthrates in different populations under varying conditions, while biologists have limited themselves to the study of the reproductive process. Leridon has formulated the first coherent overview of the functioning of the human reproductive system in relation to the external conditions that affect fertility.
The book begins with a readable, authoritative review of human fertility in its natural state. Leridon summarizes and evaluates current knowledge, drawing together rare statistical data on physiological variables as well as demographic treatments of these data. After discussing the classical framework used by demographers, Leridon undertakes a "microdemographic" analysis in which he focuses on the individual and explicates the biological processes through which social, psychological, and economic factors affect fertility. He isolates its components—fecundability, intrauterine mortality, the physiological nonsusceptible period, and sterility—then reviews the composite effect of variation in any one component.
Leridon considers situations of controlled fertility: contraception, abortion, and sterilization. The author also presents valuable new data from his own investigations of varying risks of intrauterine mortality. Finally, he shows how the previous approaches can be complemented by the use of mathematical models.
This compact and original exposition of optimal control theory and applications is designed for graduate and advanced undergraduate students in economics. It presents a new elementary yet rigorous proof of the maximum principle and a new way of applying the principle that will enable students to solve any one-dimensional problem routinely. Its unified framework illuminates many famous economic examples and models.
This work also emphasizes the connection between optimal control theory and the classical themes of capital theory. It offers a fresh approach to fundamental questions such as: What is income? How should it be measured? What is its relation to wealth?
The book will be valuable to students who want to formulate and solve dynamic allocation problems. It will also be of interest to any economist who wants to understand results of the latest research on the relationship between comprehensive income accounting and wealth or welfare.
Table of Contents:
Part I. Introduction to the Maximum Principle 1. The Calculus of Variations and the Stationary Rate of Return on Capital 2. The Prototype-Economic Control Problem 3. The Maximum Principle in One Dimension 4. Applications of the Maximum Principle in One Dimension
Part II. Comprehensive Accounting and the Maximum Principle 5. Optimal Multisector Growth and Dynamic Competitive Equilibrium 6. The Pure Theory of Perfectly Complete National Income Accounting 7. The Stochastic Wealth and Income Version of the Maximum Principle
This book brings together Professor Arthur’s pioneering articles and provides a comprehensive presentation of his exciting vision of an economics that incorporates increasing returns. After a decade of resistance from economists, these ideas are now being widely discussed and adopted, as Kenneth Arrow recounts in his foreword. In fundamental ways they are changing our views of the working economy.
This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the most advanced and current solutions.
This is the first book to discuss the solution of two-dimensional integral equations in many forms of their application and utility. As 2D problems are simpler to discuss, the student and basic reader can gain the necessary expertise before diving into 3D applications. This is also the first basic text to cover fast integral methods for metallic, impedance, and material geometries. It will provide the student or advanced reader with a fairly complete and up-to-date coverage of integral methods for composite scatterers.
The International Transmission of Inflation
Michael R. Darby, James R. Lothian, Arthur E. Gandolfi, Anna J. Schwartz, and Al University of Chicago Press, 1984 Library of Congress HG229.D35 1983 | Dewey Decimal 332.41
Inflation became the dominant economic, social, and political problem of the industrialized West during the 1970s. This book is about how the inflation came to pass and what can be done about it. Certain to provoke controversy, it is a major source of new empirical information and theoretical conclusions concerning the causes of international inflation.
The authors construct a consistent data base of information for eight countries and design a theoretically sound model to test and evaluate competing hypotheses incorporating the most recent theoretical developments. Additional chapters address an impressive variety of issues that complement and corroborate the core of the study. They answer such questions as these: Can countries conduct an independent monetary policy under fixed exchange rates? How closely tied are product prices across countries? How are disturbances transmitted across countries?
The International Transmission of Inflation is an important contribution to international monetary economics in furnishing an invaluable empirical foundation for future investigation and discussion.
Over the last several decades, mathematical models have become central to the study of social evolution, both in biology and the social sciences. But students in these disciplines often seriously lack the tools to understand them. A primer on behavioral modeling that includes both mathematics and evolutionary theory, Mathematical Models of Social Evolution aims to make the student and professional researcher in biology and the social sciences fully conversant in the language of the field.
Teaching biological concepts from which models can be developed, Richard McElreath and Robert Boyd introduce readers to many of the typical mathematical tools that are used to analyze evolutionary models and end each chapter with a set of problems that draw upon these techniques. Mathematical Models of Social Evolution equips behaviorists and evolutionary biologists with the mathematical knowledge to truly understand the models on which their research depends. Ultimately, McElreath and Boyd’s goal is to impart the fundamental concepts that underlie modern biological understandings of the evolution of behavior so that readers will be able to more fully appreciate journal articles and scientific literature, and start building models of their own.
This powerful new theoretical approach to analyzing urban housing problems and the policies designed to rectify them will be a vital resource for urban planners, developers, policymakers, and economists. The search for the roots of serious urban housing problems such as homelessness, abandonment, rent burdens, slums, and gentrification has traditionally focused on the poorest sector of the housing market. The findings set forth in this volume show that the roots of such problems lie in the relationships among different parts of the market—not solely within the lower-quality portion—though that is where problems are most dramatically manifested and housing reforms are myopically focused.
The authors propose a new understanding of the market structure characterized by a closely interrelated array of quality submarkets. Their comprehensive models ground a unified theory that accounts for demand by both renters and owner occupants, supply by owners of existing dwellings, changes in the stock of housing due to conversions and new construction, and interactions across submarkets.
Sharon E. Kingsland University of Chicago Press, 1995 Library of Congress QH352.K56 1995 | Dewey Decimal 574.5248
The first history of population ecology traces two generations of science and scientists from the opening of the twentieth century through 1970. Kingsland chronicles the careers of key figures and the field's theoretical, empirical, and institutional development, with special attention to tensions between the descriptive studies of field biologists and later mathematical models. This second edition includes a new afterword that brings the book up to date, with special attention to the rise of "the new natural history" and debates about ecology's future as a large-scale scientific enterprise.
Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most existing techniques are based on least-squares minimisation of the error between the model response and the actual system response. However, with the proliferation of high-speed digital computers, elegant and innovative techniques like the filter error method, genetic algorithms and artificial neural networks are finding more and more use in parameter estimation problems. Modelling and Parameter Estimation of Dynamic Systems presents a detailed examination of many estimation techniques and modelling problems.
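The least-squares idea described above can be sketched in a few lines. The model below is a hypothetical first-order discrete-time system chosen purely for illustration; the true parameter values, input sequence, and noise level are all assumptions, not material from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5          # "unknown" system parameters to recover

# Simulate the actual system response y[k+1] = a*y[k] + b*u[k] + noise
# for a random input sequence u.
u = rng.standard_normal(200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Stack the regressors and solve min || y[1:] - Phi @ theta ||^2
# in closed form via linear least squares.
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)                        # estimates close to (0.8, 0.5)
```

With low measurement noise and a persistently exciting input, the least-squares estimates land very close to the true parameters; the more elaborate methods the book surveys address cases where noise enters the dynamics themselves or the model is nonlinear.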
For any organisation to be successful in an increasingly competitive and global working environment, it is essential that there is a clear understanding of all aspects of the business. Given that no two organisations are exactly alike, there is no definitive understanding of exactly what these aspects are as they will depend on the organisation's nature, size and so on. Some of the aspects of the business that must be considered include: process models, process descriptions, competencies, standards, methodologies, infrastructure, people and business goals.
It is important that these different aspects of the business are not only understood, but also that they are consistent and congruent with one another. The creation of an effective Enterprise Architecture (EA) provides a means by which an organisation can obtain such an understanding.
This book looks at the practical needs of creating and maintaining an effective EA within a twenty-first-century business through the use of pragmatic modelling. The book introduces the concepts behind enterprise architectures, teaches the modelling notation needed to effectively realise an enterprise architecture and explores the concepts more fully through a real-life enterprise architecture.
A New Architecture for the U.S. National Accounts brings together a distinguished group of contributors to initiate the development of a comprehensive and fully integrated set of United States national accounts. The purpose of the new architecture is not only to integrate the existing systems of accounts, but also to identify gaps and inconsistencies and expand and incorporate systems of nonmarket accounts with the core system.
Since the United States economy accounts for almost thirty percent of the world economy, it is not surprising that accounting for this huge and diverse set of economic activities requires a decentralized statistical system. This volume outlines the major assignments among institutions that include the Bureau of Economic Analysis, the Bureau of Labor Statistics, the Department of Labor, the Census Bureau, and the Governors of the Federal Reserve System.
An important part of the motivation for the new architecture is to integrate the different components and make them consistent. This volume is the first step toward achieving that goal.
This book describes the three major power system transients and dynamics simulation tools based on a circuit-theory approach that are widely used all over the world (EMTP-ATP, EMTP-RV and EMTDC/PSCAD), together with other powerful simulation tools such as XTAP.
In the first part of the book, the basics of circuit-theory based simulation tools and of numerical electromagnetic analysis methods are explained, various simulation tools are introduced and the features, strengths and weaknesses are described together with some application examples. In the second part, various transient and dynamic phenomena in power systems are investigated and studied by applying the numerical analysis tools, including: transients in various components related to a renewable system; surges on wind farm and collection systems; protective devices such as fault locators and high-speed switchgear; overvoltages in a power system; dynamic phenomena in FACTS, especially STATCOM (Static Synchronous Compensator); the application of SVC to a cable system; and grounding systems.
Combining underlying theory with real-world examples, this book will be of use to researchers involved in analysis of power systems for development and optimization, and professionals and advanced students working with power systems in general.
This textbook teaches students to create computer codes used to engineer antennas, microwave circuits, and other critical technologies for wireless communications and other applications of electromagnetic fields and waves. Worked code examples are provided for MATLAB technical computing software. It is the only textbook on numerical methods that begins at the undergraduate engineering student level but brings students to the state-of-the-art by the end of the book. It focuses on the most important and popular numerical methods, going into depth with examples and problem sets of escalating complexity. This book requires only one core course of electromagnetics, allowing it to be useful both at the senior and beginning graduate levels. Developing and using numerical methods is a powerful tool for students to learn the principles of intermediate and advanced electromagnetics. This book fills a gap left by current textbooks, which either lack depth on key topics (particularly integral equations and the method of moments) or treat them in a way that is not accessible to students without an advanced theory course. Important topics include: Method of Moments; Finite Difference Time Domain Method; Finite Element Method; Finite Element Method-Boundary Element Method; Numerical Optimization; and Inverse Scattering.
Spurred by the advances in option theory that have been remaking financial and economic scholarship over the past thirty years, a revolution is taking shape in the way legal scholars conceptualize property and the way it is protected by the law. Ian Ayres's Optional Law explores how option theory is overthrowing many accepted wisdoms and producing tangible new tools for courts in deciding cases.
Ayres identifies flaws in the current system and shows how option theory can radically expand and improve the ways that lawmakers structure legal entitlements. An option-based system, Ayres shows, gives parties the option to purchase—or the option to sell—the relevant legal entitlement. Choosing to exercise a legal option forces decisionmakers to reveal information about their own valuation of the entitlement. And, as with auctions, entitlements in option-based law naturally flow to those who value them the most. Seeing legal entitlements through this lens suggests a variety of new entitlement structures from which lawmakers might choose. Optional Law provides a theory for determining which structure is likely to be most effective in harnessing parties' private information.
Proposing a practical approach to the foundational question of how to allocate and protect legal rights, Optional Law will be applauded by legal scholars and professionals who continue to seek new and better ways of fostering both equitable and efficient legal rules.
This book is the first to present the application of parabolic equation methods in electromagnetic wave propagation. These powerful numerical techniques have become the dominant tool for assessing clear-air and terrain effects on radiowave propagation and are growing increasingly popular for solving scattering problems.
The book gives the mathematical background to parabolic equation modelling and describes simple parabolic equation algorithms before progressing to more advanced topics such as domain truncation, the treatment of impedance boundaries and the implementation of very fast hybrid methods combining ray-tracing and parabolic equation techniques. The last three chapters are devoted to scattering problems, with application to propagation in urban environments and to radar cross section computation.
This book will prove useful to scientists and engineers who require accurate assessment of diffraction and ducting on radio and radar systems. Its self-contained approach should also make it particularly suitable for graduate students and other researchers interested in radiowave propagation and scattering.
What determines whether children grow up to be rich or poor? Arguing that parental actions are some of the most important sources of wealth inequality, Casey B. Mulligan investigates the transmission of economic status from one generation to the next by constructing an economic model of parental preferences.
In Mulligan's model, parents determine the degree of their altruistic concern for their children and spend time with and resources on them accordingly—just as they might make choices about how they spend money. Mulligan tests his model against both old and new evidence on the intergenerational transmission of consumption, earnings, and wealth, including models that emphasize "financial constraints." One major prediction of Mulligan's model confirmed by the evidence is that children of wealthy parents typically spend more than they earn.
Mulligan's innovative approach can also help explain other important behavior, such as charitable giving and "corporate loyalty," and will appeal to a wide range of quantitatively oriented social scientists and sociobiologists.
John E. Roemer Harvard University Press, 2001 Library of Congress JF2051.R64 2001 | Dewey Decimal 324.20151
John Roemer presents a unified and rigorous theory of political competition between parties and he models the theory under many specifications, including whether parties are policy oriented or oriented toward winning, whether they are certain or uncertain about voter preferences, and whether the policy space is uni- or multidimensional.
This collection illustrates how nonlinear methods can provide new insight into existing political questions. Politics is often characterized by unexpected consequences, sensitivity to small changes, non-equilibrium dynamics, the emergence of patterns, and sudden changes in outcomes. These are all attributes of nonlinear processes. Bringing together a variety of recent nonlinear modeling approaches, Political Complexity explores what happens when political actors operate in a dynamic and complex social environment.
The contributions to this collection are organized in terms of three branches of nonlinear theory: spatial nonlinearity, temporal nonlinearity, and functional nonlinearity. The chapters advance beyond analogy toward developing rigorous nonlinear models capable of empirical verification.
Contributions to this volume cover the areas of landscape theory, computational modeling, time series analysis, cross-sectional analysis, dynamic game theory, duration models, neural networks, and hidden Markov models. They address such questions as: Is international cooperation necessary for effective economic sanctions? Is it possible to predict alliance configurations in the international system? Is a bureaucratic agency harder to remove as time goes on? Is it possible to predict which international crises will result in war and which will avoid conflict? Is decentralization in a federal system always beneficial?
The contributors are David Bearce, Scott Bennett, Chris Brooks, Daniel Carpenter, Melvin Hinich, Ken Kollman, Susanne Lohmann, Walter Mebane, John Miller, Robert E. Molyneaux, Scott Page, Philip Schrodt, and Langche Zeng.
This book will be of interest to a broad group of political scientists, ranging from those who already employ nonlinear methods to those curious to see what the approach offers. Scholars in other social science disciplines will find the new methodologies insightful for their own substantive work.
Diana Richards is Associate Professor of Political Science, University of Minnesota.
Since the time of Isaac Newton, physicists have used mathematics to describe the behavior of matter of all sizes, from subatomic particles to galaxies. In the past three decades, as advances in molecular biology have produced an avalanche of data, computational and mathematical techniques have also become necessary tools in the arsenal of biologists. But while quantitative approaches are now providing fundamental insights into biological systems, the college curriculum for biologists has not caught up, and most biology majors are never exposed to the computational and probabilistic mathematical approaches that dominate in biological research.
With Quantifying Life, Dmitry A. Kondrashov offers an accessible introduction to the breadth of mathematical modeling used in biology today. Assuming only a foundation in high school mathematics, Quantifying Life takes an innovative computational approach to developing mathematical skills and intuition. Through lessons illustrated with copious examples, mathematical and programming exercises, literature discussion questions, and computational projects of various degrees of difficulty, students build and analyze models based on current research papers and learn to implement them in the R programming language. This interplay of mathematical ideas, systematically developed programming skills, and a broad selection of biological research topics makes Quantifying Life an invaluable guide for seasoned life scientists and the next generation of biologists alike.
Quantifying Systemic Risk
Edited by Joseph G. Haubrich and Andrew W. Lo University of Chicago Press, 2013 Library of Congress HG106.Q36 2012 | Dewey Decimal 338.5
In the aftermath of the recent financial crisis, the federal government has pursued significant regulatory reforms, including proposals to measure and monitor systemic risk. However, there is much debate about how this might be accomplished quantitatively and objectively—or whether this is even possible. A key issue is determining the appropriate trade-offs between risk and reward from a policy and social welfare perspective given the potential negative impact of crises.
One of the first books to address the challenges of measuring statistical risk from a system-wide perspective, Quantifying Systemic Risk looks at the means of measuring systemic risk and explores alternative approaches. Among the topics discussed are the challenges of tying regulations to specific quantitative measures, the effects of learning and adaptation on the evolution of the market, and the distinction between the shocks that start a crisis and the mechanisms that enable it to grow.
Rational Expectations and Econometric Practice was first published in 1981. Minnesota Archive Editions uses digital technology to make long-unavailable books once again accessible; these editions are published unaltered from the original University of Minnesota Press editions.
Assumptions about how people form expectations for the future shape the properties of any dynamic economic model. To make economic decisions in an uncertain environment people must forecast such variables as future rates of inflation, tax rates, government subsidy schemes and regulations. The doctrine of rational expectations uses standard economic methods to explain how those expectations are formed.
This work collects the papers that have made significant contributions to formulating the idea of rational expectations. Most of the papers deal with the connections between observed economic behavior and the evaluation of alternative economic policies.
Robert E. Lucas, Jr., is professor of economics at the University of Chicago. Thomas J. Sargent is professor of economics at the University of Minnesota and adviser to the Federal Reserve Bank of Minneapolis.
Macroeconomics is in disarray. No one approach is dominant, and an increasing divide between theory and empirics is evident.
This book presents both a critique of mainstream macroeconomics from a structuralist perspective and an exposition of modern structuralist approaches. The fundamental assumption of structuralism is that it is impossible to understand a macroeconomy without understanding its major institutions and distributive relationships across productive sectors and social groups.
Lance Taylor focuses his critique on mainstream monetarist, new classical, new Keynesian, and growth models. He examines them from a historical perspective, tracing monetarism from its eighteenth-century roots and comparing current monetarist and new classical models with those of the post-Wicksellian, pre-Keynesian generation of macroeconomists. He contrasts the new Keynesian vision with Keynes's General Theory, and analyzes contemporary growth theories against long traditions of thought about economic development and structural change.
Table of Contents:
1. Social Accounts and Social Relations 1. A Simple Social Accounting Matrix 2. Implications of the Accounts 3. Disaggregating Effective Demand 4. A More Realistic SAM 5. Stock-Flow Relationships 6. A SAM and Asset Accounts for the United States 7. Further Thoughts
2. Prices and Distribution 1. Classical Macroeconomics 2. Classical Theories of Price and Distribution 3. Neoclassical Cost-Based Prices 4. Hat Calculus, Measuring Productivity Growth, and Full Employment Equilibrium 5. Mark-up Pricing in the Product Market 6. Efficiency Wages for Labor 7. New Keynesian Crosses and Methodological Reservations 8. First Looks at Inflation
3. Money, Interest, and Inflation 1. Money and Credit 2. Diverse Interest Theories 3. Interest Rate Cost-Push 4. Real Interest Rate Theory 5. The Ramsey Model 6. Dynamics on a Flying Trapeze 7. The Overlapping Generations Growth Model 8. Wicksell's Cumulative Process Inflation Model 9. More on Inflation Taxes
4. Effective Demand and Its Real and Financial Implications 1. The Commodity Market 2. Macro Adjustment via Forced Saving and Real Balance Effects 3. Real Balances, Input Substitution, and Money Wage Cuts 4. Liquidity Preference and Marginal Efficiency of Capital 5. Liquidity Preference, Fisher Arbitrage, and the Liquidity Trap 6. The System as a Whole 7. The IS/LM Model 8. Keynes and Friends on Financial Markets 9. Financial Markets and Investment 10. Consumption and Saving 11. "Disequilibrium" Macroeconomics 12. A Structuralist Synopsis
5. Short-Term Model Closure and Long-Term Growth 1. Model "Closures" in the Short Run 2. Graphical Representations and Supply-Driven Growth 3. Harrod, Robinson, and Related Stories 4. More Stable Demand-Determined Growth
6. Chicago Monetarism, New Classical Macroeconomics, and Mainstream Finance 1. Methodological Caveats 2. A Chicago Monetarist Model 3. A Cleaner Version of Monetarism 4. New Classical Spins 5. Dynamics of Government Debt 6. Ricardian Equivalence 7. The Business Cycle Conundrum 8. Cycles from the Supply Side 9. Optimal Behavior under Risk 10. Random Walk, Equity Premium, and the Modigliani-Miller Theorem 11. More on Modigliani-Miller 12. The Calculation Debate and Super-Rational Economics
7. Effective Demand and the Distributive Curve 1. Initial Observations 2. Inflation, Productivity Growth, and Distribution 3. Absorbing Productivity Growth 4. Effects of Expansionary Policy 5. Financial Extensions 6. Dynamics of the System 7. Comparative Dynamics 8. Open Economy Complications
8. Structuralist Finance and Money 1. Banking History and Institutions 2. Endogenous Finance 3. Endogenous Money via Bank Lending 4. Money Market Funds and the Level of Interest Rates 5. Business Debt and Growth in a Post-Keynesian World 6. New Keynesian Approaches to Financial Markets
9. A Genus of Cycles 1. Goodwin's Model 2. A Structuralist Goodwin Model 3. Evidence for the United States 4. A Contractionary Devaluation Cycle 5. An Inflation Expectations Cycle 6. Confidence and Multiplier 7. Minsky on Financial Cycles 8. Excess Capacity, Corporate Debt Burden, and a Cold Douche 9. Final Thoughts
10. Exchange Rate Complications 1. Accounting Conundrums 2. Determining Exchange Rates 3. Asset Prices, Expectations, and Exchange Rates 4. Commodity Arbitrage and Purchasing Power Parity 5. Portfolio Balance 6. Mundell-Fleming 7. IS/LM Comparative Statics 8. UIP and Dynamics 9. Open Economy Monetarism 10. Dornbusch 11. Other Theories of the Exchange Rate 12. A Developing Country Debt Cycle 13. Fencing in the Beast
11. Growth and Development Theories 1. New Growth Theories and Say's Law 2. Distribution and Growth 3. Models with Binding Resource or Sectoral Supply Constraints 4. Accounting for Growth 5. Other Perspectives 6. The Mainstream Policy Response 7. Where Theory Might Sensibly Go
Reconstructing Macroeconomics is a stunning intellectual achievement. It surveys an astonishing range of macroeconomic problems and approaches in a compact, coherent critical framework with unfailing depth, wit, and subtlety. Lance Taylor's pathbreaking work in structural macroeconomics and econometrics sets challenging standards of rigor, realism, and insight for the field. Taylor shows why the structuralist and Keynesian insistence on putting accounting consistency, income distribution, and aggregate demand at the center of macroeconomic analysis is indispensable to understanding real-world macroeconomic events in both developing and developed economies. The book is full of new results, modeling techniques, and shrewd suggestions for further research. Taylor's scrupulous and balanced appraisal of the whole range of macroeconomic schools of thought will be a source of new perspectives to macroeconomists of every persuasion. --Duncan K. Foley, New School University
Lance Taylor has produced a masterful and comprehensive critical survey of existing macro models, both mainstream and structuralist, which breaks considerable new ground. The pace is brisk, the level is high, and the writing is entertaining. The author's sense of humor and literary references enliven the discussion of otherwise arcane and technical, but extremely important, issues in macro theory. This book is sure to become a standard reference that future generations of macroeconomists will refer to for decades to come. --Robert Blecker, American University
While there are other books dealing with heterodox macroeconomics, this book surpasses them all in the quality of its presentation and in the careful treatment and criticism of orthodox macroeconomics including its recent contributions. The book is unique in the way it systematically covers heterodox growth theory and its relations to other aspects of heterodox macroeconomics using a common organizing framework in terms of accounting relations, and in the way it compares the theories with mainstream contributions. Another positive and novel feature of the book is that it takes a long view of the development of economic ideas, which leads to a more accurate appreciation of the real contributions by recent theoretical developments than is possible in a presentation that ignores the history of macroeconomics. --Amitava Dutt, University of Notre Dame
This book is an introduction to microwave and RF signal modeling and measurement techniques for field effect transistors. It assumes only a basic course in electronic circuits as prerequisite knowledge, and equips readers to apply the techniques, improve the performance of integrated circuits, reduce design cycles, and increase their chances of first-time success. The first chapters offer a general overview and discussion of microwave signal and noise matrices and microwave measurement techniques. The following chapters address modeling techniques for field effect transistors, covering small-signal, large-signal, noise, and artificial-neural-network-based models.
This book is a systematic and detailed exposition of the analytical techniques used to study two canonical problems: wave scattering by wedges and by cones with impedance boundary conditions. It is the first reference on novel, highly efficient analytical-numerical approaches to wave diffraction by impedance wedges or cones. The text includes calculations of the diffraction or excitation coefficients, including their uniform versions, for the waves diffracted from the edge of the wedge or the vertex of the cone; a study of the far-field behavior in diffraction by impedance wedges or cones, covering reflected waves, space waves from the singular points of the boundary (edges or tips), and surface waves; and a discussion of the applicability of the reported solution procedures and formulae to existing software packages designed for solving real-world high-frequency problems encountered in antenna, wave propagation, and radar cross-section work. This book is for researchers in wave phenomena physics; radio, optics and acoustics engineers; applied mathematicians; and specialists in mathematical physics and in the quantum scattering of many particles.
J. David Singer's legendary Correlates of War project represented the first comprehensive effort by political scientists to gather and analyze empirical data about the causes of war. In doing so, Singer and his colleagues transformed the face of twentieth-century political science. Their work provoked some of the most important debates in modern international relations -- about the rules governing territory, international intervention, and the so-called "democratic peace."
Editor Paul F. Diehl has now convened some of the world's foremost international conflict analysis specialists to reassess COW's contribution to our understanding of global conflict. Each chapter takes one of COW's pathbreaking ideas and reevaluates it in light of subsequent world events and developments in the field. The result is a critical retrospective that will reintroduce Singer's important and still-provocative findings to a new generation of students and specialists.
Paul F. Diehl is Professor of Political Science and University Distinguished Scholar at the University of Illinois, Urbana-Champaign.
Sea Clutter: Scattering, the K Distribution and Radar Performance, 2nd Edition gives an authoritative account of our current understanding of radar sea clutter. Topics covered include the characteristics of radar sea clutter, modelling radar scattering by the ocean surface, statistical models of sea clutter, the simulation of clutter and other random processes, detection of small targets in sea clutter, imaging ocean surface features, radar detection performance calculations, CFAR detection, and the specification and measurement of radar performance. The calculation of the performance of practical radar systems is presented in sufficient detail for the reader to be able to tackle related problems with confidence. In this second edition the contents have been fully updated and reorganised to give better access to the different types of material in the book. Extensive new material has been added on the Doppler characteristics of sea clutter and detection processing; bistatic sea clutter measurements; electromagnetic scattering theory of littoral sea clutter and bistatic sea clutter; the use of models for predicting radar performance; and use of the K distribution in other fields.
This book provides an authoritative account of the current understanding of radar sea clutter, describing its phenomenology, EM scattering and statistical modelling and simulation, and their use in the design of detection systems and the calculation and practical evaluation of radar performance.
The book pays particular attention to the compound K distribution model developed by the authors during the past 20 years. The evidence for this model, its mathematical formulation and development and practical application to the specification, design and evaluation of radar systems are all discussed. In addition, the book sets the previously empirical development of the K distribution model in the wider context of recent advances in the calculation of low grazing angle electromagnetic scattering and oceanographic modelling of the statistics of the sea surface.
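The compound formulation described above lends itself to a very compact simulation: a slowly varying gamma-distributed local power (the "texture") modulates fast Rayleigh speckle. The sketch below is a generic illustration of that compound model, not code from the book; the shape parameter `nu` and the unit-power normalisation are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_clutter(n, nu, mean_power=1.0):
    """Simulate K-distributed amplitudes via the compound model:
    gamma-distributed local power modulating Rayleigh speckle."""
    # Slow texture: gamma with shape nu, mean equal to mean_power.
    texture = rng.gamma(shape=nu, scale=mean_power / nu, size=n)
    # Fast speckle: Rayleigh with unit mean-square amplitude.
    speckle = rng.rayleigh(scale=np.sqrt(0.5), size=n)
    return np.sqrt(texture) * speckle

# Spiky clutter (small nu departs strongly from Rayleigh statistics).
x = k_clutter(100_000, nu=0.5)
```

Small values of `nu` give the long-tailed, spiky returns typical of high-resolution low-grazing-angle clutter; as `nu` grows large the texture becomes constant and the amplitudes revert to Rayleigh.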
The authors discuss in detail the prediction of the performance of specified radar systems; at the same time, their presentation of the underlying physical principles and analytic and computational techniques employed in these calculations is sufficiently comprehensive for the reader to be well equipped to tackle related problems with confidence.
These features, and appendices reviewing pertinent mathematical background material and the calculation of low grazing angle scattering by corrugated surfaces, make this book invaluable to specialist radar engineers and academic researchers, while being of considerable interest to the wider applied physics and mathematics communities.
J. Schlabbach The Institution of Engineering and Technology, 2005 Library of Congress TK3226.S376 2005 | Dewey Decimal 621.319
The calculation of short-circuit currents is a central task for power system engineers, since short-circuit currents are essential parameters in the design of electrical equipment and installations, the operation of power systems, and the analysis of outages and faults.
Short-circuit Currents gives an overview of the components within power systems with respect to the parameters needed for short-circuit current calculation. It also explains how to use the system of symmetrical components to analyse different types of short-circuits in power systems. The thermal and electromagnetic effects of short-circuit currents on equipment and installations, short-time interference problems and measures for the limitation of short-circuit currents are also discussed. Detailed calculation procedures and typical data of equipment are provided in a separate chapter for easy reference, and worked examples are included throughout.
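The system of symmetrical components mentioned above reduces an unbalanced three-phase fault to three decoupled single-phase networks. As a generic illustration of the transformation itself (not code taken from the book), Fortescue's decomposition can be written in a few lines:

```python
import numpy as np

# 120-degree rotation operator used throughout symmetrical-component analysis.
a = np.exp(2j * np.pi / 3)

# Phase quantities in terms of zero-, positive- and negative-sequence
# components: [Va, Vb, Vc] = A @ [V0, V1, V2].
A = np.array([[1, 1,    1],
              [1, a**2, a],
              [1, a,    a**2]])

def to_sequence(phases):
    """Invert the transformation: phase quantities -> sequence components."""
    return np.linalg.solve(A, np.asarray(phases, dtype=complex))

# A balanced three-phase set maps onto the positive sequence alone.
v0, v1, v2 = to_sequence([1.0, a**2, a])
```

For a balanced set the zero- and negative-sequence components vanish, which is why only unbalanced faults (line-to-ground, line-to-line) excite all three sequence networks.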
Many realistic engineering systems are large in dimension and stiff for computation. Their analysis and control require extensive numerical algorithms. The methodology of singular perturbations and time scales (SPTS), crowned with the remedial features of order reduction and stiffness relief, is a powerful technique to achieve computational simplicity.
This book presents the twin topics of singular perturbation methods and time scale analysis to problems in systems and control. The heart of the book is the singularly perturbed optimal control systems, which are notorious for demanding excessive computational costs.
The book addresses both continuous control systems (described by differential equations) and discrete control systems (characterised by difference equations). Another feature is the extensive bibliography, which will hopefully be of great help for future study and research. Also of particular interest is the categorisation of an impressive record of applications of the methodology of SPTS in a wide spectrum of fields, such as circuits and networks, fluid mechanics and flight mechanics, biology and ecology, and robotics.
This book is aimed at graduate students, applied mathematicians, scientists and engineers working in universities and industry.
Social Indicator Models
Kenneth C. Land Russell Sage Foundation, 1975 Library of Congress HN25.S58 | Dewey Decimal 309.0184
Deals in comprehensive fashion with a diverse array of objective and subjective social indicators and shows how these indicators can be used, potentially, to inform and perhaps guide social policy. Written with clarity and authority, it will be of paramount interest to those concerned with the interpretation and analysis of social indicators and to those interested in their use. For the former, it serves as an illuminating introduction to some of the analytical tasks that lie ahead in the study of social indicators. For the latter, it provides a solid foundation upon which future policy analysis may be based.
In The Social Life of Financial Derivatives Edward LiPuma theorizes the profound social dimensions of derivatives markets and the processes, rituals, and belief systems that drive them. In response to the 2008 financial crisis and drawing on his experience trading derivatives, LiPuma outlines how they function as complex devices that organize speculative capital as well as the ways derivative-driven capitalism not only produces the conditions for its own existence, but also penetrates the fabric of everyday life. Framing finance as a form of social life and highlighting the intrinsically social character of financial derivatives, LiPuma deepens our understanding of derivatives so that we may someday use them to serve the public well-being.
The revolution in social scientific theory and practice known as nonlinear dynamics, chaos, or complexity, derived from recent advances in the physical, biological, and cognitive sciences, is now culminating with the widespread use of tools and concepts such as praxis, fuzzy logic, artificial intelligence, and parallel processing. By tracing a number of conceptual threads from mathematics, economics, cybernetics, and various other applied systems theoretics, this book offers a historical framework for how these ideas are transforming the social sciences. Daneke goes on to address a variety of persistent philosophical issues surrounding this paradigm shift, ranging from the nature of human rationality to free will. Finally, he describes this shift as a path for revitalizing the social sciences just when they will be most needed to address the human condition in the new millennium.
Systemic Choices describes how praxis and other complex systems tools can be applied to a number of pressing policy and management problems. For example, simulations can be used to grow a number of robust hybrid industrial and/or technological strategies between cooperation and competition. Likewise, elements of international agreements could be tested for sustainability under adaptively evolving institutional designs. Other concrete applications include strategic management, total quality management, and operational analyses.
This exploration of a wide range of technical tools and concepts will interest economists, political scientists, sociologists, psychologists, and those in the management disciplines such as strategy, organizational behavior, finance, and operations.
Gregory A. Daneke is Professor of Technology Management, Arizona State University, and of Human and Organization Development, The Fielding Institute.
The cross-section method is an analytical tool used in the design of components required for low-loss, highly efficient transmission of electromagnetic waves in nonuniform waveguides. When the waveguide dimensions are large compared with the wavelength, a fully three-dimensional analysis employing modern numerical methods based on finite element, finite difference, finite integration or transmission line matrix formalisms is practically impossible and the cross-section method is the only feasible analysis technique.
The method is not limited to oversized tubular metallic waveguides: it is employed intensively in areas such as fibre-optic communications, antenna synthesis, natural waveguides (submarine, tropospheric and seismic), microwave radio links (Earth or space) and the design of absorbing surfaces, and it may also be applied to many acoustic problems. The application of the method in special cases such as cut-off and resonant frequencies is covered, as well as the design of oversized waveguide components such as tapers, bends, polarisers and mode converters. Many useful formulas are given for the practical layout of such transmission line components. The use of computers in the application of the method and problems related to numerical analysis are also covered.
Emerging from the tradition of Marshall, Knight, Keynes, and Shackle, Time, Ignorance, and Uncertainty in Economic Models is concerned with the character of formal economic analysis when the notions of logical or mechanical time and of probabilistic uncertainty, with the relatively complete knowledge basis the latter requires, are replaced, respectively, by historical time and by nonprobabilistic uncertainty and ignorance. By constructing and exploring particular models, this book emphasizes doing actual economic analysis in a framework of historical time, nonprobabilistic uncertainty, and ignorance.
Donald W. Katzner begins with an extensive investigation of the distinction between potential surprise and probability. He presents a modified version of Shackle's model of decision-making in ignorance and examines in considerable detail its "comparative statics" and operationality properties. The meaning of aggregation and simultaneity under these conditions is also explored, and Shackle's model is applied to the construction of models of the consumer, the firm, microeconomics, and macroeconomics. Katzner concludes with discussions of the roles of history, hysteresis, and empirical investigation in economic inquiry.
Time, Ignorance, and Uncertainty in Economic Models will be of interest to economists and others engaged in the study of uncertainty, probability, aggregation, and simultaneity. Those interested in the microeconomics of consumer and firm behavior, general equilibrium, and macroeconomics will also benefit from this book.
Donald W. Katzner is Professor of Economics, University of Massachusetts.
How much should citizens invest in promoting health, and how should resources be allocated to cover the costs? A major contribution to economic approaches to the value of health, this volume brings together classic and up-to-date research by economists and public health experts on theories and measurements of health values, providing useful information for shaping public policy.
Weibull Radar Clutter
Matsuo Sekine The Institution of Engineering and Technology, 1990 Library of Congress TK6580.S395 1990 | Dewey Decimal 621.3848
The material presented in this book is intended to provide the reader with a practical treatment of the Weibull distribution as applied to radar systems.
Topics include the general derivation of the Weibull distribution, measurements of Weibull-distributed clutter, comparison of the Weibull distribution with various other distributions including the Rayleigh, gamma, log-normal and K-distributions, constant false alarm rate (CFAR) detectors for Weibull clutter, non-parametric CFAR detectors, and signal detection in the time and frequency domains. In particular, the Akaike Information Criterion (AIC), which provides a rigorous mathematical fit of a hypothesized distribution to the data, is emphasised.
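As a generic illustration of this kind of distribution fitting (the code below is not from the book; the fixed-point iteration, sample size and seed are illustrative choices), the Weibull shape and scale can be estimated by maximum likelihood and the fit scored with the AIC:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_weibull(x, iters=200):
    """Maximum-likelihood Weibull fit: damped fixed-point iteration for
    the shape k, then the closed-form scale given k."""
    logx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x**k
        k_new = 1.0 / (np.sum(xk * logx) / np.sum(xk) - logx.mean())
        k = 0.5 * (k + k_new)  # damping for robust convergence
    lam = np.mean(x**k) ** (1.0 / k)
    return k, lam

def weibull_aic(x, k, lam):
    """AIC = 2p - 2*log-likelihood, with p = 2 fitted parameters."""
    n = len(x)
    ll = n * np.log(k / lam) + (k - 1) * np.sum(np.log(x / lam)) \
         - np.sum((x / lam) ** k)
    return 2 * 2 - 2 * ll

# Rayleigh-distributed amplitudes are a Weibull special case (shape 2),
# so the fit should recover a shape near 2.
x = rng.rayleigh(scale=1.0, size=5000)
k, lam = fit_weibull(x)
aic = weibull_aic(x, k, lam)
```

Repeating the AIC calculation for rival candidates (Rayleigh, log-normal, gamma) and picking the smallest value is the model-selection step the blurb alludes to.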
This book is written primarily for radar engineers. It is hoped that it will also be of value to teachers and graduate students and of interest to all who are working with the Weibull distribution in various fields.