This book describes some of the developments in Command, Control and Communication (C3) systems. The topics cover the design of large real-time man-machine systems, which is now a vital area of intensive scientific and financial investment. C3 systems are used for complex resource management and planning, and although this has a predominantly military connotation, similar systems are now developing in civil sector applications, public utilities and banking.
Topics discussed include the design and structure of C3 systems, databases, standards, the man-machine interface, and advanced processing, including sensor data fusion and artificial intelligence. It is the multifaceted nature of C3 that this book seeks to capture. The subject is too vast to survey comprehensively, but this text offers the reader valuable insight into a critically important aspect of modern technology.
Computer games have become ubiquitous in today’s society. Many scholars have speculated on the reasons for their massive success. Yet we haven’t considered the most basic questions: Why do computer games exist? What specific circumstances led to the creation of this entirely new type of game? What sorts of knowledge facilitated the requisite technological and institutional transformations?
With Computer Game Worlds, Claus Pias sets out to answer these questions. Tracing computer games from their earliest forms to the unstoppable commercial and cultural phenomena they have become today, Pias then provides a careful epistemological reconstruction of the process of playing games, both at computers and by computers themselves. The book makes a valuable theoretical contribution to the ongoing discussion about computer games.
Robust control theory allows a system to tolerate change and uncertainty whilst maintaining stability and performance. Applications of this technique are very important for dependable embedded systems, making technologies such as drones and other autonomous systems with sophisticated embedded controllers relatively commonplace.
The aim of this book is to present the theoretical and practical aspects of embedded robust control design and implementation with the aid of MATLAB® and SIMULINK®. It covers methods suitable for practical implementations, combining knowledge from control system design and computer engineering to describe the entire design cycle. Three extended case studies are developed in depth: embedded control of a tank physical model; robust control of a miniature helicopter; and robust control of two-wheeled robots.
These are taken from the area of motion control, but the book may also be used by designers in other areas. Some knowledge of linear control theory is assumed, and knowledge of C programming is desirable; however, to make the book accessible to engineers new to the field and to students, the authors avoid complicated mathematical proofs and overwhelming computer architecture detail. All programs used in the examples and case studies are freely downloadable to help with the assimilation of the book's contents.
The ascendance of communication technologies such as the internet has accentuated the need to improve access, manipulation and translation of written language. One of the main goals of researchers in the field of computational linguistics is to create programs that put to use knowledge of human language in pursuit of technology that can overcome the many obstacles in the interaction between human and computer. In this endeavor, finding automated techniques to parse the complexities of human grammar is a premier problem tackled by human-interface researchers. The intricacy of human grammar poses problems not only of accuracy, but also of efficiency.
This book investigates programs for the automatic analysis and production of written human language. These specialized programs use knowledge about the structure and meaning of human language in the form of grammars. Various techniques are proposed that focus on solutions to practical problems in the processing of constraint-logic grammars. The solutions are all based on the automatic adaptation or compilation of a grammar rather than a modification of the processing algorithm used. As such, they allow the grammar writer to abstract over details of grammar processing and in many cases enable more efficient processing.
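As a rough illustration of grammar-driven language processing (an editorial sketch, not code or a grammar from the book - real constraint-logic grammars are far richer than a context-free toy), a grammar represented as data and a simple recogniser might look like this:

```python
# Toy context-free grammar and recogniser (illustrative only; the grammar
# and sentences are invented, not taken from the book).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["sleeps"]],
}

def parse(symbol, words, i):
    """Yield every position reachable after deriving `symbol` from words[i:]."""
    if symbol not in GRAMMAR:                      # terminal symbol
        if i < len(words) and words[i] == symbol:
            yield i + 1
        return
    for production in GRAMMAR[symbol]:
        positions = [i]
        for sym in production:                     # thread positions through the production
            positions = [k for j in positions for k in parse(sym, words, j)]
        yield from positions

def recognise(sentence):
    """True if the whole sentence derives from the start symbol S."""
    words = sentence.split()
    return len(words) in parse("S", words, 0)

print(recognise("the dog sees the cat"))   # True
print(recognise("dog the sees"))           # False
```

Because the grammar is data rather than code, it can itself be adapted or compiled - the kind of transformation the book studies.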
Software has become a key component of contemporary life and algorithmic techniques that rank, classify, or recommend anything that fits into digital form are everywhere. This book approaches the field of information ordering conceptually as well as historically. Building on the philosophy of Gilbert Simondon and the cultural techniques tradition, it first examines the constructive and cumulative character of software and shows how software-making constantly draws on large reservoirs of existing knowledge and techniques. It then reconstructs the historical trajectories of a series of algorithmic techniques that have indeed become the building blocks for contemporary practices of ordering. Developed in opposition to centuries of library tradition, coordinate indexing, text processing, machine learning, and network algorithms instantiate dynamic, perspectivist, and interested forms of arranging information, ideas, or people. Embedded in technical infrastructures and economic logics, these techniques have become engines of order that transform the spaces they act upon.
The practice of Model-based Systems Engineering is becoming more widely adopted in industry, academia and commerce and, as the use of modelling matures in the real world, so the need for more guidance on how to model effectively and efficiently becomes more prominent. This book describes a number of systems-level 'patterns' (pre-defined, reusable sets of views) that may be applied using the systems modelling language SysML for the development of any number of different applications and as the foundations for a system model.
Topics covered include: what is a pattern?; the interface definition pattern; traceability pattern; test pattern; epoch pattern; life cycle pattern; evidence pattern; description pattern; context pattern; analysis pattern; model maturity pattern; requirements modelling; expanded requirements modelling; process modelling; competence modelling; life cycle modelling; defining patterns; and using patterns for model assessment, model definition and model retro-fitting.
This book forms a companion volume to both SysML for Systems Engineering - a model-based approach and Model-based Requirements Engineering, both published by the IET. Whereas the previous volumes presented the case for modelling and provided an in-depth overview of SysML, this book focusses on a set of 'patterns' as the basis of an MBSE model and their use in today's systems engineering community.
Modern electrical power systems are facing complex challenges, arising from distributed generation and intermittent renewable energy. Fuzzy logic is one approach to meeting this challenge and providing reliability and power quality.
The book is about fuzzy logic control and its applications in managing, controlling and operating electrical energy systems. It provides a comprehensive overview of fuzzy logic concepts and techniques required for designing fuzzy logic controllers, and then discusses several applications to control and management in energy systems. The book incorporates a novel fuzzy logic controller design approach in both Matlab® and in Matlab Simulink® so that the user can study every step of the fuzzy logic processor, with the ability to modify the code.
Fuzzy Logic Control in Energy Systems is an important read for researchers and practicing engineers in energy engineering and control, as well as advanced students involved with power system research and operation.
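As a rough illustration of the fuzzy logic controller concept the book develops in Matlab®, here is a minimal sketch in Python (the membership functions, rule set and scaling are invented for illustration, not taken from the book):

```python
# Minimal fuzzy logic controller sketch (membership functions, rules and
# scaling are invented for illustration).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    """Map a normalised error in [-1, 1] to a control action using three
    rules: negative error -> decrease, zero -> hold, positive -> increase."""
    # Fuzzification: degree of membership of the error in each fuzzy set
    neg  = tri(error, -2.0, -1.0, 0.0)
    zero = tri(error, -1.0,  0.0, 1.0)
    pos  = tri(error,  0.0,  1.0, 2.0)
    # Defuzzification: weighted average of singleton rule outputs
    weights = [neg, zero, pos]
    outputs = [-1.0, 0.0, 1.0]                 # decrease, hold, increase
    total = sum(weights)
    return sum(w * u for w, u in zip(weights, outputs)) / total if total else 0.0

print(fuzzy_control(0.5))    # 0.5: halfway between "hold" and "increase"
```

The appeal of the approach is visible even at this scale: the control law is expressed as human-readable rules rather than an explicit mathematical model of the plant.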
Grammatical Framework is a programming language designed for writing grammars, which has the capability of addressing several languages in parallel. This thorough introduction demonstrates how to write grammars in Grammatical Framework and use them in applications such as tourist phrasebooks, spoken dialogue systems, and natural language interfaces. The examples and exercises presented here address several languages, and the readers are shown how to look at their own languages from the computational perspective.
A Guide to MATLAB® Object-Oriented Programming is the first book to deliver broad coverage of the documented and undocumented object-oriented features of MATLAB®. Unlike the typical approach of other resources, this guide explains why each feature is important, demonstrates how each feature is used, and promotes an understanding of the interactions between features.
Assuming an intermediate level of MATLAB programming knowledge, the book not only concentrates on MATLAB coding techniques but also discusses topics critical to general software development. It introduces fundamentals first before integrating these concepts into example applications. In the first section, the book discusses eight basic functions: constructor, subsref, subsasgn, display, struct, fieldnames, get, and set. Building on the previous section, it explores inheritance topics and presents the Class Wizard, a powerful MATLAB class generation tool. The final section delves into advanced strategies, including containers, static variables, and function fronts.
With more than 20 years of experience designing and implementing object-oriented software, the expert author has developed an accessible and comprehensive book that aids readers in creating effective object-oriented software using MATLAB.
There is a growing interest in the development and deployment of surveillance systems in public and private locations. Conventional approaches rely on the installation of wide-area CCTV (Closed Circuit Television), but the explosion in the number of cameras that have to be monitored, the increasing cost of providing monitoring personnel and the limited ability of humans to maintain sustained levels of concentration severely limit the effectiveness of these systems. Advances in information and communication technologies, such as computer vision for face recognition and human behaviour analysis, digital annotation and storage of video, and transmission of video/audio streams over wired and wireless networks, can potentially provide significant improvements in this field.
The book consists of a coherent selection of extended versions of presentations made at two successful IEE symposia on Intelligent Distributed Surveillance Systems (IDSS). It surveys recent developments in distributed intelligent surveillance systems and brings together the work of researchers and engineers, system integrators and managers of public and private organisations likely to use such systems.
Modern computing systems of all kinds accumulate data at an almost unimaginable rate. Alongside the advances in technology that make such storage possible has grown a realisation that buried within this mass of data there may exist knowledge of considerable value. This could be information critical to a company's business success, or something leading to a scientific or medical discovery or breakthrough. Most data is simply stored and never examined, but machine-learning technology has the potential to extract knowledge of value from it - a process known as data mining.
This book considers knowledge discovery - which has been defined as 'the extraction of implicit, previously unknown and potentially useful information from data' - and data mining. Six chapters examine technical issues of considerable practical importance to the future development of this field; issues such as how to overcome feature interaction problems, analysis of outliers, rule discovery, the use of background knowledge, temporal patterns and online analysis processing. There then follow six chapters which describe applications in fields as diverse as medical and health information, meteorology, organic chemistry and the electricity supply industry.
The book grew from a colloquium held in 1998 by the IEE, co-sponsored by the British Computer Society Specialist Group on Expert Systems (BCS-SGES), the Society for Artificial Intelligence and Simulation of Behaviour (AISB) and the International Society for Artificial Intelligence and Education (AIED). The chapters have been expanded considerably from the papers presented, and all have been fully refereed.
Donald E. Knuth CSLI, 1992 Library of Congress QA76.6.K644 1992 | Dewey Decimal 005.11
This anthology of essays from Donald Knuth, "the father of computer science," and the inventor of literate programming includes early essays on related topics such as structured programming, as well as The Computer Journal article that launched literate programming itself. Many examples are given, including excerpts from the programs for TeX and METAFONT. The final essay is an example of CWEB, a system for literate programming in C and related languages.
This volume is the first in a series of Knuth's collected works.
This book combines the teaching of the MATLAB® programming language with the presentation and development of carefully selected electrical and computer engineering (ECE) fundamentals. This is what distinguishes it from other books concerned with MATLAB®: it is directed specifically to ECE concerns. Students will see, quite explicitly, how and why MATLAB® is well suited to solve practical ECE problems.
This book is intended primarily for the freshman or sophomore ECE major who has no programming experience, no background in EE or CE, and is required to learn MATLAB® programming. It can be used for a course about MATLAB® or an introduction to electrical and computer engineering, where learning MATLAB® programming is strongly emphasized. A first course in calculus, usually taken concurrently, is essential. The book will also serve EE or CE professionals who need to learn MATLAB® and who prefer learning via examples directly relevant to their work.
The distinguishing feature of this MATLAB® book is that about 15 per cent of it develops ECE fundamentals gradually, from very basic principles. Because these fundamentals are interwoven throughout, MATLAB® can be applied to solve relevant, practical problems. The plentiful, in-depth example problems to which MATLAB® is applied were carefully chosen so that results obtained with MATLAB® also provide insights into the fundamentals.
Videogame history is not just a history of one successful technology replacing another. It is also a history of platforms and communities that never quite made it; that struggled to make their voices heard; that agitated against the conventions of the day; and that never enjoyed the commercial success or recognition of their major counterparts. In *Minor Platforms in Videogame History*, Benjamin Nicoll argues that 'minor' videogame histories are anything but insignificant. Through an analysis of transitional, decolonial, imaginary, residual, and minor videogame platforms, Nicoll highlights moments of difference and discontinuity in videogame history. From the domestication of vector graphics in the early years of videogame consoles to the 'cloning' of Japanese computer games in South Korea in the 1980s, this book explores case studies that challenge taken-for-granted approaches to videogames, platforms, and their histories.
Modal Logic and Process Algebra
Edited by Alban Ponse, Maarten de Rijke, and Yde Venema CSLI, 1995 Library of Congress QA267.3.M63 1995 | Dewey Decimal 005.131
Labelled transition systems are mathematical models for dynamic behaviour, or processes, and thus form a research field of common interest to logicians and theoretical computer scientists. In computer science, this notion is a fundamental one in the formal analysis of programming languages, in particular in process theory. In modal logic, transition systems are the central object of study under the name of Kripke models. This volume collects a number of research papers on modal logic and process theory. Its unifying theme is the notion of a bisimulation. Bisimulations are relations over transition systems, and provide a key tool in identifying the processes represented by these structures. The volume offers an up-to-date overview of perspectives on labelled transition systems and bisimulations.
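The notion of bisimulation can be made concrete in a few lines. The following Python fragment computes the largest bisimulation over a small labelled transition system by naive refinement (an editorial illustration; the states and transitions are invented):

```python
# Naive computation of the largest bisimulation over a labelled transition
# system (states and transitions below are invented for illustration).
def bisimilar(states, trans):
    """Return the largest bisimulation as a set of state pairs.
    trans is a set of (source, label, target) triples."""
    labels = {l for (_, l, _) in trans}
    def succs(s, l):
        return {t for (src, lab, t) in trans if src == s and lab == l}
    # Start from the full relation and remove pairs until it stabilises.
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for l in labels:
                # Each l-move of p must be matched by an l-move of q, and vice versa.
                ok = (all(any((pp, qq) in rel for qq in succs(q, l)) for pp in succs(p, l))
                      and all(any((pp, qq) in rel for pp in succs(p, l)) for qq in succs(q, l)))
                if not ok:
                    rel.discard((p, q))
                    changed = True
                    break
    return rel

# Three states that each loop forever on 'a' are pairwise bisimilar,
# even though their transition graphs are drawn differently:
rel = bisimilar({'s', 't', 'u'}, {('s', 'a', 's'), ('t', 'a', 'u'), ('u', 'a', 't')})
print(('s', 't') in rel)   # True
```

Bisimilar states represent the same process: no sequence of observable labelled moves can tell them apart.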
This book provides a hands-on introduction to model-based requirements engineering and management by describing a set of views that form the basis for the approach. These views take into account each individual requirement in terms of its description, but then also provide each requirement with meaning by putting it into the correct 'context'. A requirement that has been put into a context is known as a 'use case' and may be based upon either stakeholders or levels of hierarchy in a system. Each use case must then be analysed and validated by defining a combination of scenarios and formal mathematical and logic-based proofs that provide the rigour required for safety-critical and mission-critical systems.
The book also looks at the crucial question of modelling notations for requirements modelling and includes discussions on the use and application of SysML, text and tabular formats.
Pragmatic issues, such as tailoring the approach for projects ranging from short, non-critical efforts to massive, mission-critical ones, are discussed to show how the techniques introduced in the book can be applied to real-life projects and systems. The use of multiple tools is also discussed, along with examples of how an effective process can lead to realisation with any tool.
For any organisation to be successful in an increasingly competitive and global working environment, it is essential that there is a clear understanding of all aspects of the business. Given that no two organisations are exactly alike, there is no definitive understanding of exactly what these aspects are as they will depend on the organisation's nature, size and so on. Some of the aspects of the business that must be considered include: process models, process descriptions, competencies, standards, methodologies, infrastructure, people and business goals.
It is important that these different aspects of the business are not only understood, but also that they are consistent and congruent with one another. The creation of an effective Enterprise Architecture (EA) provides a means by which an organisation can obtain such an understanding.
This book looks at the practical needs of creating and maintaining an effective EA within a twenty-first-century business through the use of pragmatic modelling. The book introduces the concepts behind enterprise architectures, teaches the modelling notation needed to effectively realise an enterprise architecture and explores the concepts more fully through a real-life enterprise architecture.
Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® provides an introductory course on nonlinear optimization in electrical engineering, with a focus on applications such as the design of electric, microwave, and photonic circuits, wireless communications, and digital filter design.
Basic concepts are introduced using a step-by-step approach and illustrated with MATLAB® codes that the reader can use and adapt. Topics covered include:
classical optimization methods
one dimensional optimization
unconstrained and constrained optimization
space mapping optimization
adjoint variable methods
Nonlinear Optimization in Electrical Engineering with Applications in MATLAB® is essential reading for advanced students in electrical engineering.
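As a rough illustration of one of the topics listed above, one-dimensional optimization, here is a golden-section search sketch in Python (an editorial example; the objective function is invented, and the book's own examples use MATLAB®):

```python
import math

# Golden-section search for one-dimensional minimisation (the objective
# function below is invented for illustration).
def golden_section(f, a, b, tol=1e-6):
    """Minimise a unimodal function f on the interval [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2           # 1/phi, about 0.618
    while b - a > tol:
        c = b - inv_phi * (b - a)              # interior probe points
        d = a + inv_phi * (b - a)
        if f(c) < f(d):
            b = d                              # minimum lies in [a, d]
        else:
            a = c                              # minimum lies in [c, b]
    return (a + b) / 2

x_min = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_min, 4))    # 2.0
```

Each iteration shrinks the search interval by the same golden-ratio factor, so the method needs no derivatives - a property that makes it a standard building block in classical optimization.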
Reflection, the capacity to represent our ideas and to make them the object of our own thoughts, has for many centuries been recognized as a key mark of human intelligence. The very success and extension of reflective ideas in logic and computer science underscore the need for conceptual foundations.
This book proposes a general theory of reflective logics and reflective declarative programming languages. This theory provides a conceptual foundation for judging the extent to which a computational system is reflective. Manuel Clavel presents a proof of the reflective nature of rewriting logic and provides examples of the potential for reflective programming in a number of novel computer applications. These applications are implemented in Maude, a reflective programming language and environment based on rewriting logic that can define, represent and execute a breadth of logics, languages and models of computation. A general method to easily build theorem-proving tools in Maude is also proposed and illustrated. The book goes on to promote the notion of a "universal theory" that can simulate the deductions of all representable theories within any given logic.
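The flavour of rewriting logic, in which computation proceeds by applying rewrite rules to terms, can be sketched briefly. The following Python fragment (an editorial illustration, not Maude code) represents rules as data and normalises Peano-style arithmetic terms, hinting at how a system can manipulate representations of theories as ordinary objects:

```python
# A miniature term rewriting system with rules represented as data
# (illustrative only; this is Python, not Maude, and the rules encode
# Peano-style addition: "z" is zero, ("s", n) is n + 1).
RULES = [
    (("add", "z", "?n"), "?n"),                                 # 0 + n -> n
    (("add", ("s", "?m"), "?n"), ("s", ("add", "?m", "?n"))),   # (m+1) + n -> (m+n) + 1
]

def match(pat, term, env):
    """Match a pattern (variables start with '?') against a term."""
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in env:
            return env if env[pat] == term else None
        return {**env, pat: term}
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pat == term else None

def subst(pat, env):
    """Instantiate a pattern with the bindings in env."""
    if isinstance(pat, str) and pat.startswith("?"):
        return env[pat]
    if isinstance(pat, tuple):
        return tuple(subst(p, env) for p in pat)
    return pat

def normalise(term):
    """Rewrite to normal form: try rules at the top, then inside subterms."""
    for lhs, rhs in RULES:
        env = match(lhs, term, {})
        if env is not None:
            return normalise(subst(rhs, env))
    if isinstance(term, tuple):
        reduced = tuple(normalise(t) for t in term)
        if reduced != term:
            return normalise(reduced)
    return term

# 1 + 1 = 2: add(s(z), s(z)) rewrites to s(s(z))
print(normalise(("add", ("s", "z"), ("s", "z"))))   # ('s', ('s', 'z'))
```

Because the rules are data, a program can inspect, transform, or generate them - the reflective step that a language like Maude supports at the level of whole theories.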
Semi-custom IC Design and VLSI
P.J. Hicks The Institution of Engineering and Technology, 1983 Library of Congress TK7874.S4175 1983 | Dewey Decimal 621.395
The contents of this book were first presented as a series of lectures at the first IEE Vacation School on Semi-Custom IC Design and VLSI held at the University of Edinburgh on 4-8 July 1983. The earlier chapters provide an introduction to silicon IC technology and include descriptions of the various processing techniques employed in the manufacture of microelectronic components. Different types of semi-custom IC are then reviewed and the factors that have to be considered in choosing a semi-custom technique are examined in detail. Logic design is next presented as an activity that is best carried out at a higher level of abstraction than the customary logic-gate level by using the algorithmic state machine (ASM) method. In the sections that follow, computer aids to design and design automation tools are introduced as essential requirements for the rapid and error-free design of semi-custom ICs. Testing strategies and the need to design for testability are also covered in some detail.
Although a heavy emphasis is placed on the design of semi-custom ICs, consideration is also given to the ways in which custom VLSI circuits will be designed in future. The merits of the programmable logic array (PLA) as a VLSI building-block are put forward, and the silicon compiler is presented as possibly the ultimate 'semi-custom' technique.
The authors who have contributed to this volume are specialists in their field who can claim many years of experience either in the microelectronics industry or in universities throughout the UK.
The Success of Open Source
Steven Weber Harvard University Press, 2004 Library of Congress QA76.76.S46W43 2004 | Dewey Decimal 005.3
Much of the innovative programming that powers the Internet, creates operating systems, and produces software is the result of “open source” code, that is, code that is freely distributed—as opposed to being kept secret—by those who write it. Leaving source code open has generated some of the most sophisticated developments in computer technology, including, most notably, Linux and Apache, which pose a significant challenge to Microsoft in the marketplace. As Steven Weber discusses, open source’s success in a highly competitive industry has subverted many assumptions about how businesses are run, and how intellectual products are created and protected.
Traditionally, intellectual property law has allowed companies to control knowledge and has guarded the rights of the innovator, at the expense of industry-wide cooperation. In turn, engineers of new software code are richly rewarded; but, as Weber shows, in spite of the conventional wisdom that innovation is driven by the promise of individual and corporate wealth, ensuring the free distribution of code among computer programmers can empower a more effective process for building intellectual products. In the case of Open Source, independent programmers—sometimes hundreds or thousands of them—make unpaid contributions to software that develops organically, through trial and error.
Weber argues that the success of open source is not a freakish exception to economic principles. The open source community is guided by standards, rules, decision-making procedures, and sanctioning mechanisms. Weber explains the political and economic dynamics of this mysterious but important market development.
SysML for Systems Engineering
Jon Holt The Institution of Engineering and Technology, 2008 Library of Congress MLCM 2018/47409 (T) | Dewey Decimal 620.001171
Systems modelling is an essential enabling technique for any systems engineering enterprise. These modelling techniques, in particular the unified modelling language (UML), have been employed widely in the world of software engineering and very successfully in systems engineering for many years. However, in recent years there has been a perceived need for a tailored version of the UML that meets the needs of today's systems engineering professional. This book provides a pragmatic introduction to the systems engineering modelling language, the SysML, aimed at systems engineering practitioners at any level of ability, ranging from students to experts. The theoretical aspects and syntax of SysML are covered and each concept is explained through a number of example applications. The book also discusses the history of the SysML and shows how it has evolved over a number of years. All aspects of the language are covered and are discussed in an independent and frank manner, based on practical experience of applying the SysML in the real world.
This new edition of this popular text has been fully updated to reflect SysML 1.3, the latest version of the standard, and the discussion has been extended to show the power of SysML as a tool for systems engineering in an MBSE context. Beginning with a thorough introduction to the concepts behind MBSE, and the theoretical aspects and syntax of SysML, the book then describes how to implement SysML and MBSE in an organisation, and how to model real projects effectively and efficiently, illustrated using an extensive case study.
Chris Mitchell The Institution of Engineering and Technology, 2005 Library of Congress QA76.9.A25T746 2005 | Dewey Decimal 005.8
As computers are increasingly embedded, ubiquitous and wirelessly connected, security becomes imperative. This has led to the development of the notion of a 'trusted platform', the chief characteristic of which is the possession of a trusted hardware element which is able to check all or part of the software running on this platform. This enables parties to verify the software environment running on a remote trusted platform, and hence to have some trust that the data sent to that machine will be processed in accordance with agreed rules.
This new text introduces recent technological developments in trusted computing, and surveys the various current approaches to providing trusted platforms. It also includes application examples based on recent and ongoing research. The core of the book is based on an open workshop on Trusted Computing, held at Royal Holloway, University of London, UK.
Trusted Platform Modules (TPMs) are small, inexpensive chips which provide a limited set of security functions. They are most commonly found as a motherboard component on laptops and desktops aimed at the corporate or government markets, but can also be found on many consumer-grade machines and servers, or purchased as independent components. Their role is to serve as a Root of Trust - a highly trusted component from which we can bootstrap trust in other parts of a system. TPMs are most useful for three kinds of tasks: remotely identifying a machine, or machine authentication; providing hardware protection of secrets, or data protection; and providing verifiable evidence about a machine's state, or attestation.
This book describes the primary uses for TPMs, and practical considerations such as when TPMs can and should be used, when they shouldn't be, what advantages they provide, and how to actually make use of them, with use cases and worked examples of how to implement these use cases on a real system. Topics covered include when to use a TPM; TPM concepts and functionality; programming introduction; provisioning: getting the TPM ready to use; first steps: TPM keys; machine authentication; data protection; attestation; other TPM features; software and specifications; and troubleshooting. Appendices contain basic cryptographic concepts; command equivalence and requirements charts; and complete code samples.
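As a rough illustration of one TPM concept mentioned above, the following Python sketch mimics the hash chaining behind a Platform Configuration Register (PCR) "extend" operation, which underlies attestation (an editorial simplification; a real TPM performs this in hardware and signs PCR values with a protected key):

```python
import hashlib

# Sketch of a TPM-style PCR "extend" operation, the hash chaining behind
# attestation (a simplification: a real TPM does this in hardware and
# signs the PCR values with a protected key).
def extend(pcr, measurement):
    """New PCR value = H(old PCR value || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# Boot components are measured, in order, into a PCR that starts at zero:
pcr = b"\x00" * 32
for component in [b"bootloader", b"kernel", b"initrd"]:
    pcr = extend(pcr, component)

# A verifier replaying the same measurements reaches the same value, so a
# change to any component, or to their order, is detectable:
expected = b"\x00" * 32
for component in [b"bootloader", b"kernel", b"initrd"]:
    expected = extend(expected, component)
print(pcr == expected)   # True
```

Because each value depends on the entire preceding chain, a PCR can only be driven to a given state by replaying exactly the same measurements in exactly the same order - the property that makes attestation evidence trustworthy.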
Up until a few years ago there were many different modelling languages available to software developers. However, this vast array of choice only served to hinder communication and as a result the Unified Modelling Language (UML) was born. Although the UML has its roots firmly in the software world, the benefits of adopting a standard visual notation have been recognised in many other fields, not least of which is the field of systems engineering. This book concentrates on systems-based applications, rather than the traditional software applications that are more usually associated with the UML. Now fully updated to reflect the changes to UML for its version 2.0 release, this new edition has been substantially re-written and includes new material on systems architectures and life cycle management.
Development of computer science techniques has significantly enhanced computational electromagnetic methods in recent years. Multi-core CPU computers and multiple-CPU workstations are popular today for scientific research and engineering computing, but how to achieve the best performance on existing hardware platforms remains a major challenge. In addition to multi-core computers and multiple-CPU workstations, distributed computing has become a primary trend due to the low cost of the hardware and the high performance of network systems. In this book we introduce a general hardware acceleration technique that can significantly speed up FDTD simulations and their applications to engineering problems without requiring any additional hardware devices.
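As a rough illustration of the kind of stencil computation an FDTD simulation performs (an editorial sketch; the grid size, source and update coefficient are invented, and no acceleration is applied), a minimal one-dimensional update loop looks like this:

```python
import math

# Minimal 1-D FDTD update loop (grid size, source and update coefficient
# are invented for illustration; no acceleration is applied).
def fdtd_1d(steps=100, size=200):
    ez = [0.0] * size                      # electric field at grid points
    hy = [0.0] * size                      # magnetic field at grid points
    for t in range(steps):
        # Update H from the spatial difference (curl) of E
        for k in range(size - 1):
            hy[k] += 0.5 * (ez[k + 1] - ez[k])
        # Update E from the spatial difference (curl) of H
        for k in range(1, size):
            ez[k] += 0.5 * (hy[k] - hy[k - 1])
        # Inject a Gaussian pulse at the centre of the grid
        ez[size // 2] += math.exp(-0.5 * ((t - 30) / 10.0) ** 2)
    return ez

fields = fdtd_1d()
print(len(fields))   # 200
```

The two inner loops touch every grid point at every time step, which is why FDTD workloads scale so directly with available parallel hardware and why acceleration techniques such as those in this book pay off.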
Workers in India program software applications, transcribe medical dictation online, chase credit card debtors, and sell mobile phones, diet pills, and mortgages for companies based in other countries around the world. While their skills and labor migrate abroad, these workers remain Indian citizens, living and working in India. A. Aneesh calls this phenomenon “virtual migration,” and in this groundbreaking study he examines the emerging “transnational virtual space” where labor and vast quantities of code and data cross national boundaries, but the workers themselves do not. Through an analysis of the work of computer programmers in India working for the American software industry, Aneesh argues that the programming code connecting globally dispersed workers through data servers and computer screens is the key organizing structure behind the growing phenomenon of virtual migration. This “rule of code,” he contends, is a crucial and underexplored aspect of globalization.
Aneesh draws on the sociology of science, social theory, and research on migration to illuminate the practical and theoretical ramifications of virtual migration. He combines these insights with his extensive ethnographic research in offices in three locations in India—in Delhi, Gurgaon, and Noida—and one in New Jersey. Aneesh contrasts virtual migration with “body shopping,” the more familiar practice of physically bringing programmers from other countries to work on site, in this case, bringing them from India to New Jersey. A significant contribution to the social theory of globalization, Virtual Migration maps the expanding transnational space where globalization is enacted via computer programming code.