This volume presents a selection of the best papers presented at the forty-first annual Conference on Computer Applications and Quantitative Methods in Archaeology. The theme for the conference was “Across Space and Time” and the papers explore a multitude of topics related to that concept, including databases, the semantic Web, geographical information systems, data collection and management, and more.
Ada, Countess of Lovelace (1815–52), daughter of the Romantic poet Lord Byron and the highly educated Anne Isabella, is sometimes called the world’s first computer programmer, and she has become an icon for women in technology today. But how did a young woman in the nineteenth century, without access to formal schooling or university education, acquire the knowledge and expertise to become a pioneer of computer science?
Although it was an unusual pursuit for women at the time, Ada Lovelace studied science and mathematics from a young age. This book uses previously unpublished archival material to explore her precocious childhood—from her curiosity about the science of rainbows to her design for a steam-powered flying horse—as well as her ambitious young adulthood. Active in Victorian London’s social and scientific elite alongside Mary Somerville, Michael Faraday, and Charles Dickens, Ada Lovelace became fascinated by the computing machines of Charles Babbage, whose ambitious, unbuilt invention known as the “Analytical Engine” inspired Lovelace to devise a table of mathematical formulae which many now refer to as the “first program.”
Ada Lovelace died at just thirty-six, but her work strikes a chord to this day, offering clear explanations of the principles of computing, and exploring ideas about computer music and artificial intelligence that have been realized in modern digital computers. Featuring detailed illustrations of the “first program” alongside mathematical models, correspondence, and contemporary images, this book shows how Ada Lovelace, with astonishing prescience, first investigated the key mathematical questions behind the principles of modern computing.
This collection considers new phenomena emerging in a convergence environment from the perspective of adaptation studies. The contributions take the most prominent methods within the field to offer reconsiderations of theoretical concepts and practices in participatory culture, transmedia franchises, and new media adaptations. The authors discuss phenomena ranging from mash-ups of novels and YouTube cover songs to negotiations of authorial control and interpretative authority between media producers and fan communities to perspectives on the fictional and legal framework of brands and franchises. In this fashion, the collection expands the horizons of both adaptation and transmedia studies and provides reassessments of frequently discussed (BBC’s Sherlock or the LEGO franchise) and previously largely ignored phenomena (self-censorship in transnational franchises, mash-up novels, or YouTube cover videos).
The term 'advanced robotics' came into use in the 1980s to describe the application of advanced sensors and new developments in cognitive science and artificial intelligence to the traditional robot. Today, advanced robots have moved far beyond the limitations of the crude 'pick-and-place' machines of the 1980s assembly line, and have a vast range of applications in manufacturing, construction, and health care, as well as in hostile environments such as space, underwater, and nuclear installations.
Advances in Cognitive Systems
Samia Nefti. The Institution of Engineering and Technology, 2010. Library of Congress BF311.A317 2010 | Dewey Decimal 153
This book has been inspired by the portfolio of recent scientific outputs from a range of European and national research initiatives in cognitive science. It presents an overview of recent developments in cognition research and unites the various emerging research strands within a single text as a reference point for progress in the subject. It also provides guidance for new researchers on the breadth of the field and the interconnections between the various research strands identified here.
This book describes some of the developments in Command, Control and Communication (C3) systems. The topics cover the design of large real-time man-machine systems, now a vital area of intensive scientific and financial investment. C3 systems are used for complex resource management and planning, and although this has a predominantly military connotation, similar systems are now being developed in civil-sector applications, public utilities, and banking.
Advances in Modal Logic, Volume 1
Edited by Marcus Kracht, Maarten de Rijke, Heinrich Wansing, and Michael Zakharyaschev. CSLI, 1998. Library of Congress BC199.M6A38 1998 | Dewey Decimal 160
Modal logic originated in philosophy as the logic of necessity and possibility. Nowadays it has reached a high level of mathematical sophistication and found many applications in a variety of disciplines, including theoretical and applied computer science, artificial intelligence, the foundations of mathematics, and natural language syntax and semantics.
This volume represents the proceedings of the first international workshop on Advances in Modal Logic, held in Berlin, Germany, October 8-10, 1996. It offers an up-to-date perspective on the field, with contributions covering its proof theory, its applications in knowledge representation, computing and mathematics, as well as its theoretical underpinnings.
"This collection is a useful resource for anyone working in modal logic. It contains both interesting surveys and cutting-edge technical results."
--Edwin D. Mares
The Bulletin of Symbolic Logic, March 2002
By specializing in a vertical market, companies can better understand their customers and bring more insight to clients in order to become an integral part of their businesses. This approach requires dedicated tools, which is where artificial intelligence (AI) and machine learning (ML) will play a major role. By adopting AI software and services, businesses can create predictive strategies, enhance their capabilities, better interact with customers, and streamline their business processes.
Although some IoT systems are built for simple event control where a sensor signal triggers a corresponding reaction, many events are far more complex, requiring applications to interpret the event using analytical techniques to initiate proper actions. Artificial intelligence of things (AIoT) applies intelligence to the edge and gives devices the ability to understand the data, observe the environment around them, and decide how best to act with minimum human intervention. With the power of AI, AIoT devices are not just messengers feeding information to control centers. They have evolved into intelligent machines capable of performing self-driven analytics and acting independently. A smart environment uses technologies such as wearable devices, IoT, and mobile internet to dynamically access information and connect people, materials, and institutions, and then actively manages and responds to the ecosystem's needs in an intelligent manner.
The continuing growth in the size and complexity of VLSI devices requires a parallel development of well-designed, efficient CAD tools. The majority of commercially available tools are based on an algorithmic approach to the problem and there is a continuing research effort aimed at improving these. The sheer complexity of the problem has, however, led to an interest in examining the applicability of expert systems and other knowledge based techniques to certain problems in the area and a number of results are becoming available. The aim of this book is to sample the present state-of-the-art in CAD for VLSI and it covers both newly developed algorithms and applications of techniques from the artificial intelligence community. The editors believe it will prove of interest to all engineers concerned with the design and testing of integrated circuits and systems.
Making arrowheads, blades, and other stone tools was once a survival skill and is still a craft practiced by thousands of flintknappers around the world. In the United States, knappers gather at regional “knap-ins” to socialize, exchange ideas and material, buy and sell both equipment and knapped art, and make stone tools in the company of others. In between these gatherings, the knapping community stays connected through newsletters and the Internet. In this book, avid knapper and professional anthropologist John Whittaker offers an insider’s view of the knapping community. He explores why stone tools attract modern people and what making them means to those who pursue this art. He describes how new members are incorporated into the knapping community, how novices learn the techniques of knapping and find their roles within the group, how the community is structured, and how ethics, rules, and beliefs about knapping are developed and transmitted. He also explains how the practice of knapping relates to professional archaeology, the trade in modern replicas of stone tools, and the forgery of artifacts. Whittaker’s book thus documents a fascinating subculture of American life and introduces the wider public to an ancient and still rewarding craft.
Winner of the Elizabeth Agee Prize in American Literature
An audacious, interdisciplinary study that combines the burgeoning fields of digital aesthetics and eco-criticism
In Animal, Vegetable, Digital, Elizabeth Swanstrom makes a confident and spirited argument for the use of digital art in support of ameliorating human engagement with the environment and suggests a four-part framework for analyzing and discussing such applications.
Through close readings of a panoply of texts, artworks, and cultural artifacts, Swanstrom demonstrates that the division popular culture has for decades observed between nature and technology is artificial. Not only is digital technology not necessarily a brick in the road to a dystopian future of environmental disaster, but digital art forms can be a revivifying bridge that returns people to a more immediate relationship to nature as well as their own embodied selves.
To analyze and understand the intersection of digital art and nature, Animal, Vegetable, Digital explores four aesthetic techniques: coding, collapsing, corresponding, and conserving. “Coding” denotes the way artists use operational computer code to blur distinctions between the reader and text, and, hence, the world. Inviting a fluid conception of the boundary between human and technology, “collapsing” voids simplistic assumptions about the human body’s innate perimeter. The process of translation between natural and human-readable signs that enables communication is described as “corresponding.” “Conserving” is the application of digital art by artists to democratize large- and small-scale preservation efforts.
A fascinating synthesis of literary criticism, communications and journalism, science and technology, and rhetoric that draws on such disparate phenomena as simulated environments, video games, and popular culture, Animal, Vegetable, Digital posits that partnerships between digital aesthetics and environmental criticism are possible that reconnect humankind to nature and reaffirm its kinship with other living and nonliving things.
Markets run on information. Buyers make decisions by relying on their knowledge of the products available, and sellers decide what to produce based on their understanding of what buyers want. But the distribution of market information has changed, as consumers increasingly turn to sources that act as intermediaries for information—companies like Yelp and Google. Antitrust Law in the New Economy considers a wide range of problems that arise around one aspect of information in the marketplace: its quality.
Sellers now have the ability and motivation to distort the truth about their products when they make data available to intermediaries. And intermediaries, in turn, have their own incentives to skew the facts they provide to buyers, both to benefit advertisers and to gain advantages over their competition. Consumer protection law is poorly suited for these problems in the information economy. Antitrust law, designed to regulate powerful firms and prevent collusion among producers, is a better choice. But the current application of antitrust law pays little attention to information quality.
Mark Patterson discusses a range of ways in which data can be manipulated for competitive advantage and exploitation of consumers (as happened in the LIBOR scandal), and he considers novel issues like “confusopoly” and sellers’ use of consumers’ personal information in direct selling. Antitrust law can and should be adapted for the information economy, Patterson argues, and he shows how courts can apply antitrust to address today’s problems.
Snapchat. WhatsApp. Ashley Madison. Fitbit. Tinder. Periscope. How do we make sense of how apps like these, and thousands of others, have embedded themselves into our daily routines, permeating the background of ordinary life and standing at the ready to be used on our smartphones and tablets? When we look at any single app, it's hard to imagine how such a small piece of software could be particularly notable. But if we look at a collection of them, we see a bigger picture that reveals how the quotidian activities apps encompass are far from banal: connecting with friends (and strangers and enemies), sharing memories (and personally identifying information), making art (and trash), navigating spaces (and reshaping places in the process). While the sheer number of apps is overwhelming, as is the range of activities they address, each one offers an opportunity for us to seek out meaning in the mundane. Appified is the first scholarly volume to examine individual apps within the wider historical and cultural context of media and cultural studies scholarship, attuned to issues of politics and power, identity and the everyday.
An engrossing origin story for the personal computer—showing how the Apple II’s software helped a machine transcend from hobbyists’ plaything to essential home appliance.
Skip the iPhone, the iPod, and the Macintosh. If you want to understand how Apple Inc. became an industry behemoth, look no further than the 1977 Apple II. Designed by the brilliant engineer Steve Wozniak and hustled into the marketplace by his Apple cofounder Steve Jobs, the Apple II became one of the most prominent personal computers of this dawning industry.
The Apple II was a versatile piece of hardware, but its most compelling story isn’t found in the feat of its engineering, the personalities of Apple’s founders, or the way it set the stage for the company’s multi-billion-dollar future. Instead, historian Laine Nooney suggests that what made the Apple II iconic was its software. In software, we discover the material reasons people bought computers. Not to hack, but to play. Not to code, but to calculate. Not to program, but to print. The story of personal computing in the United States is not about the evolution of hackers—it’s about the rise of everyday users.
Recounting a constellation of software creation stories, Nooney offers a new understanding of how the hobbyists’ microcomputers of the 1970s became the personal computer we know today. From iconic software products like VisiCalc and The Print Shop to historic games like Mystery House and Snooper Troops to long-forgotten disk-cracking utilities, The Apple II Age offers an unprecedented look at the people, the industry, and the money that built the microcomputing milieu—and why so much of it converged around the pioneering Apple II.
Artificial intelligence (AI) is increasingly being deployed in many hospitals and healthcare settings to help improve health care service delivery. Machine learning (ML) and deep learning (DL) tools can help guide physicians with tasks such as diagnosing and detecting diseases, and can assist with medical decision making.
CAA is the foremost conference on digital archaeology, and this volume offers a comprehensive and up-to-date reference to the state of the art. This volume contains a selection of the best papers presented at the 40th Annual Conference of Computer Applications and Quantitative Methods in Archaeology (CAA), held in Southampton from 26 to 29 March 2012. The papers, all written and peer-reviewed by experts in the field of digital archaeology, explore a multitude of topics to showcase ground-breaking technologies and best practice from various archaeological and informatics disciplines, with a variety of case studies from all over the world.
Probes the development of information management after World War II and its consequences for public memory and human agency
We are now living in the richest age of public memory. From museums and memorials to the vast digital infrastructure of the internet, access to the past is only a click away. Even so, the methods and technologies created by scientists, espionage agencies, and information management coders and programmers have drastically delimited the ways that communities across the globe remember and forget our wealth of retrievable knowledge.
In Architects of Memory: Information and Rhetoric in a Networked Archival Age, Nathan R. Johnson charts turning points where concepts of memory became durable in new computational technologies and modern memory infrastructures took hold. He works through both familiar and esoteric memory technologies—from the card catalog to the book cart to Zatocoding and keyword indexing—as he delineates histories of librarianship and information science and provides a working vocabulary for understanding rhetoric’s role in contemporary memory practices.
This volume draws upon the twin concepts of memory infrastructure and mnemonic technê to illuminate the seemingly opaque wall of mundane algorithmic techniques that determine what is worth remembering and what should be forgotten. Each chapter highlights a conflict in the development of twentieth-century librarianship and its rapidly evolving competitor, the discipline of information science. As these two disciplines progressed, they contributed practical techniques and technologies for making sense of explosive scientific advancement in the wake of World War II. Taming postwar science became part and parcel of practices and information technologies that undergird uncountable modern communication systems, including search engines, algorithms, and databases for nearly every national clearinghouse of the twenty-first century.
From a technological perspective, these essays address current theories of consciousness and subjective experience, embracing new ideas from the physical sciences alongside more spiritual and artistic aspects of human existence.
This volume develops from the studies published in Roy Ascott's highly successful Reframing Consciousness, documenting the very latest research from those connected with the CAiiA-STAR centre and its associated conferences. Their work embodies artistic and theoretical research in new media and telematics including aspects of artificial life, robotics, technoetics, performance, computer music and intelligent architecture, to growing international acclaim.
Tracing the evolution of the Italian avant-garde’s pioneering experiments with art and technology and their subversion of freedom and control
In postwar Italy, a group of visionary artists used emergent computer technologies as both tools of artistic production and a means to reconceptualize the dynamic interrelation between individual freedom and collectivity. Working contrary to assumptions that the rigid, structural nature of programming limits subjectivity, this book traces the multifaceted practices of these groundbreaking artists and their conviction that technology could provide the conditions for a liberated social life.
Situating their developments within the context of the Cold War and the ensuing crisis among the Italian left, Arte Programmata describes how Italy’s distinctive political climate fueled the group’s engagement with computers, cybernetics, and information theory. Creating a broad range of immersive environments, kinetic sculptures, domestic home goods, and other multimedia art and design works, artists such as Bruno Munari, Enzo Mari, and others looked to the conceptual frameworks provided by this new technology to envision a way out of the ideological impasses of the age.
Showcasing the ingenuity of Italy’s earliest computer-based art, this study highlights its distinguishing characteristics while also exploring concurrent developments across the globe. Centered on the relationships between art, technology, and politics, Arte Programmata considers an important antecedent to the digital age.
Earth observation (EO) involves the collection, analysis, and presentation of data in order to monitor and assess the status and changes in natural and built environments. This technology has many applications including weather forecasting, tracking biodiversity, measuring land-use change, monitoring and responding to natural disasters, managing natural resources, monitoring emerging diseases and health risks, and predicting, adapting to, and mitigating climate change.
Research in artificial intelligence has developed many techniques and methodologies that can be either adapted or used directly to solve complex power system problems. A variety of such problems are covered in this book including reactive power control, alarm analysis, fault diagnosis, protection systems and load forecasting. Methods such as knowledge-based (expert) systems, fuzzy logic, neural networks and genetic algorithms are all first introduced and then investigated in terms of their applicability in the power systems field. The book, therefore, serves as both an introduction to the use of artificial intelligence techniques for those from a power systems background and as an overview of the power systems implementation area for those from an artificial intelligence computing or control background. It is structured so that it is suitable for various levels of reader, covering basic principles as well as applications and case studies. The most popular methods and the most fruitful application fields are considered in more detail. The book contains contributions from top international authors and will be an extremely useful text for all those with an interest in the field.
Artificial Intelligence is a seemingly neutral technology, but it is increasingly used to manage workforces and make decisions to hire and fire employees. Its proliferation in the workplace gives the impression of a fairer, more efficient system of management. A machine can't discriminate, after all. Augmented Exploitation explores the reality of the impact of AI on workers' lives. While the consensus is that AI is a completely new way of managing a workplace, the authors show that, on the contrary, AI is used as most technologies are used under capitalism: as a smokescreen that hides the deep exploitation of workers. Going beyond platform work and the gig economy, the authors explore emerging forms of algorithmic governance and AI-augmented apps that have been developed to utilise innovative ways to collect data about workers and consumers, as well as to keep wages and worker representation under control. They also show that workers are not taking this lying down, providing case studies of new and exciting forms of resistance that are springing up across the globe.
Critical systems and infrastructure rely heavily on ICT systems and networks where security issues are a major concern. Authentication methods verify that messages come from trusted sources and guarantee the smooth flow of information and data. In this edited reference, the authors present state-of-the-art research and development in authentication technologies, including challenges and applications for Cloud technologies, IoT, and Big Data. Topics covered include authentication; cryptographic algorithms; digital watermarking; biometric authentication; block ciphers with applications in IoT; identification schemes for Cloud and IoT; authentication issues for Cloud applications; cryptography engines for Cloud based on FPGA; and data protection laws.
From hidden connections in big data to bots spreading fake news, journalism is increasingly computer-generated. An expert in computer science and media explains the present and future of a world in which news is created by algorithm.
Amid the push for self-driving cars and the roboticization of industrial economies, automation has proven one of the biggest news stories of our time. Yet the wide-scale automation of the news itself has largely escaped attention. In this lively exposé of that rapidly shifting terrain, Nicholas Diakopoulos focuses on the people who tell the stories—increasingly with the help of computer algorithms that are fundamentally changing the creation, dissemination, and reception of the news.
Diakopoulos reveals how machine learning and data mining have transformed investigative journalism. Newsbots converse with social media audiences, distributing stories and receiving feedback. Online media has become a platform for A/B testing of content, helping journalists to better understand what moves audiences. Algorithms can even draft certain kinds of stories. These techniques enable media organizations to take advantage of experiments and economies of scale, enhancing the sustainability of the fourth estate. But they also place pressure on editorial decision-making, because they allow journalists to produce more stories or sometimes better ones, but rarely both.
Automating the News responds to hype and fears surrounding journalistic algorithms by exploring the human influence embedded in automation. Though the effects of automation are deep, Diakopoulos shows that journalists are at little risk of being displaced. With algorithms at their fingertips, they may work differently and tell different stories than they otherwise would, but their values remain the driving force behind the news. The human–algorithm hybrid thus emerges as the latest embodiment of an age-old tension between commercial imperatives and journalistic principles.
Automating technologies threaten to usher in a workless future. But this can be a good thing—if we play our cards right.
Human obsolescence is imminent. The factories of the future will be dark, staffed by armies of tireless robots. The hospitals of the future will have fewer doctors, depending instead on cloud-based AI to diagnose patients and recommend treatments. The homes of the future will anticipate our wants and needs and provide all the entertainment, food, and distraction we could ever desire.
To many, this is a depressing prognosis, an image of civilization replaced by its machines. But what if an automated future is something to be welcomed rather than feared? Work is a source of misery and oppression for most people, so shouldn’t we do what we can to hasten its demise? Automation and Utopia makes the case for a world in which, free from need or want, we can spend our time inventing and playing games and exploring virtual realities that are more deeply engaging and absorbing than any we have experienced before, allowing us to achieve idealized forms of human flourishing.
The idea that we should “give up” and retreat to the virtual may seem shocking, even distasteful. But John Danaher urges us to embrace the possibilities of this new existence. The rise of automating technologies presents a utopian moment for humankind, providing both the motive and the means to build a better future.