Works: 6 works in 6 publications in 1 language and 6 library holdings
Studies and Analyses of Aided Adversarial Decision Making. Phase 2: Research on Human Trust in Automation ( file )
1 edition published in 1998 in English and held by 1 library worldwide
This report describes the second phase of work conducted at the Center for Multi-source Information Fusion at the State University of New York at Buffalo. This work focused on Aided Adversarial Decision Making (AADM) in Information Warfare (IW) environments. Previous work examined informational dependencies and vulnerabilities in AADM to offensive IW operations. In particular, human trust in automated, information warfare environments was identified as a factor which may contribute to these vulnerabilities and dependencies. Given that offensive IW operations may interfere with automated, data-fusion-based decision aids, it is necessary to understand how personnel may rely on or trust these aids when appropriate (e.g., when the information provided by the aids is sound), and recognize the need to seek other information (i.e., to "distrust" the aid) when the information system has been attacked. To address these questions, this report details background research in the areas of human trust in automated systems and sociological findings on human trust, details the development of an empirically based scale to measure trust, provides a framework for investigating issues of human trust and its effect on performance in an AADM-IW environment, and describes the requirements for a laboratory designed to conduct these investigations.
Foundations for an Empirically Determined Scale of Trust in Automated Systems ( file )
1 edition published in 1998 in English and held by 1 library worldwide
One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to measure trust effectively. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A three-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired comparison study, was performed in order to better understand similarities and differences in the concepts of trust and distrust, and between the different types of trust. Results indicated that trust and distrust can be considered opposites, rather than comprising different concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
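The cluster-analysis step described above can be sketched as follows. This is a minimal illustration only: the word list, the pairwise dissimilarities, and the merge threshold below are invented for demonstration, whereas the actual study derived its groupings from participants' elicitation and paired-comparison data.

```python
# Illustrative sketch: grouping trust-related words into candidate
# scale factors via agglomerative (single-linkage) clustering.
# The words and dissimilarity values are hypothetical.

words = ["dependable", "reliable", "deceptive", "misleading", "familiar", "predictable"]

# Symmetric dissimilarity matrix (0 = synonymous, 1 = unrelated), as
# might be obtained by scaling paired-comparison judgments.
D = [
    [0.0, 0.1, 0.9, 0.8, 0.5, 0.3],
    [0.1, 0.0, 0.9, 0.9, 0.5, 0.2],
    [0.9, 0.9, 0.0, 0.1, 0.8, 0.9],
    [0.8, 0.9, 0.1, 0.0, 0.8, 0.8],
    [0.5, 0.5, 0.8, 0.8, 0.0, 0.4],
    [0.3, 0.2, 0.9, 0.8, 0.4, 0.0],
]

def single_linkage(dist, threshold):
    """Repeatedly merge the two closest clusters (closest-pair
    distance) until the nearest pair is farther apart than `threshold`."""
    clusters = [[i] for i in range(len(dist))]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        if d > threshold:
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

groups = single_linkage(D, threshold=0.4)
for g in groups:
    print([words[i] for i in g])
```

With this toy data the trust-like words and the distrust-like words fall into separate clusters, mirroring the study's finding that trust and distrust behave as opposite poles of one concept rather than as independent constructs.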
Intentional Systems, Intentional Stance, and Explanations of Intentional Behavior ( file )
1 edition published in 2000 in English and held by 1 library worldwide
This paper is a limited overview of a preliminary research task carried out for the National Security Agency (NSA), having to do with developing a basic understanding of the notions and concepts of intent, intentionality, and intentional behavior. These notions clearly have significant implications for intelligence analysis and especially predictive intelligence; modern-day asymmetric threats and operations other than war make the understanding and exploitation of these notions yet more urgent. The project's eventual goal is to develop and conduct research-oriented experiments with a prototype automated intent-estimating aid based on fused SIGINT-type data, as a path toward developing deeper knowledge about these concepts. This research effort is in support of NSA's WARGODDESS program, which is developing a SIGINT-fusion capability for operational use. As will be seen from the paper, this first study effort started with a review of the basic research literature in the philosophical, cognitive, and legal arenas, with the purpose of evolving a possibly new but minimally consistent and synthesized understanding of these basic concepts. It will also be seen that such understanding will not come easily and will require a serious continuing research program if optimal progress is to be made.
On the Scientific Foundations of Level 2 Fusion ( file )
1 edition published in 2004 in English and held by 1 library worldwide
Empirical Investigations of Trust-Related Systems Vulnerabilities in Aided, Adversarial Decision Making ( file )
1 edition published in 2000 in English and held by 1 library worldwide
In modern military environments, command-and-control decisions are increasingly supported by information systems which collect, analyze, and display information from multiple sources and sensors, to give decision-makers real-time information about an evolving tactical situation. Aided adversarial decision making (AADM) refers to military command-and-control decisions in such environments, in which computerized aids are available to groups of co-located and distributed decision-makers, and in which there is a potential for adversarial forces to tamper with and disrupt such aids. In aided, adversarial decision-making environments, various threats from and offensive opportunities for Information Warfare (IW) activities may exist. In these situations, it is crucial to understand the effect of degraded or altered information on human decision-makers, particularly when that information may be intentionally manipulated. The research described in this report continues two prior phases of research which focused on defining, characterizing, and (where possible) modeling the dependencies and vulnerabilities of AADM on components of information, and considered the role of the human decision-maker in AADM, developing a theoretical framework to investigate issues of trust in AADM and a scale to measure human-automation trust. This report further develops the theoretical approach begun earlier, describes the completed trust scale, and describes an experimental test bed and an initial experiment which tested the theoretical framework. Additionally, an initial description of how cultural issues in AADM can be represented by formalisms in different decision-making models is presented, and experimentation in the area of graphical data presentation and trust in AADM is described.
Comparison of Techniques for Ground Target Tracking ( file )
1 edition published in 2000 in English and held by 1 library worldwide
Many studies have addressed target-tracking problems in which targets like aircraft and missiles can move freely in the air without hard spatial constraints. Tracking ground targets is a completely different case. Variable terrain structures not only limit the target's moving capability, but also degrade the quality of measurement data. This paper describes an exploratory research project which studied the tracking of a single ground target via traditional and atypical approaches. Traditional Kalman techniques were implemented that take into account the additional information provided by ground restrictions (in our study, a road network) in the tracking process. Additionally, another tracker using the Hidden Markov Model (HMM) with a transition array was developed under the same scenario. The results showed that Kalman techniques with available road information significantly outperform the conventional Kalman approaches in terms of longitudinal and transversal errors at the time when the target maneuvers. The proposed adaptive HMM tracker, composed of several regional HMM trackers, is not sensitive to transversal maneuvers, but may yield large longitudinal errors at the time when the target approaches the boundary of each subscenario.
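One common way to exploit road information, sketched below, is to project each noisy position measurement onto the known road and then run a one-dimensional constant-velocity Kalman filter on the along-road coordinate, so that transversal noise is eliminated by construction. This is a hedged illustration of the general idea, not the paper's implementation; the road geometry, noise values, and measurement sequence are invented.

```python
# Sketch: road-constrained tracking by projecting 2-D measurements
# onto a straight road, then Kalman-filtering the along-road position.
# All numbers here are hypothetical.

import math

# Road: straight line through the origin with unit direction u.
u = (math.cos(math.radians(30)), math.sin(math.radians(30)))

def project_onto_road(z):
    """Scalar along-road coordinate of a 2-D measurement (dot product with u)."""
    return z[0] * u[0] + z[1] * u[1]

def kalman_1d(measurements, dt=1.0, q=0.01, r=0.5):
    """1-D constant-velocity Kalman filter; state is [position, velocity]."""
    x = [measurements[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    estimates = []
    for z in measurements:
        # Predict with the constant-velocity model x' = F x, P' = F P F^T + Q.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with the scalar position measurement (H = [1, 0]).
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        estimates.append(x[0])
    return estimates

# Target moves 1 unit per step along the road; measurements are true
# positions with a fixed off-road offset standing in for sensor error.
true_s = [float(k) for k in range(10)]
noisy_2d = [(s * u[0] + 0.3, s * u[1] - 0.2) for s in true_s]
along_road = [project_onto_road(z) for z in noisy_2d]
est = kalman_1d(along_road)
print(est[-1])
```

Because the off-road component of the measurement error is discarded by the projection, the filter only has to estimate longitudinal motion, which is why road-aware trackers reduce transversal error during maneuvers relative to unconstrained 2-D filtering.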