Jian, Jiun-Yin

Works: 5 works in 5 publications in 1 language and 6 library holdings
Roles: Author
Classifications: AC801
Publication Timeline (chart of publications by and about Jiun-Yin Jian)
Most widely held works by Jiun-Yin Jian
An analysis and theoretical development of mental learning curves by Jiun-Yin Jian (Archival Material)
1 edition published in 2004 in English and held by 2 libraries worldwide
Foundations for an empirically determined scale of trust in computerized systems : distinguishing concepts and types of trust by Jiun-Yin Jian (Archival Material)
1 edition published in 1998 in English and held by 1 library worldwide
Empirical Investigations of Trust-Related Systems Vulnerabilities in Aided, Adversarial Decision Making (file)
1 edition published in 2000 in English and held by 0 libraries worldwide
In modern military environments, command-and-control decisions are increasingly supported by information systems that collect, analyze, and display information from multiple sources and sensors to give decision-makers real-time information about an evolving tactical situation. Aided-adversarial decision-making (AADM) refers to military command-and-control decisions in such environments, in which computerized aids are available to groups of co-located and distributed decision-makers, and in which there is a potential for adversarial forces to tamper with and disrupt such aids. In these environments, various threats from, and offensive opportunities for, Information Warfare (IW) activities may exist. In such situations, it is crucial to understand the effect of degraded or altered information on human decision-makers, particularly when that information may be intentionally manipulated. The research described in this report continues two prior phases of research that focused on defining, characterizing, and (where possible) modeling the dependencies and vulnerabilities of AADM on components of information, considered the role of the human decision-maker in AADM, and developed a theoretical framework for investigating issues of trust in AADM along with a scale to measure human-automation trust. This report further develops the theoretical approach begun earlier, describes the completed trust scale, and describes an experimental test bed and an initial experiment that tested the theoretical framework. Additionally, an initial description of how cultural issues in AADM can be represented by formalisms in different decision-making models is presented, and experimentation in the area of graphical data presentation and trust in AADM is described.
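The abstract's central question is how an operator's reliance on a decision aid should respond when the aid's output may have been covertly degraded. Purely as an illustration of that idea (nothing below is taken from the report's test bed or models), the following Python sketch simulates a binary detection task in which an operator follows the aid only while a running trust estimate stays above a threshold; the trust-update rule, accuracy figures, and tampering point are all assumptions.

import random

def simulate(trials=200, aid_accuracy=0.90, tamper_at=100, tampered_accuracy=0.55,
             unaided_accuracy=0.65, trust=0.8, threshold=0.5, learning_rate=0.05, seed=0):
    # Toy model of aided, adversarial decision making. All parameters and the
    # trust-update rule are illustrative assumptions, not values from the report.
    rng = random.Random(seed)
    correct = 0
    for t in range(trials):
        truth = rng.random() < 0.5                          # ground-truth state on this trial
        accuracy = tampered_accuracy if t >= tamper_at else aid_accuracy
        aid_report = truth if rng.random() < accuracy else (not truth)
        if trust >= threshold:                              # operator relies on the aid
            decision = aid_report
        else:                                               # operator falls back on unaided judgment
            decision = truth if rng.random() < unaided_accuracy else (not truth)
        # Feedback-based calibration: trust rises after confirmed aid hits, falls after misses.
        trust = min(1.0, max(0.0, trust + learning_rate * (1.0 if aid_report == truth else -1.0)))
        correct += int(decision == truth)
    return correct / trials

print(f"Proportion correct with mid-run tampering: {simulate():.2f}")
print(f"Proportion correct with an uncompromised aid: {simulate(tampered_accuracy=0.90):.2f}")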
Foundations for an Empirically Determined Scale of Trust in Automated Systems (file)
1 edition published in 1998 in English and held by 0 libraries worldwide
One component in the successful use of automated systems is the extent to which people trust the automation to perform effectively. In order to understand the relationship between trust in computerized systems and the use of those systems, we need to be able to measure trust effectively. Although questionnaires regarding trust have been used in prior studies, these questionnaires were theoretically rather than empirically generated and did not distinguish between three potentially different types of trust: human-human trust, human-machine trust, and trust in general. A three-phased experiment, comprising a word elicitation study, a questionnaire study, and a paired-comparison study, was performed in order to better understand similarities and differences between the concepts of trust and distrust, and between the different types of trust. Results indicated that trust and distrust can be considered opposites rather than distinct concepts. Components of trust, in terms of words related to trust, were similar across the three types of trust. Results obtained from a cluster analysis were used to identify 12 potential factors of trust between people and automated systems. These 12 factors were then used to develop a proposed scale to measure trust in automation.
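The abstract does not reproduce the resulting instrument, but the end product it describes is a short self-report scale built from 12 trust-related factors. As a purely hypothetical illustration of how such a scale might be scored (the item count is from the abstract; the Likert format, which items are distrust-worded, and the scoring rule are assumptions), a minimal Python sketch:

from statistics import mean

REVERSE_SCORED = {0, 1, 2, 3, 4}   # indices of distrust-worded items (assumed for illustration)
SCALE_MIN, SCALE_MAX = 1, 7        # 7-point Likert response format (assumed)

def score_trust(responses):
    # Return a single trust score as the mean item response after reverse-scoring
    # the distrust-worded items. The scoring rule is an illustrative assumption.
    if len(responses) != 12:
        raise ValueError("expected responses to all 12 items")
    adjusted = [(SCALE_MIN + SCALE_MAX - r) if i in REVERSE_SCORED else r
                for i, r in enumerate(responses)]
    return mean(adjusted)

# Example: a respondent who leans slightly toward distrusting the aid.
print(score_trust([5, 4, 6, 5, 4, 3, 4, 2, 3, 3, 4, 3]))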
Studies and Analyses of Aided Adversarial Decision Making. Phase 2: Research on Human Trust in Automation (file)
1 edition published in 1998 in English and held by 0 libraries worldwide
This report describes the second phase of work conducted at the Center for Multi-source Information Fusion at the State University of New York at Buffalo. This work focused on Aided Adversarial Decision Making (AADM) in Information Warfare (IW) environments. Previous work examined informational dependencies and vulnerabilities in AADM to offensive IW operations. In particular, human trust in automated, information-warfare environments was identified as a factor that may contribute to these vulnerabilities and dependencies. Given that offensive IW operations may interfere with automated, data-fusion-based decision aids, it is necessary to understand how personnel may rely on or trust these aids when appropriate (e.g., when the information provided by the aids is sound), and recognize the need to seek other information (i.e., to "distrust" the aid) when the information system has been attacked. To address these questions, this report details background research in the areas of human trust in automated systems and sociological findings on human trust, details the development of an empirically based scale to measure trust, provides a framework for investigating issues of human trust and its effect on performance in an AADM-IW environment, and describes the requirements for a laboratory designed to conduct these investigations.
English (5)