WorldCat Identities

GEORGIA INST OF TECH ATLANTA SCHOOL OF INFORMATION AND COMPUTER SCIENCE

Overview
Works: 52 works in 58 publications in 1 language and 60 library holdings
Publication Timeline
Most widely held works by GEORGIA INST OF TECH ATLANTA SCHOOL OF INFORMATION AND COMPUTER SCIENCE
Research Program in Fully Distributed Processing Systems( Book )

5 editions published between 1981 and 1982 in English and held by 5 WorldCat member libraries worldwide

The Georgia Tech Research Program in Fully Distributed Processing Systems is a comprehensive investigation of data processing systems in which both the physical and logical components are extremely loosely coupled while operating with a high degree of control autonomy at the component level. The definition of the specific class of multiple computer systems being investigated, and of the operational characteristics and features of those systems, is motivated by the desire to advance the state of the art for that class of systems that will deliver a high proportion of the benefits currently being claimed for distributed processing systems. The scope of individual topics being investigated under this program ranges from formal modeling and theoretical studies to empirical examinations of prototype systems and simulation models. Also included within the scope of the program are areas such as the utilization of FDPSs and their interaction with management operations and structure
Distributed and decentralized control in fully distributed processing systems by P Enslow( Book )

3 editions published between 1981 and 1983 in English and held by 3 WorldCat member libraries worldwide

An essential component of a Fully Distributed Processing System (FDPS) is the distributed and decentralized control. This component unifies the management of the resources of the FDPS and provides system transparency to the user. In this dissertation, the problems of distributed and decentralized control are analyzed, and fundamental characteristics of an FDPS executive control are identified. Several models of control have been constructed in order to demonstrate the variety of resource management strategies available to system designers and to provide some insight into the relative merits of the various strategies. The performance of four control models has been analyzed by means of simulation experiments
On Mutation by Allen Troy Acree( Book )

1 edition published in 1980 in English and held by 2 WorldCat member libraries worldwide

Program Mutation is a method for testing computer programs which is effective at uncovering errors and is less expensive to apply than other techniques. Working mutation systems have demonstrated that mutation analysis can be performed at an attractive cost on realistic programs. In this work, the effectiveness of the method is studied by experiments with programs in the target application spaces. (Author)
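
As a rough, hedged illustration of the technique the abstract describes (the target function price, the operator-swapping mutation rule, and the test values are inventions of this sketch, not material from the report), the following Python fragment generates mutants of a tiny program and measures how many of them a small test suite kills:

    # Minimal sketch of program mutation, assuming a toy target function and a
    # hand-written test suite; none of this is taken from the report itself.
    # Requires Python 3.9+ for ast.unparse.
    import ast

    SRC = "def price(qty, unit): return qty * unit + 2"

    class SwapOps(ast.NodeTransformer):
        """Mutation operator: change the i-th binary operator (* -> +, otherwise -> -)."""
        def __init__(self, which):
            self.which = which
            self.count = -1
        def visit_BinOp(self, node):
            self.generic_visit(node)
            self.count += 1
            if self.count == self.which:
                node.op = ast.Add() if isinstance(node.op, ast.Mult) else ast.Sub()
            return node

    def compile_fn(src):
        env = {}
        exec(compile(src, "<mutant>", "exec"), env)
        return env["price"]

    tests = [((3, 4), 14), ((0, 5), 2)]            # (arguments, expected result)

    original = compile_fn(SRC)
    assert all(original(*args) == want for args, want in tests)

    n_mutants = 2                                   # SRC contains two binary operators
    killed = 0
    for i in range(n_mutants):
        tree = SwapOps(i).visit(ast.parse(SRC))
        mutant = compile_fn(ast.unparse(tree))
        if any(mutant(*args) != want for args, want in tests):
            killed += 1                             # the test suite distinguishes this mutant

    print(f"mutation score: {killed}/{n_mutants}")

A test suite that kills every non-equivalent mutant is, in the mutation-analysis sense, adequate for the program under test; the cost question raised in the abstract is how many mutants must be generated and executed to reach that point.
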
Interprocess Communication in Highly Distributed Systems - A Workshop Report - 20 to 22 November 1978( Book )

1 edition published in 1979 in English and held by 2 WorldCat member libraries worldwide

Interprocess Communication (IPC) has been recognized as a critical issue in the design and implementation of all modern operating systems. IPC policies and mechanisms are even more central in the design of highly distributed processing systems - systems exhibiting short-term dynamic changes in the availability of physical and logical resources as well as interconnection topology. A workshop on this subject was held at the Georgia Institute of Technology in November 1978. Four working groups, (1) Addressing, Naming, and Security, (2) Interprocess Synchronization, (3) Interprocess Mechanisms, and (4) Theory and Formalism, addressed the current state of the art in these areas as well as problems and future research directions. This report incorporates much of the material and working papers from those groups as well as selected references useful in understanding the topic. (Author)
Statistical Measures of Software Reliability by Richard A DeMillo( Book )

1 edition published in 1980 in English and held by 1 WorldCat member library worldwide

Estimating program reliability presents many of the same problems as measuring software performance and cost: the central technical issue concerns the existence of an independent objective scale upon which may be based a qualitative judgement of the ability of a given program to function as intended in a specified environment over a specified time interval. Several scales have already been proposed. While these concepts may have independent interest, they fail to capture the most significant aspect of reliability estimation as it applies to software: most software is unreliable by these standards, but the degree of unreliability is not quantified. A useful program which has not been proved correct is unreliable, but so is, say, the null program (unless by some perversity of specification the null program satisfies the designer); an operationally meaningful scale of reliability should distinguish these extremes. In the sequel, we will sketch the outlines of the traditional theory that is most relevant to software reliability estimation, give a brief critical analysis of the use of the traditional theory in measuring reliability, and describe another use of the R(t) measure which we believe more closely fits the intuitive requirements of the scale we asked for above
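
For orientation, R(t) here is the classical reliability function: the probability of failure-free operation over the interval [0, t]. The Python sketch below assumes a constant failure rate (an exponential model chosen only for this illustration, not a claim about the report's proposed use of the measure) and estimates that rate from hypothetical inter-failure times:

    # Hedged illustration of the classical reliability function R(t); the
    # exponential (constant failure rate) model and the failure data are
    # assumptions of this sketch.
    import math

    inter_failure_hours = [120.0, 310.0, 95.0, 220.0]   # hypothetical observations

    # Maximum-likelihood estimate of the failure rate under the exponential model.
    rate = len(inter_failure_hours) / sum(inter_failure_hours)

    def reliability(t_hours, lam=rate):
        """Probability of failure-free operation over [0, t_hours]."""
        return math.exp(-lam * t_hours)

    for t in (10, 100, 500):
        print(f"R({t} h) = {reliability(t):.3f}")
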
Problem Solving and Learning in a Natural Task Domain( Book )

1 edition published in 1988 in English and held by 1 WorldCat member library worldwide

Based on work done in Year 1 of the contract analyzing protocols of students solving diagnostic problems, work in Year 2 of the contract has taken two directions: the creation of AI simulation models to explain several of the learning processes used by students and the creation of an experimental tool and formulation of experiments to find out more about how people learn during problem solving and instruction. This report is divided into two sections. In the first section, the experimental tool and experiments are discussed. In section two, beginning on page 27, some of the AI simulation models are presented. (SDW)
Social Processes and Proofs of Theorems and Programs (Revised Version)( Book )

1 edition published in 1978 in English and held by 1 WorldCat member library worldwide

It has been extensively argued that the art and science of programming should strive to become more like mathematics. In this paper, we argue that this point of view is correct, but that the reasons usually given for it are wrong. We present our view that mathematics is an ongoing social process rather than a formal one, and that the formalistic view of mathematics is misleading and destructive when applied to the proving of software. (Author)
Software project forecasting by Richard A DeMillo( Book )

1 edition published in 1980 in English and held by 1 WorldCat member library worldwide

We have argued that a major use of software metrics is in the forecasting problem for software projects. By analogy with weather forecasting, we may characterize the current state of knowledge in software forecasting as the gathering of portents. While these may be useful and sometimes decisive in project management, they are prescientific and qualitative. Further, it seems very unlikely that the portents can be developed into a useful theory of forecasting. To develop scientific forecasting tools, a rational way of predicting the future from historical primary data is required. It is also important that the primary data and the measurements used to obtain it satisfy some basic methodological requirements -- for example, the hypotheses developed from the measurements should be meaningful in the sense implied by measurement theory. The statistical approach, seeking to predict future events on the basis of historical patterns, seems to be an attractive short-range approach to the forecasting problem. The goal of the exact method is to be able to apply large-scale computation to many micropredictions to synthesize a quantitative forecast
Operational survivability in gracefully degrading distributed processing systems by Edith Waisbrot Martin( Book )

1 edition published in 1980 in English and held by 1 WorldCat member library worldwide

A simulator was designed and developed that models possible distributed system network topologies, distributed system application topologies, and their effect on application system performance as the configuration of the distributed system network is continuously and arbitrarily reduced. The objective of the model is to aid in the development of a measure of survivability which can subsequently be used to evaluate and compare alternative distributed system designs for specific battlefield applications
Testing COBOL Programs by Mutation. Volume I. Introduction to the CMS.1 System( Book )

1 edition published in 1980 in English and held by 1 WorldCat member library worldwide

Program mutation is a testing technique which has been applied to Fortran programs (ABDLS). This thesis describes the application of mutation to the COBOL language in an automated program mutation system: the development of a COBOL Mutation System (CMS.1), its testing using Fortran mutation analysis, and the subset of COBOL that is supported by CMS.1. The internal representation selected for the COBOL source statements and a description of the mutant operators implemented in CMS.1 are also supplied. (Author)
Automating the Exchange of Military Personnel Data among Selected Army Organizations. Volume I( Book )

1 edition published in 1981 in English and held by 1 WorldCat member library worldwide

The mandate for this study was broad and general. In brief, AIRMICS was selected to conduct research into the feasibility of establishing data processing procedures that would support the automated exchange of common personnel data among selected Army organizations. As originally scoped out by General Crosby, the study was to conduct exploratory research into alternatives for improved data resource sharing. In particular, the objective was to eliminate undesirable off-line data interfaces and to delineate the technology required for real-time, on-line data exchange throughout the Army personnel community. In confronting this challenge, AIRMICS proposed a two-phase study effort that would first assess the nature of the interface problem and then recommend alternative technologies for dealing with it. Phase One of the project had the objective of critically reviewing the current personnel data management systems in place at MILPERCEN and at other selected Army organizations that had major interfaces with MILPERCEN. Specifically, the research team proposed examining the commonalities in the personnel data being exchanged in a framework that would show the general flow of data through the various Army systems and sub-systems used for its management
Global States of a Distributed System( Book )

1 edition published in 1981 in English and held by 1 WorldCat member library worldwide

A global state of a distributed transaction system is consistent if no transactions are in progress. A global checkpoint is a transaction which must view a globally consistent system state for correct operation. We present an algorithm for adding global checkpoint transactions to an arbitrary distributed transaction system. The algorithm is non-intrusive in the sense that checkpoint transactions do not interfere with ordinary transactions in progress; however, the checkpoint transactions still produce meaningful results. (Author)
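
A minimal sketch of the consistency notion defined in the first sentence above; the event log and the probe times are invented for illustration and say nothing about the report's actual checkpoint algorithm:

    # A global state observed at time t is consistent when no transaction is in
    # progress, i.e. everything begun by t has also committed by t. The log below
    # is a made-up example.
    begin  = {"T1": 1, "T2": 4, "T3": 9}     # transaction -> begin time
    commit = {"T1": 3, "T2": 8, "T3": 12}    # transaction -> commit time

    def consistent_at(t):
        return all(not (begin[x] <= t < commit[x]) for x in begin)

    print(consistent_at(3.5))   # True: T1 has committed, T2 and T3 have not begun
    print(consistent_at(5))     # False: T2 is in progress
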
Queueing Networks with Finite Capacities by Ian Fuat Akyildiz( Book )

1 edition published in 1989 in English and held by 1 WorldCat member library worldwide

Performance has been a major issue in the design and implementation of systems such as computer systems, production systems, communication networks, and flexible manufacturing systems. The success or failure of such systems is judged by the degree to which performance objectives are met. Thus, tools and techniques for predicting performance measures are of great interest. In the last two decades it has been demonstrated several times that performance can be evaluated and/or predicted well by queueing models, which can be solved either by simulation or by analytical methods. Simulation is the most general and powerful technique for studying and predicting system performance. However, the high cost of running simulation programs and their uncertain statistical accuracy make simulation less attractive. Compared to simulation, analytical methods are more restrictive but have the advantage that numerical results are less costly to compute. Moreover, they can be implemented very quickly, making it easy to give interpretations to the relationships between model parameters and performance measures. Analytical methods have proved invaluable in modeling a variety of computer systems, computer networks, flexible manufacturing systems, etc. (kr)
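
As a hedged example of the closed-form analytical results the abstract alludes to, the sketch below evaluates the simplest finite-capacity model, M/M/1/K; the choice of this particular model and the parameter values are assumptions of this note, not necessarily systems treated in the report:

    # M/M/1/K: Poisson arrivals (rate lam), exponential service (rate mu), and a
    # buffer that holds at most K customers; arrivals finding it full are lost.
    def mm1k(lam, mu, K):
        """Return (blocking probability, throughput, mean number in system)."""
        rho = lam / mu
        if abs(rho - 1.0) < 1e-12:
            probs = [1.0 / (K + 1)] * (K + 1)               # uniform when rho == 1
        else:
            norm = (1 - rho) / (1 - rho ** (K + 1))
            probs = [norm * rho ** n for n in range(K + 1)]
        p_block = probs[K]                  # probability an arrival is turned away
        throughput = lam * (1 - p_block)    # accepted arrival rate = departure rate
        mean_n = sum(n * p for n, p in enumerate(probs))
        return p_block, throughput, mean_n

    print(mm1k(lam=0.8, mu=1.0, K=5))
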
The Average Length of Paths Embedded in Trees( Book )

1 edition published in 1977 in English and held by 1 WorldCat member library worldwide

Let A_n be defined so that the n x n array is embeddable in binary trees by dilating average path length by at most a factor of A_n. It is shown that the limit of A_n as n approaches infinity is 0. (Author)
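
One way to see why such a limit of zero is plausible (a sketch under this note's own assumptions, not the report's construction): if the n x n array is mapped one-to-one onto a complete binary tree with n*n nodes, every embedded distance is at most twice the tree height, roughly 4 log2 n, while the array's own average distance grows linearly in n, so the ratio of the two averages shrinks as n grows. The brute-force comparison below makes that concrete:

    # Rough numerical illustration (not from the report): average pairwise
    # distance in an n x n array versus in a complete binary tree on n*n nodes.
    from collections import deque

    def avg_distance(adj):
        """Average shortest-path distance over all node pairs, via BFS from each node."""
        nodes = list(adj)
        total = pairs = 0
        for s in nodes:
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            total += sum(dist.values())
            pairs += len(nodes) - 1
        return total / pairs

    def grid(n):
        """4-connected n x n array."""
        adj = {(i, j): [] for i in range(n) for j in range(n)}
        for (i, j) in adj:
            for nb in ((i + 1, j), (i, j + 1)):
                if nb in adj:
                    adj[(i, j)].append(nb)
                    adj[nb].append((i, j))
        return adj

    def complete_binary_tree(num_nodes):
        """Binary tree in heap numbering: the parent of node k is k // 2."""
        adj = {k: [] for k in range(1, num_nodes + 1)}
        for k in range(2, num_nodes + 1):
            adj[k].append(k // 2)
            adj[k // 2].append(k)
        return adj

    for n in (4, 8, 16, 32):
        a, t = avg_distance(grid(n)), avg_distance(complete_binary_tree(n * n))
        print(f"n={n:2d}  array avg={a:6.2f}  tree avg={t:6.2f}  ratio={t / a:.2f}")

Because the map is a bijection onto all n*n tree nodes, the printed ratio is exactly the factor by which any such embedding dilates (or, for large n, shortens) the average path length.
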
Heuristics for determining equivalence of program mutations by Douglas Baldwin( Book )

1 edition published in 1979 in English and held by 1 WorldCat member library worldwide

A mutant M of a program P is a program derived from P by making some well-defined simple change in P. Some initial investigations into automatically detecting equivalent mutants of a program are presented. The idea is based on the observation that compiler optimization can be considered a process of altering a program to an equivalent but more efficient mutant of the program. Thus the inverse of compiler optimization techniques can be seen as, in essence, equivalent mutant detection. (Author)
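
A minimal sketch of the 'inverse of compiler optimization' observation above: if an optimizer's simplification rules map a mutant back to the original expression, the mutant can be flagged as likely equivalent. The folding rules and the expressions below are a deliberately tiny subset invented for this note, not the heuristics studied in the report:

    # Normalize (constant-fold) both versions and compare: a mutant such as
    # 'y * 1' that folds back to 'y' is reported as (likely) equivalent.
    import ast

    def normalize(node):
        """Fold x*1, 1*x, x+0, 0+x inside an expression AST."""
        if isinstance(node, ast.BinOp):
            node.left, node.right = normalize(node.left), normalize(node.right)
            l, r = node.left, node.right
            if isinstance(node.op, ast.Mult):
                if isinstance(r, ast.Constant) and r.value == 1:
                    return l
                if isinstance(l, ast.Constant) and l.value == 1:
                    return r
            if isinstance(node.op, ast.Add):
                if isinstance(r, ast.Constant) and r.value == 0:
                    return l
                if isinstance(l, ast.Constant) and l.value == 0:
                    return r
        return node

    def likely_equivalent(expr_a, expr_b):
        na = normalize(ast.parse(expr_a, mode="eval").body)
        nb = normalize(ast.parse(expr_b, mode="eval").body)
        return ast.dump(na) == ast.dump(nb)

    print(likely_equivalent("x + y", "x + y * 1"))   # True: the mutant is equivalent
    print(likely_equivalent("x + y", "x - y"))       # False: the mutant may be killed
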
Automating the Exchange of Military Personnel Data among Selected Army Organizations. Phase 1. Scope and Characteristics of the Army Personnel Data Systems Interface( Book )

1 edition published in 1980 in English and held by 1 WorldCat member library worldwide

Section 1 outlines the background of the project, and discusses the conceptual difficulties inherent in scoping out a problem so massive in relation to the project resources and time available. Section 2 delineates a number of data base management concepts which the researchers believe to represent a reasonable view of how the Army wishes to proceed in the development of plans for new information systems to meet future manpower management needs. Section 3 reviews the findings that the researchers have made during the course of their orientation visits to the various organizations enumerated previously, and identifies what seem to be the major obstacles that the Army will face as it attempts to make the transition from the present to the future. Section 4 summarizes the researchers' assessment of the overall management information systems problem as it now exists and discusses the issues which must be resolved before the Army can successfully proceed to upgrade its current personnel data systems. Section 5, which concludes the report, contains a set of appendices containing data collected during the course of the study
A lower bound for the time to assure interactive consistency by Michael J Fischer( Book )

1 edition published in 1981 in English and held by 1 WorldCat member library worldwide

The problem of 'assuring interactive consistency' is defined in (PSL). It is assumed that there are n isolated processors, of which at most m are faulty. The processors can communicate by means of two-party messages, using a medium which is reliable and of negligible delay. The sender of a message is always identifiable by the receiver. Each processor p has a private value sigma(p). The problem is to devise an algorithm that will allow each processor p to compute a value for each processor r, such that (a) if p and r are nonfaulty then p computes r's private value sigma(r), and (b) all the nonfaulty processors compute the same value for each processor r. It is shown in (PSL) that if n < 3m + 1, then there is no algorithm which assures interactive consistency. On the other hand, if n >= 3m + 1, then an algorithm does exist. The algorithm presented in (PSL) uses m + 1 rounds of communication, and thus can be said to require 'time' m + 1. An obvious question is whether fewer rounds of communication suffice to solve the problem. In this paper, we answer this question in the negative. That is, we show that any algorithm which assures interactive consistency in the presence of m faulty processors requires at least m + 1 rounds of communication. (Author)
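
The m + 1 round upper bound can be made concrete with a small simulation of the recursive 'oral messages' exchange associated with the (PSL) line of work; the faulty-behaviour model, the binary values, and the example configuration are choices of this sketch rather than details taken from the paper:

    # Hedged simulation sketch: n = 4 processors tolerate m = 1 fault
    # (n >= 3m + 1) using m + 1 = 2 rounds of two-party messages.
    def majority(vals):
        return 1 if 2 * sum(vals) > len(vals) else 0

    def sent(sender, value, dest, faulty):
        """A faulty sender transmits arbitrary (here: pseudo-random) bits."""
        return (value + hash((sender, dest))) % 2 if sender in faulty else value

    def om(m, commander, value, lieutenants, faulty):
        """Return {lieutenant: decided value} after m + 1 rounds."""
        received = {p: sent(commander, value, p, faulty) for p in lieutenants}
        if m == 0:
            return received
        # Every lieutenant relays what it received, acting as the commander of a
        # smaller instance among the remaining lieutenants.
        relayed = {p: om(m - 1, p, received[p],
                         [q for q in lieutenants if q != p], faulty)
                   for p in lieutenants}
        return {p: majority([received[p]] +
                            [relayed[q][p] for q in lieutenants if q != p])
                for p in lieutenants}

    faulty = {2}
    decisions = om(1, commander=0, value=1, lieutenants=[1, 2, 3], faulty=faulty)
    print({p: v for p, v in decisions.items() if p not in faulty})   # loyal ones agree on 1
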
Mutation analysis as a tool for software quality assurance by Richard A DeMillo( Book )

1 edition published in 1980 in English and held by 1 WorldCat member library worldwide

A protocol for using mutation analysis as a tool for software quality assurance is described. The results of experiments on the reliability of this method are also described. (Author)
Capitalizing on Failure through Case-Based Inference( Book )

1 edition published in 1987 in English and held by 1 WorldCat member library worldwide

Previous failures to solve problems can be a powerful aid in helping a problem solver to improve. When prior cases in which an error was made are recalled (e.g., in common-sense mediation of everyday disputes or in menu planning), the reasoner may consider whether the same potential for error exists in the new case. As a result, reasoning is directed to that part of the current problem that was responsible for the previous error, sometimes changing the problem solver's focus. Focus may also be directed toward gathering knowledge to evaluate the potential for error in the current case. A case with an error may also suggest a correct solution for the new problem. The combination of these helps the problem solver to avoid repeating mistakes and suggests shortcuts in reasoning that avoid the trial and error of previous cases. Keywords: Artificial intelligence; Cognitive science; Knowledge theory; Man-machine systems
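
A toy sketch of failure-driven case retrieval in the spirit of the description above; the case structure, the feature sets, and the similarity measure are all invented for illustration:

    # Each stored case records, alongside its solution, the error that was made,
    # so recalling a similar case also recalls the mistake to watch for.
    cases = [
        {"features": {"dispute", "two_parties", "shared_resource"},
         "solution": "divide the resource equally",
         "error": "ignored that the parties valued different parts of the resource"},
        {"features": {"menu", "guest_allergy"},
         "solution": "serve the planned dish",
         "error": "failed to check ingredients against the allergy"},
    ]

    def recall(problem_features):
        """Return the stored case sharing the most features with the new problem."""
        return max(cases, key=lambda c: len(c["features"] & problem_features))

    new_problem = {"dispute", "two_parties", "shared_resource", "time_pressure"}
    case = recall(new_problem)
    print("recalled error to avoid:", case["error"])
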
Portability of Large COBOL Programs: The COBOL Programmer's Workbench

1 edition published in 1979 in English and held by 0 WorldCat member libraries worldwide

The COBOL Programmer's Workbench is a fully integrated collection of automated software tools designed to substantially aid in the design, implementation, test, and maintenance of COBOL data processing systems, especially those that must run on a variety of target host operating environments. The Workbench also assists in the preparation and maintenance of all supporting documentation. One of the most important capabilities of the Workbench is the automatic preparation of a set of equivalent but compiler-unique versions of a baseline program that has been written in Workbench COBOL. The research here included an investigation of the problems of converting a baseline program into compiler-unique versions, an initial study of the use of reusable modules of in-line COBOL code, a limited feasibility demonstration of these capabilities, and a preliminary study for the design of COBOL
 
Audience Level
Audience level: 0.88 (from 0.71 for Queueing Net ... to 0.99 for Research P ...)

Languages
English (26)