WorldCat Identities

CARNEGIE-MELLON UNIV PITTSBURGH PA Dept. of COMPUTER SCIENCE

Overview
Works: 989 works in 1,045 publications in 1 language and 1,114 library holdings
Classifications: QA268, 001.642
Most widely held works by CARNEGIE-MELLON UNIV PITTSBURGH PA Dept. of COMPUTER SCIENCE
An efficient context-free parsing algorithm. by Jay Earley( Book )

1 edition published in 1968 in English and held by 3 WorldCat member libraries worldwide

This paper describes a parsing algorithm for context-free grammars, which is of interest because of its efficiency. The algorithm runs in time proportional to n cubed (where n is the length of the input string) on all context-free grammars. It runs in time proportional to n squared on unambiguous grammars, and we actually show that it is n squared on a considerably larger class of grammars than this, but not on all grammars. These two results are not new, but they have been attained previously by two different algorithms, both of which require the grammar to be put into a special form before they are applicable. The algorithm runs in linear time on a class of grammars which includes LR(K) grammars and finite unions of them (and the LR(K) grammars include those of essentially all published algorithms which run in time n), and a large number of other grammars. These time n grammars in a practical sense include almost all unambiguous grammars, many ambiguous ones, and probably all programming language grammars. We present a method for compiling a recognizer from a time n grammar which runs much faster than our original algorithm would have, working directly with the grammar as it is recognized. We show some undecidability results about the class of grammars that are compilable by this method. (Author)
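The chart-based recognizer the abstract describes can be sketched compactly. The following is a minimal illustration of the general idea, assuming a toy arithmetic grammar; the grammar, names, and structure here are invented for illustration, not taken from the paper:

```python
# A minimal sketch of a chart-based context-free recognizer (illustrative
# only: the toy grammar and all names are invented, not the paper's code).
GRAMMAR = {                      # nonterminal -> list of right-hand sides
    "S": [("S", "+", "T"), ("T",)],
    "T": [("a",)],
}
START = "S"

def recognize(tokens):
    """Return True iff tokens derive from START; O(n^3) in the worst case."""
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]   # items: (lhs, rhs, dot, origin)
    for rhs in GRAMMAR[START]:
        chart[0].add((START, rhs, 0, 0))
    for i in range(n + 1):
        changed = True
        while changed:                      # predictor/completer to a fixpoint
            changed = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in GRAMMAR:
                    for prod in GRAMMAR[rhs[dot]]:              # predictor
                        if (rhs[dot], prod, 0, i) not in chart[i]:
                            chart[i].add((rhs[dot], prod, 0, i))
                            changed = True
                elif dot == len(rhs):
                    for l2, r2, d2, o2 in list(chart[origin]):  # completer
                        if (d2 < len(r2) and r2[d2] == lhs
                                and (l2, r2, d2 + 1, o2) not in chart[i]):
                            chart[i].add((l2, r2, d2 + 1, o2))
                            changed = True
        if i < n:                           # scanner: match next input token
            for lhs, rhs, dot, origin in chart[i]:
                if dot < len(rhs) and rhs[dot] not in GRAMMAR and rhs[dot] == tokens[i]:
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == START and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[n])

print(recognize(list("a+a")))  # True
print(recognize(list("a+")))   # False
```

The n-cubed bound comes from the chart: there are n+1 item sets, each holding O(n) items (one per origin position), and completing an item scans one earlier set.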
Lecture Notes in Computer Science by Gerhard Goos( Book )

1 edition published in 1984 in English and held by 3 WorldCat member libraries worldwide

Logics of Programs, as a field of study, touches on a wide variety of activities in computer science and mathematics. It draws on mathematical foundations of formal logic, semantics, and complexity theory, and finds practical application in the areas of program specification, verification, and programming language design. The Logics of Programs Workshop was conceived as a forum for the informal sharing of problems, results, techniques, and new applications in these areas, with special emphasis on bridging whatever abyss may exist between the theoreticians and the pragmatists. The workshop was held on June 6-8, 1983 at Carnegie Mellon University. Thirty-eight technical papers were presented, representing the entire spectrum of activity in Logics of Programs, from model theory to languages for the design of digital circuits
Adaptive Systems for the Dynamic Run-time Optimization of Programs by Gilbert Joseph Hansen( Book )

1 edition published in 1974 in English and held by 3 WorldCat member libraries worldwide

This thesis investigates adaptive compiler systems that perform, during program execution, code optimizations based on the dynamic behavior of the program as opposed to current approaches that employ a fixed code generation strategy, i.e., one in which a predetermined set of code optimizations are applied at compile-time to an entire program. The main problems associated with such adaptive systems are studied in general: which optimizations to apply to what parts of the program and when. Two different optimization strategies result: an ideal scheme which is not practical to implement, and a more basic scheme that is. The design of a practical system is discussed for the FORTRAN IV language. The system was implemented and tested with programs having different behavioral characteristics. (Modified author abstract)
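The core idea the abstract contrasts with a fixed compile-time strategy can be caricatured in a few lines. This is an invented sketch (names and threshold are illustrative assumptions, not Hansen's system): count executions of each code region at run time and switch to an optimized version only once the region proves itself hot.

```python
# Toy sketch of adaptive run-time optimization (invented names/threshold).
HOT_THRESHOLD = 3

class AdaptiveRunner:
    def __init__(self):
        self.counts = {}      # region name -> execution count
        self.optimized = {}   # region name -> optimized version

    def run(self, name, fn, optimize):
        """Execute region `name`, swapping in optimize(fn) once it is hot."""
        if name in self.optimized:
            return self.optimized[name]()
        self.counts[name] = self.counts.get(name, 0) + 1
        if self.counts[name] >= HOT_THRESHOLD:
            self.optimized[name] = optimize(fn)
        return fn()

def constant_fold(fn):
    """Toy 'optimization': cache the region's loop-invariant result."""
    value = fn()
    return lambda: value

runner = AdaptiveRunner()
results = [runner.run("inner_loop", lambda: sum(range(10)), constant_fold)
           for _ in range(5)]
print(results)  # [45, 45, 45, 45, 45]
```

After the third call the region is never re-executed; the counter stops at the threshold because later calls take the optimized path.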
The description, simulation, and automatic implementation of digital computer processors by John A Darringer( Book )

2 editions published in 1969 in English and held by 3 WorldCat member libraries worldwide

The dissertation reports an investigation in the area of automated computer design. A language is developed for describing the behavior of digital computer processors irrespective of their eventual implementation. Algol 60 is used as a base language and several features are added including (1) register data types and operators to allow the convenient and accurate description of the register computations, which occur in all processors, (2) 'time blocks' to permit the specification of the delays involved in operations, and (3) 'if ever statements' to allow the description of parallel operations. Programs are presented for compiling a description into a subset of Algol for simulation and for translating it into a hardware specification for actual implementation. The hardware specification consists of a list of hardware elements, a table of interconnections among the elements, and a state table description of a controller that will sequence the flow of data through the hardware network. A small existing computer is described at several levels in the language, the processor is simulated and implemented at each level, and finally the performance of the programs is evaluated. (Author)
Support for Distributed Transactions in the TABS (Transaction Based Systems) Prototype( Book )

1 edition published in 1984 in English and held by 3 WorldCat member libraries worldwide

The TABS (Transaction Based Systems) Prototype is an experimental facility at Carnegie-Mellon University that provides operating system-level support for distributed transactions that operate on shared abstract types. It is hoped that the facility will simplify the construction of highly available and reliable distributed applications. The paper describes the TABS system model, the TABS prototype's structure, and certain aspects of its operation. The paper concludes with a discussion of the status of the project and a preliminary evaluation
Automatically generating abstractions for problem solving by Craig A Knoblock( Book )

2 editions published in 1991 in English and held by 3 WorldCat member libraries worldwide

A major source of inefficiency in automated problem solvers is their inability to decompose problems and work on the more difficult parts first. This issue can be addressed by employing a hierarchy of abstract problem spaces to focus the search. Instead of solving a problem in the original problem space, a problem is first solved in an abstract space, and the abstract solution is then refined at successive levels in the hierarchy. While this use of abstraction can significantly reduce search, it is often difficult to find good abstractions, and the abstractions must be manually engineered by the designer of a problem domain
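The abstract-then-refine strategy described above can be illustrated on a toy grid domain (this sketch and its domain are invented for illustration; it is not Knoblock's system): solve the problem first in a coarsened abstract space, then refine the abstract solution, restricting base-level search to the regions the abstract plan passes through.

```python
from collections import deque

# Illustrative sketch of planning with an abstraction hierarchy (toy domain).
GRID = [                 # '#' = obstacle, '.' = free
    "....#...",
    "....#...",
    "........",
    "........",
]
H, W = len(GRID), len(GRID[0])
BLOCK = 2                # abstraction: each 2x2 block is one abstract state
BH, BW = H // BLOCK, W // BLOCK

def bfs(start, goal, passable):
    """Shortest path over 2-D states admitted by `passable`, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        r, c = path[-1]
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if step not in seen and passable(*step):
                seen.add(step)
                queue.append(path + [step])
    return None

def cell_open(r, c):
    return 0 <= r < H and 0 <= c < W and GRID[r][c] == "."

def block_open(br, bc):
    # An abstract state is passable only if every concrete cell in it is open.
    return (0 <= br < BH and 0 <= bc < BW and
            all(GRID[br * BLOCK + i][bc * BLOCK + j] == "."
                for i in range(BLOCK) for j in range(BLOCK)))

start, goal = (0, 0), (0, 7)
abstract = bfs((0, 0), (0, 3), block_open)       # blocks of start and goal
corridor = set(abstract)                         # abstract solution
refined = bfs(start, goal,                       # refine inside the corridor
              lambda r, c: cell_open(r, c) and (r // BLOCK, c // BLOCK) in corridor)
print(len(refined) - 1)  # 11 moves: the wall forces a detour through row 2
```

The abstract search prunes the blocked region wholesale, so the concrete search never explores cells outside the corridor of abstract states.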
Information processing research by Allen Newell( Book )

3 editions published between 1979 and 1988 in English and held by 3 WorldCat member libraries worldwide

This report documents DARPA-supported basic research in Carnegie-Mellon University's Computer Science Department during the period 1 January 1981 through December 1983, extended to 31 December 1984. Each chapter discusses one of seven major research areas. Sections within a chapter present the area's general context, the specific problems addressed, our contributions and their significance, and an annotated bibliography. Keywords: Distributed Processing, Image Understanding, Machine Intelligence, Distributed Sensor Network, Cooperative User Interface, Integrated VLSI Systems, Natural Language, Shadow Geometry, Gradient Space, Code Optimization Compiling Techniques, Parallel Architectures Network Interprocess Communication, Dynamic Load Balancing Flexible Parsing, Integrated Speech, Natural Language Algorithm Design Theory
Worst-Case Analyses of Self-Organizing Sequential Search Heuristics by Jon Louis Bentley( Book )

1 edition published in 1983 in English and held by 3 WorldCat member libraries worldwide

The performance of sequential search can be enhanced by the use of heuristics that move elements closer to the front of the list as they are found. Previous analyses have characterized the performance of such heuristics probabilistically. In this paper we show that the heuristics can also be analyzed in the worst-case sense, and that the relative merit of the heuristics under this analysis differs from that in the probabilistic analyses. Simulations show that the relative merit of the heuristics on real data is closer to that of the new worst-case analyses than to that of the previous probabilistic analyses. (Author)
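Two classic self-organizing heuristics of the kind the paper analyzes can be sketched as follows (illustrative code, not the paper's): move-to-front relocates a found element to the head of the list, while transpose swaps it one position forward.

```python
# Self-organizing sequential search heuristics (illustrative sketch).
def search_mtf(lst, key):
    """Sequential search; on success, move the found element to the front."""
    i = lst.index(key)           # raises ValueError if key is absent
    lst.insert(0, lst.pop(i))
    return i + 1                 # comparisons used

def search_transpose(lst, key):
    """Sequential search; on success, swap the element one slot forward."""
    i = lst.index(key)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return i + 1

items = ["a", "b", "c", "d"]
cost = search_mtf(items, "d")
print(cost, items)   # 4 ['d', 'a', 'b', 'c']
```

Probabilistic analyses tend to favor transpose's conservatism, while worst-case (amortized) analyses favor move-to-front's rapid adaptation; the paper's point is that the two styles of analysis rank the heuristics differently.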
General Theory of Optimal Error Algorithms and Analytic Complexity. Part B. Iterative Information Model by J. F Traub( Book )

2 editions published between 1977 and 1978 in English and held by 3 WorldCat member libraries worldwide

This is the second of a series of papers in which we construct an information based general theory of optimal error algorithms and analytic computational complexity and study applications of the general theory. In our first paper we studied a 'general information' model; here we study an 'iterative information' model. We give a general paradigm, based on the pre-image set of an information operator, for obtaining a lower bound on the error of any algorithm using this information. We show that the order of information provides an upper bound on the order of any algorithm using this information. This upper bound order leads to a lower bound on the complexity index
ALPHARD: Toward a Language to Support Structured Programs by William Allan Wulf( Book )

1 edition published in 1974 in English and held by 3 WorldCat member libraries worldwide

This report discusses the programming language tools needed to support the expression of 'well-structured' programs. In particular it deals with the tools needed to express abstractions and their realizations; to this end it introduces the concept of a 'form' to subsume the notions of type (mode), macro, procedure, generator, and coercion. An extended example is given together with the sketch of a proof of the example. The proof is included to support the contention that formal verification is substantially simplified when the abstractions and their realization are retained in the program text. (Author)
RAIDframe: A Rapid Prototyping Tool for RAID Systems by W. V Courtright II( Book )

2 editions published in 1997 in English and held by 2 WorldCat member libraries worldwide

Redundant disk arrays provide highly-available, high performance disk storage to a wide variety of applications. Because these applications often have distinct cost, performance, capacity and availability requirements, researchers continue to develop new array architectures. RAIDframe was developed to assist researchers in the implementation and evaluation of these new architectures. It was designed specifically to reduce the burden of implementation by restricting code changes to mapping, algorithms and other functions that are known to be specific to an array architecture. Algorithms are executed using a general mechanism which automates the recovery from device errors, such as a failed disk read. RAIDframe enables a single implementation to be evaluated in a self-contained simulator, or against real disks as either a user process or a functional device driver
Filesystems for network-attached secure disks by Garth A Gibson( Book )

2 editions published in 1997 in English and held by 2 WorldCat member libraries worldwide

Network-attached storage enables network-striped data transfers directly between client and storage to provide clients with scalable bandwidth on large transfers. Network-attached storage also decouples policy and enforcement of access control, avoiding unnecessary reverification of protection checks, reducing file manager work and increasing scalability. It eliminates the expense of a server computer devoted to copying data between peripheral network and client network. This architecture better matches storage technology's sustained data rates, now 80 Mb/s and growing at 40% per year. Finally, it enables self-managing storage to counter the increasing cost of data management. The availability of cost-effective network-attached storage depends on it becoming a storage commodity, which in turn depends on its utility to a broad segment of the storage market. Specifically, multiple distributed and parallel file systems must benefit from network-attached storage's requirement for secure, direct access between client and storage, for reusable, asynchronous access protection checks, and for increased license to efficiently manage underlying storage media
The Measured Network Traffic of Compiler-Parallelized Programs by Peter A Dinda( Book )

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

Using workstations interconnected by a LAN as a distributed parallel computer is becoming increasingly common. At the same time, parallelizing compilers are making such systems easier to program. Understanding the traffic of compiler-parallelized programs running on networks is vital for network planning and for designing quality of service interfaces and mechanisms for new networks. To provide a basis for such understanding, we measured the traffic of six dense-matrix applications written in a dialect of High Performance Fortran and compiled with the Fx parallelizing compiler. The traffic of these programs is profoundly different from typical network traffic. In particular, the programs exhibit global collective communication patterns, correlated traffic along many connections, constant burst sizes, and periodic burstiness with bandwidth dependent periodicity. The traffic of these programs can be characterized by the power spectra of their instantaneous average bandwidth. These spectra can be simplified to form analytic models to generate similar traffic
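The spectral characterization mentioned above can be illustrated on a synthetic series (this sketch is not the paper's code; the burst pattern is invented): periodic burstiness shows up as power concentrated at the harmonics of the burst period.

```python
import cmath

# Power spectrum of an instantaneous-bandwidth series (illustrative sketch).
def power_spectrum(x):
    """Periodogram of x via a direct DFT (O(n^2), fine for a sketch)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n)]

series = [10.0, 0.0, 0.0, 0.0] * 8          # one burst every 4 samples
spec = power_spectrum(series)
peak = max(range(1, len(spec)), key=spec.__getitem__)   # skip the DC term
print(peak)  # 8, i.e. 32 samples / 4-sample period
```

All the non-DC power sits at bins 8, 16, and 24, the harmonics of the 4-sample burst period, which is what makes such spectra easy to reduce to a few analytic parameters.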
Improving demonstration using better interaction techniques by Richard McDaniel( Book )

2 editions published in 1997 in English and held by 2 WorldCat member libraries worldwide

Programming by demonstration (PBD) can be used to create tools and methods that eliminate the need to learn difficult computer languages. Gamut is a new PBD tool that can create a broader range of interactive software, including games, simulations, and educational software, than other PBD tools. To do this, Gamut uses advanced interaction techniques that make it easier for a software author to express all needed aspects of one's program. These techniques include a simplified way to demonstrate new examples, called nudges, and a way to highlight objects to show they are important. Also, Gamut includes new objects and metaphors like the deck of cards metaphor for demonstrating collections of objects and randomness, guide objects for drawing relationships that the system would find too difficult to guess, and temporal ghosts which simplify showing relationships with the recent past
Formalization and automatic derivation of code generators by R. G. G Cattell( Book )

2 editions published in 1978 in English and held by 2 WorldCat member libraries worldwide

This work is concerned with automatic derivation of code generators, which translate a parse-tree-like representation of programs into sequences of instructions for a computer defined by a machine description. In pursuing this goal, the following are presented: (1) a model of machines and a notation for their description; (2) a model of code generation, and its use in optimizing compilers; and (3) an axiom system of tree equivalences, and an algorithm for derivation of translators based on tree transformations (this is the main work of the thesis). The algorithms and representations are implemented to demonstrate their practicality as a means for generation of code generators. (Author)
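The translation step the abstract describes, from a parse-tree-like representation to an instruction sequence, can be caricatured for a toy two-address machine (the mini-machine and all names here are invented for illustration; this is not Cattell's system):

```python
import itertools

# Toy code generator: match each tree node against a machine pattern
# and emit instructions bottom-up (illustrative sketch, invented machine).
def gen(tree, out, regs):
    """Emit code for `tree` into `out`; return the result register."""
    if isinstance(tree, int):                  # leaf pattern: constant
        r = next(regs)
        out.append(f"LOAD r{r}, #{tree}")
        return r
    op, left, right = tree                     # interior pattern: (op, l, r)
    a = gen(left, out, regs)
    b = gen(right, out, regs)
    out.append(f"{'ADD' if op == '+' else 'MUL'} r{a}, r{b}")  # two-address
    return a

code = []
gen(("+", 1, ("*", 2, 3)), code, itertools.count())
print("\n".join(code))
```

A derived code generator in the thesis's sense would obtain the per-node patterns from a machine description rather than hard-coding them, using tree-equivalence axioms to bridge gaps between program trees and instruction trees.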
The Fox Project: Advanced Development of Systems Software( Book )

10 editions published between 1996 and 1999 in English and held by 2 WorldCat member libraries worldwide

The long term objectives of the Carnegie Mellon Fox Project are to improve the design and construction of systems software and to further the development of advanced programming language technology. We use principles and techniques from the mathematical foundations of programming languages, including semantics, type theory, and logic, to design and implement systems software, including operating systems, network protocols, and distributed systems. Much of the implementation work is conducted in the Standard ML (SML) language, a modern functional programming language that provides polymorphism, first class functions, exception handling, garbage collection, a parameterized module system, static typing, and a formal semantics. This Project involves several faculty members and spans a wide range of research areas, from (1) advanced compiler development to (2) language design to (3) software system safety infrastructure
Situation-Dependent Learning for Interleaved Planning and Robot Execution by Karen Zita Haigh( Book )

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

This dissertation presents the complete integrated planning, executing and learning robotic agent ROGUE. Physical domains are notoriously hard to model completely and correctly. Robotics researchers have developed learning algorithms to successfully tune operational parameters. Instead of improving low-level actuator control, our work focuses on the planning stages of the system. The thesis provides techniques to directly process execution experience, and to learn to improve planning and execution performance. ROGUE accepts multiple, asynchronous task requests, and interleaves task planning with real-world robot execution. This dissertation describes how ROGUE prioritizes tasks, suspends and interrupts tasks, and opportunistically achieves compatible tasks. We present how ROGUE interleaves planning and execution to accomplish its tasks, monitoring and compensating for failure and changes in the environment. ROGUE analyzes execution experience to detect patterns in the environment that affect plan quality. ROGUE extracts learning opportunities from massive, continual, probabilistic execution traces. ROGUE then correlates these learning opportunities with environmental features, thus detecting patterns in the form of situation-dependent rules. We present the development and use of these rules for two very different planners: the path planner and the task planner. We present empirical data to show the effectiveness of ROGUE
Natural programming : project overview and proposal by Brad A Myers( Book )

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

End-users must write programs to control many different kinds of applications. Examples include multimedia authoring, controlling robots, defining manufacturing processes, setting up simulations, programming agents, scripting, etc. The languages used today for these tasks are usually difficult to learn and are based on professional programming languages. This is in spite of years of research highlighting the problems with these languages for novice programmers. The Natural Programming Project is developing general principles, methods, and programming language designs that will significantly reduce the amount of learning and effort needed to write programs for people who are not professional programmers. These principles are based on a thorough analysis of previous empirical studies of programmers, as well as new studies designed to discover the most natural programming paradigms. Our proposed research is to extend these results, and apply them to different domains. The result will be new programming languages and environments that are demonstrably superior for users
MacFS: A Portable Macintosh File System Library by Peter A Dinda( Book )

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

We have created a Macintosh file system library which is portable to a variety of operating systems and platforms. It presents a programming interface sufficient for creating a user level API as well as file system drivers for operating systems that support them. We implemented and tested such a user level API and utility programs based on it as well as an experimental Unix Virtual File System. We describe the Macintosh Hierarchical File System and our implementation and note that the design is not well suited to reentrancy and that its complex data structures can lead to slow implementations in multiprogrammed environments. Performance measurements show that our implementation is faster than the native Macintosh implementation at creating, deleting, reading and writing files with small request sizes, but slower than the Berkeley Fast File System (FFS). However, the native Macintosh implementation can perform large read and write operations faster than either our implementation or FFS
Mediating among diverse data formats by John Mark Ockerbloom( Book )

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

The growth of the Internet and other global networks has made large quantities of data available in a wide variety of formats. Unfortunately, most programs are only able to interpret a small number of formats, and cannot take advantage of data in unfamiliar formats. As the Internet grows, new applications arise, and legacy data persists, the diversity of formats will continue to increase, worsening the problem. Current approaches to data diversity fail to scale up gracefully, or fail to handle the full heterogeneity of data and data sources found on the Internet. I have developed a data model and a system of mediator agents that support the widespread use of diverse data formats much more effectively than current approaches do. In this thesis, I describe and evaluate the design and implementation of this data model, known as the Typed Object Model (or TOM), and the system of mediators that supports it. TOM is a read-only object-oriented data model that describes the abstract structure of data formats, their concrete representations, and relations between formats. TOM is supported by a distributed network of mediator agents (known as type brokers) that maintain information about data formats, and provide uniform access to conversions and other operations on those formats. Type brokers plan complex conversion strategies that can involve multiple servers, and ensure that conversions preserve information needed by clients. Data providers can also register new formats, operations, and conversions with type brokers in a decentralized manner, and make them usable anywhere on the Internet. TOM type brokers now work with hundreds of data formats, often through integration of off-the-shelf programs. TOM also supports a wide variety of applications and interfaces, such as the Web-based TOM Conversion Service, that have users worldwide
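The conversion planning the abstract attributes to type brokers can be sketched as search over a graph whose nodes are formats and whose edges are registered converters (a hypothetical sketch; the format names and converter table below are invented, not TOM's actual registry):

```python
from collections import deque

# Hypothetical converter registry: format -> formats it can be converted to.
CONVERTERS = {
    "troff": ["ps"],
    "ps": ["pdf", "text"],
    "pdf": ["text"],
    "html": ["text"],
}

def plan(src, dst):
    """Return a shortest chain of formats from src to dst, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in CONVERTERS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan("troff", "text"))  # ['troff', 'ps', 'text']
```

A real broker would additionally weight edges by the information each conversion preserves, so that a longer chain can beat a shorter one that discards structure a client needs.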
 
Audience level: 0.81 (from 0.61 for Formalizat ... to 0.97 for The Fox Pr ...)

Languages
English (43)