WorldCat Identities

Fahlman, Scott E.

Overview
Works: 21 works in 73 publications in 1 language and 588 library holdings
Classifications: QA76.73.L23, 001.6424
Publication Timeline
[Timeline chart: publications about Scott E Fahlman and publications by Scott E Fahlman]
Most widely held works by Scott E Fahlman
NETL, a system for representing and using real-world knowledge by Scott E Fahlman ( Book )
22 editions published between 1979 and 2003 in English and Undetermined and held by 451 WorldCat member libraries worldwide
COMMON LISP : the language by Guy L Steele ( Book )
9 editions published between 1984 and 1990 in English and Undetermined and held by 46 WorldCat member libraries worldwide
On the shelf, 18 May 2005
A system for representing and using real-world knowledge by Scott E Fahlman ( Book )
6 editions published between 1975 and 1977 in English and held by 12 WorldCat member libraries worldwide
This report describes a knowledge-base system in which the information is stored in a network of small parallel processing elements--node and link units--which are controlled by an external serial computer. Discussed is NETL, a language for storing real-world information in such a network. A simulator for the parallel network system has been implemented in MACLISP, and an experimental version of NETL is running on this simulator. A number of test-case results and simulated timings will be presented. (Author)
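As a rough illustration of the node-and-link scheme the abstract describes, here is a short Python sketch of marker propagation over such a network; the class names, the "is-a" link type, and the toy facts are all invented for the example and are not taken from the report.

    # Illustrative sketch of a NETL-style node/link network: a serial controller
    # broadcasts simple commands and every node/link "unit" applies them; this
    # simulator loops serially over what the proposed hardware would do in parallel.

    class Node:
        def __init__(self, name):
            self.name = name
            self.markers = set()          # marker bits held by this node

    class Link:
        def __init__(self, kind, frm, to):
            self.kind = kind              # e.g. "is-a"
            self.frm = frm
            self.to = to

    class Network:
        def __init__(self):
            self.nodes, self.links = {}, []

        def node(self, name):
            return self.nodes.setdefault(name, Node(name))

        def link(self, kind, frm, to):
            self.links.append(Link(kind, self.node(frm), self.node(to)))

        def set_marker(self, name, m):
            self.node(name).markers.add(m)

        def propagate(self, kind, m):
            """Broadcast command: every link of the given kind copies marker m
            from its source node to its target node, repeated until quiescent."""
            changed = True
            while changed:
                changed = False
                for ln in self.links:
                    if ln.kind == kind and m in ln.frm.markers and m not in ln.to.markers:
                        ln.to.markers.add(m)
                        changed = True

    net = Network()
    net.link("is-a", "Clyde", "elephant")
    net.link("is-a", "elephant", "mammal")
    net.set_marker("Clyde", 1)
    net.propagate("is-a", 1)
    print(sorted(n.name for n in net.nodes.values() if 1 in n.markers))
    # ['Clyde', 'elephant', 'mammal']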
A planning system for robot construction tasks by Scott E Fahlman ( Book )
5 editions published in 1973 in English and Undetermined and held by 12 WorldCat member libraries worldwide
The paper describes a system which plans the construction of specified structures out of simple objects such as toy blocks. The planning is done using a 3-D model of the work space. A powerful control structure allows the use of such techniques as sub-assembly, temporary scaffolding, and counterweights in the construction. (Author)
The hashnet interconnection scheme by Scott E Fahlman ( Book )
4 editions published in 1980 in English and Undetermined and held by 9 WorldCat member libraries worldwide
An empirical study of learning speed in back-propagation networks by Scott E Fahlman ( Book )
2 editions published in 1988 in English and held by 8 WorldCat member libraries worldwide
The Cascade-Correlation learning architecture by Scott E Fahlman ( Book )
4 editions published in 1990 in English and held by 7 WorldCat member libraries worldwide
Abstract: "Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network."
Internal design of CMU Common Lisp on the IBM RT PC by David B McDonald ( Book )
3 editions published between 1987 and 1988 in English and held by 7 WorldCat member libraries worldwide
Thesis progress report : a system for representing and using real-world knowledge by Scott E Fahlman ( Book )
2 editions published in 1975 in Undetermined and English and held by 6 WorldCat member libraries worldwide
Information processing research by Allen Newell ( Book )
1 edition published in 1992 in English and held by 6 WorldCat member libraries worldwide
Abstract: "This report documents a broad program of basic and applied information processing research conducted by Carnegie Mellon's School of Computer Science during the period 15 July 1987 through 14 July 1990, and extended through 31 December 1990. We present in detail our seven major research areas: Artificial Intelligence, Image Understanding, Reliable Distributed Systems, Programming Environments, Reasoning About Programs, Uniform Workstation Interfaces, and Very Large Scale Integration. Sections in each chapter present the area's general research context, the specific problems we addressed, our contributions and their significance, and a bibliography for each chapter."
Learning with limited numerical precision using the cascade-correlation algorithm by Markus Hoehfeld ( Book )
2 editions published in 1991 in English and held by 6 WorldCat member libraries worldwide
Abstract: "A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms. We present an empirical study of the effects of limited precision in Cascade-Correlation networks on three different learning problems. We show that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below 12 bits. We introduce techniques for dynamic rescaling and probabilistic rounding that allow reliable convergence down to 6 bits of precision, with only a gradual reduction in the quality of the solutions."
Reducing network depth in the cascade-correlation learning architecture by Shumeet Baluja ( Book )
2 editions published in 1994 in English and held by 6 WorldCat member libraries worldwide
Abstract: "The Cascade-Correlation learning algorithm constructs a multi-layer artificial neural network as it learns to perform a given task. The resulting network's size and topology are chosen specifically for this task. In the resulting 'cascade' networks, each new hidden unit receives incoming connections from all input and pre-existing hidden units. In effect, each new unit adds a new layer to the network. This allows Cascade-Correlation to create complex feature detectors, but it typically results in a network that is deeper, in terms of the longest path from input to output, than is necessary to solve the problem efficiently. In this paper we investigate a simple variation of Cascade-Correlation that will build deep nets if necessary, but that is biased toward minimizing network depth. We demonstrate empirically, across a range of problems, that this simple technique can reduce network depth, often dramatically. However, we show that this technique does not, in general, reduce the total number of weights or improve the generalization abiity of the resulting networks."
The Recurrent Cascade-Correlation architecture by Scott E Fahlman ( Book )
2 editions published in 1991 in English and held by 5 WorldCat member libraries worldwide
Abstract: "Recurrent Cascade-Correlation (RCC) is a recurrent version of the Cascade-Correlation learning architecture of Fahlman and Lebiere [Fahlman, 1990]. RCC can learn from examples to map a sequence of inputs into a desired sequence of outputs. New hidden units with recurrent connections are added to the network one at a time, as they are needed during training. In effect, the network builds up a finite-state machine tailored specifically for the current problem. RCC retains the advantages of Cascade-Correlation: fast learning, good generalization, automatic construction of a near-minimal multi-layered network, and the ability to learn complex behaviors through a sequence of simple lessons
The Intersection Problem by Scott E Fahlman ( Book )
2 editions published in 1975 in English and held by 2 WorldCat member libraries worldwide
This paper is intended as a supplement to AI MEMO 331, "A System for Representing and Using Real-World Knowledge". It is an attempt to redefine and clarify what I now believe the central theme of the research to be. Briefly, I will present the following points: 1. The operation of set-intersection, performed upon large pre-existing sets, plays a pivotal role in the processes of intelligence. 2. Von Neumann machines intersect large sets very slowly. Attempts to avoid or speed up these intersections have obscured and distorted the other, non-intersection AI problems. 3. The parallel hardware system described in the earlier memo can be viewed as a conceptual tool for thinking about a world in which set-intersection of this sort is cheap. It thus divides many AI problems by factoring out all elements that arise solely due to set intersection.
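A small Python example of the second point, that a serial machine pays per element to intersect large sets; the sets, names, and counts below are invented purely for illustration.

    # Minimal illustration (not from the memo) of the cost argument: on a serial
    # machine, intersecting two sets means probing elements one at a time, so
    # the work grows with set size even when hashing makes each probe cheap.
    def serial_intersection(a, b):
        small, large = (a, b) if len(a) <= len(b) else (b, a)
        probes, result = 0, set()
        for x in small:               # one probe per element of the smaller set
            probes += 1
            if x in large:
                result.add(x)
        return result, probes

    mammals = {f"animal{i}" for i in range(100000)} | {"Clyde"}
    grey_things = {f"thing{i}" for i in range(100000)} | {"Clyde"}
    common, probes = serial_intersection(mammals, grey_things)
    print(common, "after", probes, "probes")      # {'Clyde'} after 100001 probes
    # In the proposed parallel scheme every element holds its own marker bits,
    # so membership in both sets is checked everywhere at once and the
    # intersection emerges in a constant number of broadcast waves.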
AI programming technology : languages and machines by Scott E Fahlman ( Book )
1 edition published in 1982 in English and held by 1 WorldCat member library worldwide
Computing Facilities for AI: A Survey of Present and Near-Future Options by Scott E Fahlman ( )
1 edition published in 1981 in English and held by 1 WorldCat member library worldwide
At the recent AAAI conference at Stanford, it became apparent that many new AI research centers are being established around the country in industrial and governmental settings and in universities that have not paid much attention to AI in the past. At the same time, many of the established AI centers are in the process of converting from older facilities, primarily based on Decsystem-10 and Decsystem-20 machines, to a variety of newer options. At present, unfortunately, there is no simple answer to the question of what machines, operating systems, and languages a new or upgrading AI facility should use, and this situation has led to a great deal of confusion and anxiety on the part of those researchers and administrators who are faced with making this choice. In this article I will survey the major alternatives available at present and those that are clearly visible on the horizon, and I will try to indicate the advantages and disadvantages of each for AI work. This is mostly information that we have gathered at CMU in the course of planning for our own future computing needs, but the opinions expressed are my own.
Common LISP reference manual by Guy L Steele ( Book )
1 edition published in 1983 in Undetermined and held by 1 WorldCat member library worldwide
An efficient common Lisp for the IBM RT PC by David B McDonald ( Book )
1 edition published in 1987 in English and held by 1 WorldCat member library worldwide
Information Processing Research ( )
1 edition published in 1992 in English and held by 1 WorldCat member library worldwide
This report documents a broad program of basic and applied information processing research conducted by Carnegie Mellon's School of Computer Science. The Information Processing Technology Office of the Defense Advanced Research Projects Agency (DARPA) supported this work during the period 15 July 1987 through 14 July 1990 and extended the contract to 31 December 1990. Chapters 1 through 7 present in detail our seven major research areas: Artificial Intelligence, Image Understanding, Reliable Distributed Systems, Programming Environments, Reasoning About Programs, Uniform Workstation Interfaces, and Very Large Scale Integration. Sections in each chapter present the area's general research context, the specific problems we addressed, our contributions and their significance, and a bibliography for each chapter.
Netl by Scott E Fahlman ( Book )
1 edition published in 1979 in Undetermined and held by 0 WorldCat member libraries worldwide
 
Audience Level
Audience level: 0.77 (from 0.00 for Computing ... to 0.92 for An empiric ...)
Alternative Names
Fahlman, Scott 1948-
Fahlman, Scott Elliot 1948-
Languages
English (65)