STANFORD UNIV CA Dept. of COMPUTER SCIENCE
Overview
Works:  450 works in 457 publications in 1 language and 509 library holdings 

Publication Timeline
Most widely held works by STANFORD UNIV CA Dept. of COMPUTER SCIENCE
Adaptive mesh refinement for hyperbolic partial differential equations by M. J. Berger (Book)
2 editions published between 1982 and 1983 in English and held by 4 WorldCat member libraries worldwide
The authors present an adaptive method based on the idea of multiple component grids for the solution of hyperbolic partial differential equations using finite difference techniques. Based upon Richardson-type estimates of the truncation error, refined grids are created or existing ones removed to attain a given accuracy for a minimum amount of work. Their approach is recursive in that fine grids can themselves contain even finer grids. The grids with finer mesh width in space also have a smaller mesh width in time, making this a mesh refinement algorithm in both time and space. This document describes the algorithm, data structures, and grid generation procedure, and concludes with numerical examples in one and two space dimensions. (Author)
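The refinement criterion described above can be sketched in a few lines. This is a minimal illustration, not the report's code: the advection equation, upwind scheme, grid, and tolerance below are all assumptions chosen to show how a Richardson-type comparison of one coarse step against two half steps flags the cells whose local truncation error is large.

```python
# Sketch of Richardson-type error estimation for flagging cells to refine.
# PDE, scheme, and tolerance are illustrative assumptions.

def upwind_step(u, c, dt, dx):
    """One first-order upwind step for u_t + c u_x = 0 (c > 0), periodic grid."""
    n = len(u)
    return [u[i] - c * dt / dx * (u[i] - u[(i - 1) % n]) for i in range(n)]

def flag_cells(u, c, dt, dx, tol):
    """Compare one coarse step against two half steps; a large difference
    signals large local truncation error, so that cell is flagged."""
    coarse = upwind_step(u, c, dt, dx)
    fine = upwind_step(upwind_step(u, c, dt / 2, dx), c, dt / 2, dx)
    return [abs(a - b) > tol for a, b in zip(coarse, fine)]

u0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]   # step profile
flags = flag_cells(u0, c=1.0, dt=0.02, dx=1.0 / 32, tol=1e-3)
```

Only the cells near the discontinuities get flagged; in the smooth (here, constant) regions the coarse and fine results agree, which is what lets the method concentrate work where accuracy is lost.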
Deductive Programming Synthesis (Book)
3 editions published between 1989 and 1991 in English and held by 3 WorldCat member libraries worldwide
Program synthesis is the systematic derivation of a computer program to meet a given specification. The specification is a general description of the purpose of the desired program, while the program is a detailed description of a method for achieving that purpose. The method is based on a deductive approach, in which the problem of deriving a program is regarded as one of proving a mathematical theorem. The theorem expresses the existence of an object meeting the specified conditions. The proof is restricted to be sufficiently constructive to indicate a method for finding the desired output. That method becomes the basis for a program, which is extracted from the proof. The emphasis of the work has been on automating as much as possible of the program derivation process. Theorem-proving methods particularly well-suited to the program synthesis application have been developed. An interactive program-derivation system has been implemented. Applications to database management and planning have been investigated.
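The specification/program relationship described above can be made concrete with a toy example. This is not the report's system; the names `spec` and `find_index` are hypothetical. The specification plays the role of the theorem ("an index with this property exists"), and the extracted program is the constructive argument that finds the witness.

```python
# Toy illustration of the deductive view of synthesis: a specification
# asserts an output exists; a constructive "proof" doubles as a program.

def spec(xs, x, i):
    """Postcondition: index i witnesses that x occurs in xs."""
    return 0 <= i < len(xs) and xs[i] == x

def find_index(xs, x):
    """Program extracted from the constructive argument: scan the list,
    maintaining the invariant that x does not occur among xs[:i]."""
    for i, v in enumerate(xs):
        if v == x:
            return i
    raise ValueError("precondition violated: x not in xs")

i = find_index([3, 1, 4, 1, 5], 4)
assert spec([3, 1, 4, 1, 5], 4, i)
```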
Large Time Step Shock-Capturing Techniques for Scalar Conservation Laws (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
For a scalar conservation law u_t = f(u)_x with f'' of constant sign, the first-order upwind difference scheme is a special case of Godunov's method. The method is equivalent to solving a sequence of Riemann problems at each step and averaging the resulting solution over each cell in order to obtain the numerical solution at the next time level. The difference scheme is stable (and the solutions to the associated sequence of Riemann problems do not interact) provided the Courant number nu is less than 1. By allowing and explicitly handling such interactions, it is possible to obtain a generalized method which is stable for nu much larger than 1. In many cases the resulting solution is considerably more accurate than solutions obtained by other numerical methods. In particular, shocks can be correctly computed with virtually no smearing. The generalized method is rather unorthodox and still has some problems associated with it. Nonetheless, preliminary results are quite encouraging. (Author)
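The nu <= 1 baseline that the report generalizes can be sketched briefly. This is a minimal sketch under stated assumptions: Burgers' flux f(u) = u^2/2 with u >= 0 (so f' >= 0 and all waves move right), written in the conventional form u_t + f(u)_x = 0, where Godunov's method reduces to first-order upwinding. The large-time-step (nu >> 1) generalization itself is not reproduced.

```python
# Godunov's method for Burgers' equation with u >= 0: since all waves
# move right, the numerical flux at interface i-1/2 is simply f(u[i-1]).

def godunov_step(u, dt, dx):
    f = lambda v: 0.5 * v * v
    n = len(u)
    return [u[i] - dt / dx * (f(u[i]) - f(u[(i - 1) % n])) for i in range(n)]

# CFL condition: nu = max|f'(u)| * dt / dx = max(u) * dt / dx must stay <= 1.
u = [1.0 if i < 10 else 0.0 for i in range(40)]
for _ in range(20):
    u = godunov_step(u, dt=0.02, dx=1.0 / 40)
```

Because the scheme is in conservation form on a periodic grid, the cell sum is conserved, and the monotone update keeps the solution within its initial bounds.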
A Note on Lossless Database Decompositions by Moshe Y. Vardi (Book)
1 edition published in 1983 in English and held by 2 WorldCat member libraries worldwide
It is known that under a wide variety of assumptions a database decomposition is lossless if and only if the database scheme has a lossless join. Biskup, Dayal, and Bernstein have shown that when the given dependencies are functional then the database scheme has a lossless join if and only if one of the relation schemes is a key for the universal scheme. In this note the investigators supply an alternative proof of that characterization. The proof uses tools from the theory of embedded join dependencies and the theory of tuple and equality generating dependencies, but is, nevertheless, much simpler than the previously published proof. (Author)
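The Biskup-Dayal-Bernstein test that the note re-proves is easy to state computationally: close each relation scheme under the functional dependencies and check whether some scheme determines every attribute. The toy schema below is an assumption for illustration only.

```python
# Lossless-join test for FD-only schemas: the decomposition has a
# lossless join iff some relation scheme is a key of the universal scheme.

def closure(attrs, fds):
    """Attribute-set closure under a list of (lhs, rhs) functional deps."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def has_lossless_join(universe, schemes, fds):
    return any(closure(s, fds) == universe for s in schemes)

U = {"A", "B", "C"}
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
ok = has_lossless_join(U, [{"A", "B"}, {"B", "C"}], fds)   # {A,B} is a key
```

Here `{A,B}` closes to all of `U` under A -> B -> C, so the join is lossless; a decomposition containing only `{B,C}` would fail the test.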
Fast Matrix Multiplication without APA-Algorithms by Victor Pan (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
The method of trilinear aggregating with implicit canceling for the design of fast matrix multiplication (MM) algorithms is revised and is formally presented with the use of Generating Tables and of linear transformations of the problem of MM. It is shown how to derive the exponent of MM below 2.67 even without the use of approximation algorithms. (Author)
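The trilinear-aggregating construction itself is intricate and is not reproduced here. As context for what "exponent below 3" means, this is Strassen's earlier and different recursion, the classic way to beat the naive n^3 bound (giving exponent log2 7, about 2.807, versus the 2.67 cited above); it is offered only as a contrasting example.

```python
# Strassen's recursion: 7 recursive products instead of 8 for 2x2 blocks.
# This sketch assumes n is a power of two.

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A,0,0), quad(A,0,h), quad(A,h,0), quad(A,h,h)
    B11, B12, B21, B22 = quad(B,0,0), quad(B,0,h), quad(B,h,0), quad(B,h,h)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

C = strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]])
```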
The Caratheodory-Fejer Method for Real Rational Approximation (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
A 'Caratheodory-Fejer method' is presented for near-best real rational approximation on intervals, based on the eigenvalue (or singular value) analysis of a Hankel matrix of Chebyshev coefficients.
A New Approach to Database Logic by Gabriel Kuper (Book)
1 edition published in 1984 in English and held by 2 WorldCat member libraries worldwide
In this paper the authors propose a mathematical framework for unifying and generalizing the three principal data models, i.e., the relational, hierarchical and network models. Until recently most work on database theory has focused on the relational model, mainly due to its elegance and mathematical simplicity compared to the other models. Some of this work has pointed out various disadvantages of the relational model, among them its lack of semantics and the fact that it forces the data to have a flat structure that the real data does not always have. In the model the authors propose here, a database scheme is an arbitrary directed graph. As in the format model, leaves (i.e., nodes with no outgoing edges) represent data, and internal nodes have a function r on the set L of l-values that assigns r-values to these l-values; the authors require that the r-values be of the correct form, depending on the type of the node.
Computation of Matrix Chain Products, Part I, Part II by T. C. Hu (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
This paper considers the computation of matrix chain products of the form M_1 x M_2 x ... x M_(n-1). If the matrices are of different dimensions, the order in which the product is computed affects the number of operations. An optimum order is an order which minimizes the total number of operations. We present some theorems about an optimum order of computing the matrices. Based on these theorems, an O(n log n) algorithm for finding an optimum order is presented in Part II. (Author)
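The optimization being solved can be stated directly as a short dynamic program. Note this is the standard O(n^3) textbook formulation, shown only to make the cost function concrete; the paper's own contribution is the much faster O(n log n) algorithm of Part II, which is not reproduced here.

```python
# Minimum scalar multiplications over all parenthesizations of a chain:
# dims[i], dims[i+1] are the row and column counts of matrix i.

def matrix_chain_cost(dims):
    n = len(dims) - 1            # number of matrices in the chain
    cost = [[0] * n for _ in range(n)]
    for span in range(1, n):     # chain length minus one
        for i in range(n - span):
            j = i + span
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j))
    return cost[0][n - 1]

best = matrix_chain_cost([10, 30, 5, 60])   # ((M1 M2) M3) costs 4500
```

For these dimensions, (M1 M2) M3 costs 10*30*5 + 10*5*60 = 4500 multiplications, while M1 (M2 M3) would cost 27000, illustrating how strongly the order matters.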
Applications of Parallel Scheduling to Perfect Graphs by David Helmbold (Book)
1 edition published in 1986 in English and held by 2 WorldCat member libraries worldwide
The authors combine a parallel algorithm for the two-processor scheduling problem, which runs in polylog time on a polynomial number of processors, with an algorithm to find transitive orientations of graphs where they exist. Both algorithms together solve the maximum clique problem and the maximum coloring problem for cocomparability graphs. These parallel algorithms can also be used to identify permutation graphs and interval graphs, important subclasses of perfect graphs.
Numerical Linear Algebra (Book)
2 editions published between 1980 and 1981 in English and held by 2 WorldCat member libraries worldwide
Research under this contract has been concentrated on major problems in numerical linear algebra: (1) the determination of error bounds for Gaussian elimination; (2) the generalized eigenvalue problem Ax = lambda Bx and its natural extension to the computation of the canonical form of the pencil A - lambda B, where A and B are m x n matrices. In addition, the numerical aspects of various problems in linear system theory and related fields have been studied. (Author)
The Earth Mover's Distance: lower bounds and invariance under translation by Scott Cohen (Book)
2 editions published in 1997 in English and held by 2 WorldCat member libraries worldwide
The Earth Mover's Distance (EMD) between two finite distributions of weight is proportional to the minimum amount of work required to transform one distribution into the other. Current content-based retrieval work in the Stanford Vision Laboratory uses the EMD as a common framework for measuring image similarity with respect to color, texture, and shape content. In this report, we present some fast-to-compute lower bounds on the EMD which may allow a system to avoid exact, more expensive EMD computations during query processing. The effectiveness of the lower bounds is tested in a color-based retrieval system. In addition to the lower bound work, we also show how to compute the EMD under translation. In this problem, the points in one distribution are free to translate, and the goal is to find a translation that minimizes the EMD to the other distribution.
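One of the cheap bounds of the kind the report studies can be sketched in one dimension. The facts used are standard: for equal total weights, the distance between centroids never exceeds the EMD, and for two equal-size sets of unit-weight points on a line the optimal transport matches them in sorted order. The particular point sets below are illustrative assumptions.

```python
# Centroid distance as a cheap lower bound on the exact EMD (1-D case).

def emd_1d(xs, ys):
    """Exact EMD between two equal-size sets of unit-weight 1-D points:
    the optimal matching pairs them in sorted order."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def centroid_lower_bound(xs, ys):
    """|mean(xs) - mean(ys)| <= EMD whenever total weights are equal."""
    return abs(sum(xs) / len(xs) - sum(ys) / len(ys))

xs, ys = [0.0, 1.0, 2.0], [-1.0, 1.5, 3.0]
lb, exact = centroid_lower_bound(xs, ys), emd_1d(xs, ys)
```

The point of such bounds in retrieval is pruning: if the cheap lower bound already exceeds the best distance found so far, the expensive exact EMD computation for that candidate can be skipped.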
Numerical Methods Based on Additive Splittings for Hyperbolic Partial Differential Equations (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
We derive and analyze several methods for systems of hyperbolic equations with wide ranges of signal speeds. These techniques are also useful for problems whose coefficients have large mean values about which they oscillate with small amplitude. Our methods are based on additive splittings of the operators into components that can be approximated independently on the different time scales, some of which are sometimes treated exactly. The efficiency of the splitting methods is seen to depend on the error incurred in splitting the exact solution operator. This is analyzed and a technique is discussed for reducing this error through a simple change of variables. A procedure for generating the appropriate boundary data for the intermediate solutions is also presented. (Author)
Mapping explanation-based generalization onto Soar by Paul S. Rosenbloom (Book)
1 edition published in 1986 in English and held by 2 WorldCat member libraries worldwide
Explanation-based generalization (EBG) is a powerful approach to concept formation in which a justifiable concept definition is acquired from a single training example and an underlying theory of how the example is an instance of the concept. Soar is an attempt to build a general cognitive architecture combining general learning, problem solving, and memory capabilities. It includes an independently developed learning mechanism, called chunking, that is similar to but not the same as explanation-based generalization. In this article we clarify the relationship between the explanation-based generalization framework and the Soar/chunking combination by showing how the EBG framework maps onto Soar, how several EBG concept-formation tasks are implemented in Soar, and how the Soar approach suggests answers to some of the outstanding issues in explanation-based generalization. (KR)
Optimal Design of Distributed Databases by Stefano Ceri (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
The distributed information systems area has seen rapid growth in research interest as well as in practical applications in the past three years. Distributed systems are becoming a reality; however, truly distributed databases are still rare. For a large organization with a distributed computer network, the problem of distributing a database includes the determination of: (1) how the database can be split into components to be allocated to distinct sites, and (2) how much of the data should be replicated and how the replicated fragments should be allocated. In this paper we design models for solving both of the above problems.
Hypothesis formation and qualitative reasoning in molecular biology by Peter D. Karp (Book)
1 edition published in 1989 in English and held by 2 WorldCat member libraries worldwide
This dissertation investigates scientific reasoning from a computational perspective. The investigation focuses on a program of research in molecular biology that culminated in the discovery of a new mechanism of gene regulation in bacteria, called attenuation. The dissertation concentrates on a particular type of reasoning called hypothesis formation. Hypothesis-formation problems occur when the outcome of an experiment predicted by a scientific theory does not match that observed by a scientist. I present methods for solving hypothesis-formation problems that have been implemented in a computer program called HYPGENE. This work is also concerned with how to represent theories of molecular biology in a computer, and with how to use such theories to predict experimental outcomes; I present a framework for performing these tasks that is implemented in a program called GENSIM. I tested both HYPGENE and GENSIM on sample problems that biologists solved during their research on attenuation. The dissertation includes a historical study of the attenuation research. This study is novel because it examines a large, complex, and modern program of scientific research. The document treats hypothesis formation as a design problem, and uses design methods to solve hypothesis-formation problems. (kr)
An Algorithm for Reducing Acyclic Hypergraphs (Book)
1 edition published in 1982 in English and held by 2 WorldCat member libraries worldwide
The report gives a description of an algorithm to compute efficiently the Graham reduction of an acyclic hypergraph with sacred nodes. To apply the algorithm we must already have a tree representation of the hypergraph, and therefore it is useful when we have a fixed hypergraph and wish to compute Graham reductions many times, as we do in the System/U query interpretation algorithm. (Author)
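The Graham reduction itself is simple to state, even though the report's contribution is doing it efficiently from a fixed tree representation (which is not reproduced here). The sketch below applies the two standard rules until neither fires; the schema and the `sacred` parameter usage are illustrative assumptions.

```python
# Graham (GYO-style) reduction with sacred nodes. Rule 1: delete a
# non-sacred attribute occurring in exactly one hyperedge. Rule 2:
# delete a hyperedge contained in another hyperedge.

def graham_reduce(edges, sacred=frozenset()):
    edges = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        for e in edges:                       # Rule 1
            for a in list(e):
                if a not in sacred and sum(a in f for f in edges) == 1:
                    e.discard(a)
                    changed = True
        for i, e in enumerate(edges):         # Rule 2
            if any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return edges

# An acyclic hypergraph with no sacred nodes reduces completely.
result = graham_reduce([{"A", "B"}, {"B", "C"}, {"C", "D"}])
```

With no sacred nodes, full reduction is the classic acyclicity test: the chain above reduces away, while a cycle such as {A,B}, {B,C}, {A,C} is left untouched because every attribute is shared and no edge contains another.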
QLISP for Parallel Processors (Book)
2 editions published between 1989 and 1990 in English and held by 1 WorldCat member library worldwide
The goal of the Qlisp project at Stanford is to gain experience with the sharedmemory, queuebased approach to parallel Lisp, by implementing the Qlisp language on an actual multiprocessor, and by developing a symbolic algebra system as a testbed application. The experiments performed on the simulator included: 1. Algorithms for sorting and basic data structure manipulation for polynomials. 2. Partitioning and scheduling methods for parallel programming. 3. Parallelizing the production rule system OPS5. Computer programs. (jes)
On the synthesis of finite-state acceptors by Alan W. Biermann
1 edition published in 1970 in English and held by 0 WorldCat member libraries worldwide
Two algorithms are presented for solving the following problem: given a finite set S of strings of symbols, find a finite-state machine which will accept the strings of S and possibly some additional strings which 'resemble' those of S. The approach used is to directly construct the states and transitions of the acceptor machine from the string information. The algorithms include a parameter which enables one to increase the exactness of the resulting machine's behavior as much as desired by increasing the number of states in the machine. The properties of the algorithms are presented and illustrated with a number of examples. The paper gives a method for identifying a finite-state language from a randomly chosen finite subset of the language if the subset is large enough and if a bound is known on the number of states required to recognize the language. Finally, some of the uses of the algorithms and their relationship to the problem of grammatical inference are discussed.
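A construction in the spirit described, where states are built directly from the sample and a parameter k trades machine size for exactness, can be sketched as follows. This is an illustrative k-tails-style sketch, not the paper's exact algorithms: each prefix of a sample string is identified with the set of length-at-most-k suffixes observed after it, and the merged transitions are kept nondeterministic so that every sample string is still accepted.

```python
# k-tails-style synthesis sketch: states are classes of prefixes that
# share the same set of observed k-tails; raising k splits more states.

def k_tails(sample, k):
    tails = {}
    for s in sample:
        for i in range(len(s) + 1):
            tails.setdefault(s[:i], set()).add(s[i:i + k])
    return tails

def synthesize(sample, k):
    tails = k_tails(sample, k)
    state = {p: frozenset(t) for p, t in tails.items()}
    delta = {}
    accepting = set()
    for s in sample:
        for i in range(len(s)):
            delta.setdefault((state[s[:i]], s[i]), set()).add(state[s[:i + 1]])
        accepting.add(state[s])
    return state[""], delta, accepting

def accepts(machine, w):
    start, delta, accepting = machine
    current = {start}
    for ch in w:
        current = set().union(*(delta.get((q, ch), set()) for q in current))
    return bool(current & accepting)

m = synthesize(["ab", "aab", "aaab"], k=1)
```

With k=1 the machine accepts every sample string and generalizes to strings that resemble them (here, longer runs of a's before the final b), while still rejecting strings outside that pattern.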
Polynomial dual network simplex algorithms by James B. Orlin
1 edition published in 1991 in English and held by 0 WorldCat member libraries worldwide
We show how to use polynomial and strongly polynomial capacity scaling algorithms for the transshipment problem to design a polynomial dual network simplex pivot rule. Our best pivoting strategy leads to an O(m^2 log n) bound on the number of pivots, where n and m denote the number of nodes and arcs in the input network. If the demands are integral and at most B, we also give an O(m(m + n log n) min(log nB, m log n))-time implementation of a strategy that requires somewhat more pivots.
RLL-1: a representation language language [microform] by R. Greiner
1 edition published in 1980 in English and held by 0 WorldCat member libraries worldwide
The field of AI is strewn with knowledge representation languages. The language designer typically designs a language with one particular application domain in mind; as subsequent types of applications are tried, what had originally been useful features are found to be undesirable limitations, and the language is overhauled or scrapped. One remedy to this bleak cycle might be to construct a representation language whose domain is the field of representation languages itself. Toward this end, we designed and implemented RLL-1, a frame-based Representation Language Language. The components of representation languages in general (such as slots and inheritance mechanisms), and of RLL-1 itself in particular, are encoded declaratively as frames. By modifying these frames, the user can change the semantics of RLL-1's components, and significantly alter the overall character of the RLL-1 environment. Often a large Artificial Intelligence project begins by designing and implementing a high-level language in which to easily and precisely specify the nuances of the task. The language designer typically builds his Representation Language around one particular highlighted application (such as molecular biology for Units (Stefik), or natural language understanding for KRL (Bobrow & Winograd) and OWL (Szolovits, et al.)). For this reason, his language is often inadequate for any subsequent applications, except those which can be cast in a form similar in structure to the initial task. What had originally been useful features are subsequently found to be undesirable limitations. Consider Units' explicit copying of inherited facts or KRL's sophisticated but slow matcher.
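As a loose illustration of the abstract's central idea (slots and inheritance mechanisms encoded declaratively as frames, so that editing a frame changes the system's own semantics), here is a minimal Python sketch. All names here (`FRAMES`, `get_value`, the `Inherits` frame) are hypothetical and are not RLL-1's actual vocabulary.

```python
# Minimal sketch of a frame system in which the inheritance
# mechanism is itself stored as a frame, so changing that frame
# changes the semantics of slot lookup (the RLL-1 idea, loosely).
FRAMES = {
    "Elephant": {"isa": "Mammal", "color": "gray"},
    "Mammal":   {"isa": "Animal", "blood": "warm"},
    "Animal":   {},
    # The lookup mechanism is encoded declaratively as a frame:
    "Inherits": {"follow_slot": "isa"},
}

def get_value(frame, slot):
    """Look up `slot`, climbing the chain named by the Inherits frame."""
    link = FRAMES["Inherits"]["follow_slot"]
    while frame is not None:
        data = FRAMES.get(frame, {})
        if slot in data:
            return data[slot]
        frame = data.get(link)  # follow the declared inheritance link
    return None
```

Here `get_value("Elephant", "blood")` climbs the `isa` chain to `Mammal`; redefining the `Inherits` frame alters how every lookup behaves without touching `get_value` itself.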
Associated Subjects
Algorithms Artificial intelligence Artificial intelligence--Biological applications Automatic hypothesis formation Bilinear forms Color Computer algorithms Computer networks Database management Difference equations Differential equations, Hyperbolic Electronic data processing--Distributed processing File organization (Computer science) Forms, Trilinear Image processing Information retrieval Linear programming Logic programming Machine learning Mathematical optimization Matrices Matrices--Data processing Molecular biology--Data processing Molecular genetics--Data processing Multiprocessors Network analysis (Planning) Numerical analysis Parallel processing (Electronic computers) Perfect graphs Reasoning Tensor products
Languages