WorldCat Identities

STANFORD UNIV CA Dept. of COMPUTER SCIENCE

Overview
Works: 471 works in 482 publications in 1 language and 534 library holdings
Classifications: QA76, 658.4032
Publication Timeline
Most widely held works by STANFORD UNIV CA Dept. of COMPUTER SCIENCE
Adaptive mesh refinement for hyperbolic partial differential equations by Marsha J Berger( Book )

2 editions published between 1982 and 1983 in English and held by 4 WorldCat member libraries worldwide

The authors present an adaptive method based on the idea of multiple, component grids for the solution of hyperbolic partial differential equations using finite difference techniques. Based upon Richardson-type estimates of the truncation error, refined grids are created or existing ones removed to attain a given accuracy for a minimum amount of work. Their approach is recursive in that fine grids can themselves contain even finer grids. The grids with finer mesh width in space also have a smaller mesh width in time, making this a mesh refinement algorithm in both time and space. This document includes the algorithm, data structures, and grid generation procedure, and concludes with numerical examples in one and two space dimensions. (Author)
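The Richardson-type flagging step described above can be sketched as follows. This is a minimal illustrative model in 1-D (function and variable names are ours, not the authors'): the truncation error at a coarse point is estimated from the difference between solutions computed on grids of spacing h and 2h, and cells whose estimate exceeds a tolerance are flagged for coverage by a finer component grid.

```python
# Hypothetical sketch of Richardson-type error estimation for flagging
# coarse cells to refine; illustrative only, not the authors' implementation.

def richardson_flags(u_h, u_2h, order, tol):
    """Flag coarse points whose estimated truncation error exceeds tol.

    u_h   -- solution values on the fine grid (step h), length 2*len(u_2h)-1
    u_2h  -- solution values on the coarse grid (step 2h)
    order -- formal order of accuracy p of the difference scheme
    """
    flags = []
    for i, coarse in enumerate(u_2h):
        fine = u_h[2 * i]                             # fine point coinciding with coarse point
        est = abs(fine - coarse) / (2 ** order - 1)   # Richardson error estimate
        flags.append(est > tol)
    return flags

# Points flagged True would be covered by a finer component grid.
print(richardson_flags([0.0, 0.1, 0.4, 0.9, 1.6], [0.0, 0.5, 1.8], 2, 0.05))
# → [False, False, True]
```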
Deductive Programming Synthesis( Book )

3 editions published between 1989 and 1991 in English and held by 3 WorldCat member libraries worldwide

Program synthesis is the systematic derivation of a computer program to meet a given specification. The specification is a general description of the purpose of the desired program, while the program is a detailed description of a method for achieving that purpose. The method is based on a deductive approach, in which the problem of deriving a program is regarded as one of proving a mathematical theorem. The theorem expresses the existence of an object meeting the specified conditions. The proof is restricted to be sufficiently constructive to indicate a method for finding the desired output. That method becomes the basis for a program, which is extracted from the proof. The emphasis of the work has been on automating as much as possible of the program derivation process. Theorem-proving methods particularly well-suited to the program synthesis application have been developed. An interactive program-derivation system has been implemented. Applications to database management and planning have been investigated
The Caratheodory-Fejer Method for Real Rational Approximation( Book )

1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide

A 'Caratheodory-Fejer method' is presented for near-best real rational approximation on intervals, based on the eigenvalue (or singular value) analysis of a Hankel matrix of Chebyshev coefficients
Applications of Parallel Scheduling to Perfect Graphs by David Helmbold( Book )

1 edition published in 1986 in English and held by 2 WorldCat member libraries worldwide

The authors combine a parallel algorithm for the two processor scheduling problem, which runs in polylog time on a polynomial number of processors, with an algorithm to find transitive orientations of graphs where they exist. Both algorithms together solve the maximum clique problem and the maximum coloring problem for co-comparability graphs. These parallel algorithms can also be used to identify permutation graphs and interval graphs, important subclasses of perfect graphs
The Earth Mover's Distance : lower bounds and invariance under translation by Scott Cohen( Book )

2 editions published in 1997 in English and held by 2 WorldCat member libraries worldwide

The Earth Mover's Distance (EMD) between two finite distributions of weight is proportional to the minimum amount of work required to transform one distribution into the other. Current content-based retrieval work in the Stanford Vision Laboratory uses the EMD as a common framework for measuring image similarity with respect to color, texture, and shape content. In this report, we present some fast-to-compute lower bounds on the EMD which may allow a system to avoid exact, more expensive EMD computations during query processing. The effectiveness of the lower bounds is tested in a color-based retrieval system. In addition to the lower bound work, we also show how to compute the EMD under translation. In this problem, the points in one distribution are free to translate, and the goal is to find a translation that minimizes the EMD to the other distribution
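The filtering idea can be illustrated with a small sketch (our own, not Cohen's code): in 1-D with equal-size point sets, the EMD can be computed exactly by sorting, and the distance between the two centroids never exceeds the EMD, so it serves as a cheap lower bound that can screen out candidates before an exact computation.

```python
# Illustrative sketch of the lower-bound filtering idea; names are ours.

def emd_1d(xs, ys):
    """Exact EMD between two equal-size point sets on the line:
    sort both and average the pairwise distances (optimal in 1-D)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def centroid_bound(xs, ys):
    """|mean(xs) - mean(ys)| never exceeds the EMD for equal total weights,
    so it is a cheap lower bound usable as a filter during query processing."""
    return abs(sum(xs) / len(xs) - sum(ys) / len(ys))

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 4.0]
print(centroid_bound(xs, ys), emd_1d(xs, ys))
```

A retrieval system would compare `centroid_bound` against the best EMD found so far and skip the exact computation whenever the bound already exceeds it.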
Computation of Matrix Chain Products. Part I, Part II( Book )

1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide

This paper considers the computation of matrix chain products of the form M_1 x M_2 x ... x M_(n-1). If the matrices are of different dimensions, the order in which the product is computed affects the number of operations. An optimum order is an order which minimizes the total number of operations. We present some theorems about an optimum order of computing the matrices. Based on these theorems, an O(n log n) algorithm for finding an optimum order is presented in part II. (Author)
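For concreteness, the optimization problem can be sketched with the classic dynamic program (the paper's own algorithm achieves O(n log n); the textbook O(k^3) recurrence below is only an illustration, and the names are ours):

```python
def matrix_chain_order(dims):
    """Minimum scalar multiplications to compute M_1 x ... x M_k,
    where matrix M_i has dimensions dims[i-1] x dims[i].
    Classic O(k^3) dynamic program, shown for illustration only;
    the paper's part II gives an O(n log n) method."""
    k = len(dims) - 1                      # number of matrices in the chain
    cost = [[0] * (k + 1) for _ in range(k + 1)]
    for span in range(2, k + 1):           # length of the sub-chain
        for i in range(1, k - span + 2):
            j = i + span - 1
            cost[i][j] = min(
                cost[i][s] + cost[s + 1][j] + dims[i - 1] * dims[s] * dims[j]
                for s in range(i, j)       # split point of the product
            )
    return cost[1][k]

# 10x30 * 30x5 * 5x60: best order (M_1 M_2) M_3 costs 1500 + 3000 = 4500
print(matrix_chain_order([10, 30, 5, 60]))
# → 4500
```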
Mapping explanation-based generalization onto Soar by Paul S Rosenbloom( Book )

1 edition published in 1986 in English and held by 2 WorldCat member libraries worldwide

Explanation-based generalization (EBG) is a powerful approach to concept formation in which a justifiable concept definition is acquired from a single training example and an underlying theory of how the example is an instance of the concept. Soar is an attempt to build a general cognitive architecture combining general learning, problem solving, and memory capabilities. It includes an independently developed learning mechanism, called chunking, that is similar to but not the same as explanation-based generalization. In this article we clarify the relationship between the explanation-based generalization framework and the Soar/chunking combination by showing how the EBG framework maps onto Soar, how several EBG concept-formation tasks are implemented in Soar, and how the Soar approach suggests answers to some of the outstanding issues in explanation-based generalization. (KR)
Hypothesis formation and qualitative reasoning in molecular biology by Stanford University( Book )

1 edition published in 1989 in English and held by 2 WorldCat member libraries worldwide

This dissertation investigates scientific reasoning from a computational perspective. The investigation focuses on a program of research in molecular biology that culminated in the discovery of a new mechanism of gene regulation in bacteria, called attenuation. The dissertation concentrates on a particular type of reasoning called hypothesis formation. Hypothesis-formation problems occur when the outcome of an experiment predicted by a scientific theory does not match that observed by a scientist. I present methods for solving hypothesis formation problems that have been implemented in a computer program called HYPGENE. This work is also concerned with how to represent theories of molecular biology in a computer, and with how to use such theories to predict experimental outcomes; I present a framework for performing these tasks that is implemented in a program called GENSIM. I tested both HYPGENE and GENSIM on sample problems that biologists solved during their research on attenuation. The dissertation includes a historical study of the attenuation research. This study is novel because it examines a large, complex, and modern program of scientific research. The document treats hypothesis formation as a design problem, and uses design methods to solve hypothesis-formation problems. (kr)
TABLOG : a new approach to logic programming by Yonathan Malachi( Book )

1 edition published in 1985 in English and held by 2 WorldCat member libraries worldwide

TABLOG is a programming language based on first-order predicate logic with equality that combines relational and functional programming. In addition to featuring both the advantages of functional notation and the power of unification as a binding mechanism, TABLOG also supports a more general subset of standard first-order logic than PROLOG and most other logic-programming languages. The Manna-Waldinger deductive-tableau proof system is employed as an interpreter for TABLOG in the same way that PROLOG uses a resolution proof system. Unification is used by TABLOG to match a query with a line in the program and to bind arguments. The basic rules of deduction used for computing are a nonclausal resolution rule that generalizes classical resolution to arbitrary first-order sentences and an equality rule that is a generalization of narrowing and paramodulation. In this article we describe the basic features of TABLOG and its (implemented) sequential interpreter, and we discuss some of its properties. We give examples to demonstrate when TABLOG is better than a functional language like LISP and when it is better than a relational language like PROLOG. (kr)
Numerical Linear Algebra( Book )

2 editions published between 1980 and 1981 in English and held by 2 WorldCat member libraries worldwide

Research in this program has concentrated on the generalized eigenvalue problem and its natural extension to the computation of the associated canonical form. Furthermore, there has been an extensive effort to study the matrix equations arising in control engineering, such as the controllability/observability decomposition and the solution of the Riccati equations. In particular, error bounds for the computed eigenvalues and eigenvectors of the generalized eigenvalue problem have been devised. In addition, a numerically stable algorithm has been developed for computing the orthonormal bases for the deflating subspaces of a regular pencil. A method has been developed to obtain any desired ordering of eigenvalues in the quasi-triangular forms. (Author)
Stability Analysis of Finite Difference Schemes for the Advection-Diffusion Equation( Book )

1 edition published in 1983 in English and held by 2 WorldCat member libraries worldwide

We present a collection of stability results for finite difference approximations to the advection-diffusion equation u_t = a u_x + b u_xx. The results are for centered difference schemes in space and include explicit schemes in time up to fourth order and schemes that use different space and time discretizations for the advective and diffusive terms. The results are derived from a uniform framework based on the Schur-Cohn theory of simple von Neumann polynomials and are necessary and sufficient for the stability of the Cauchy problem. Some of the results are believed to be new. (Author)
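The kind of condition being analyzed can be sketched numerically (a sampling check of the von Neumann amplification factor, not the paper's Schur-Cohn analysis; the scheme and names below are our illustrative choices) for the simplest such scheme, forward-time centered-space applied to u_t = a u_x + b u_xx:

```python
import math

def ftcs_stable(a, b, dt, dx, samples=721):
    """Numerically check the von Neumann condition |g(theta)| <= 1 for the
    forward-time centered-space scheme applied to u_t = a u_x + b u_xx.
    Illustrative sampling check only."""
    lam = a * dt / dx          # advective mesh ratio a*dt/dx
    mu = b * dt / dx ** 2      # diffusive mesh ratio b*dt/dx^2
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        # amplification factor of the FTCS scheme at wavenumber theta
        g = 1 + 1j * lam * math.sin(theta) - 4 * mu * math.sin(theta / 2) ** 2
        if abs(g) > 1 + 1e-12:
            return False
    return True

# mu = 0.25 satisfies the diffusive limit mu <= 1/2; tripling dt breaks it.
print(ftcs_stable(a=1.0, b=1.0, dt=0.0025, dx=0.1))
# → True
```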
Optimal Design of Distributed Databases by Stefano Ceri( Book )

1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide

The distributed information systems area has seen rapid growth in terms of research interest as well as practical applications in the past three years. Distributed systems are becoming a reality; however, truly distributed databases are still rare. For a large organization with a distributed computer network, the problem of distributing a database includes determination of: (1) How can the database be split into components to be allocated to distinct sites? and (2) How much of the data should be replicated, and how should the replicated fragments be allocated? In this paper we design models for solving both of the above problems
Numerical Methods Based on Additive Splittings for Hyperbolic Partial Differential Equations( Book )

1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide

We derive and analyze several methods for systems of hyperbolic equations with wide ranges of signal speeds. These techniques are also useful for problems whose coefficients have large mean values about which they oscillate with small amplitude. Our methods are based on additive splittings of the operators into components that can be approximated independently on the different time scales, some of which are sometimes treated exactly. The efficiency of the splitting methods is seen to depend on the error incurred in splitting the exact solution operator. This is analyzed and a technique is discussed for reducing this error through a simple change of variables. A procedure for generating the appropriate boundary data for the intermediate solutions is also presented. (Author)
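The basic splitting idea can be illustrated on a scalar model problem (our own minimal sketch, not one of the paper's schemes): for u' = (A + B)u, a Lie splitting step advances the two components in separate substeps, and the splitting error vanishes exactly when the split operators commute, as they do for scalars.

```python
import math

# Minimal sketch of an additive (Lie) splitting step for u' = (A + B)u;
# illustrative only. For non-commuting operators A and B, the product of
# substeps differs from the exact solution operator -- that difference is
# the splitting error discussed in the abstract.

def lie_split_step(u, a, b, dt):
    """One step: solve u' = a*u exactly, then u' = b*u exactly."""
    u = u * math.exp(a * dt)   # fast component (could use smaller substeps)
    u = u * math.exp(b * dt)   # slow component, treated exactly here
    return u

# Scalars commute, so one split step reproduces exp((a+b)*dt) exactly.
print(lie_split_step(1.0, 2.0, 0.5, 0.1))
```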
A Note on Lossless database Decompositions by Moshe Y Vardi( Book )

1 edition published in 1983 in English and held by 2 WorldCat member libraries worldwide

It is known that under a wide variety of assumptions a database decomposition is lossless if and only if the database scheme has a lossless join. Biskup, Dayal, and Bernstein have shown that when the given dependencies are functional then the database scheme has a lossless join if and only if one of the relation schemes is a key for the universal scheme. In this note the investigators supply an alternative proof of that characterization. The proof uses tools from the theory of embedded join dependencies and the theory of tuple and equality generating dependencies, but is, nevertheless, much simpler than the previously published proof. (Author)
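The characterization being proved is easy to state operationally (a sketch of the test itself, not of the note's proof; names and encoding are ours): a relation scheme is a key for the universal scheme exactly when its attribute closure under the functional dependencies covers all attributes.

```python
def closure(attrs, fds):
    """Attribute-set closure under functional dependencies.
    fds: list of (lhs, rhs) pairs of attribute sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def lossless_by_key(schemes, fds, universe):
    """Biskup-Dayal-Bernstein test: with functional dependencies, the
    decomposition has a lossless join iff some relation scheme is a key
    of the universal scheme, i.e. its closure covers all attributes."""
    return any(closure(s, fds) == set(universe) for s in schemes)

fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(lossless_by_key([set("AB"), set("BC")], fds, "ABC"))
# → True, since A -> B -> C makes AB a key of ABC
```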
Beta operations : efficient implementation of a primitive parallel operation by E. R Cohn( Book )

1 edition published in 1986 in English and held by 2 WorldCat member libraries worldwide

The ever decreasing cost of computer processors has created great interest in multi-processor computers. However, along with the increased power that this parallelism brings comes increased complexity in programming. One approach to lessening this complexity is to provide the programmer with general purpose parallel primitives that shield him from the structure of the underlying machine. In The Connection Machine, Hillis suggests the beta operation as a parallel primitive for his hypercube-based machine. This paper explores efficient ways to perform this operation on several different well known architectures including the hypercube. It presents some lower bounds associated with the problem
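Functionally, the beta operation routes each processor's value to a destination address and combines colliding values with a binary operator, in effect a parallel reduce-by-key. A serial model of that semantics (our own illustrative sketch; the names are not Hillis's) is:

```python
# Hedged serial model of the beta operation's semantics; on a real
# parallel machine this would be performed by routing values through
# the network and combining collisions in hardware or in log-depth steps.

def beta(values, dests, op):
    """Combine all values sent to the same destination address with op."""
    out = {}
    for v, d in zip(values, dests):
        out[d] = op(out[d], v) if d in out else v
    return out

# Three processors send to address 0, two to address 1; collisions are summed.
print(beta([1, 2, 3, 4, 5], [0, 1, 0, 1, 0], lambda x, y: x + y))
# → {0: 9, 1: 6}
```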
QLISP for Parallel Processors( Book )

2 editions published between 1989 and 1990 in English and held by 1 WorldCat member library worldwide

The goal of the Qlisp project at Stanford is to gain experience with the shared-memory, queue-based approach to parallel Lisp, by implementing the Qlisp language on an actual multiprocessor, and by developing a symbolic algebra system as a testbed application. The experiments performed on the simulator included: (1) algorithms for sorting and basic data structure manipulation for polynomials; (2) partitioning and scheduling methods for parallel programming; (3) parallelizing the production rule system OPS5. (jes)
On the synthesis of finite-state acceptors by Alan W Biermann( )

1 edition published in 1970 in English and held by 0 WorldCat member libraries worldwide

Two algorithms are presented for solving the following problem: given a finite set S of strings of symbols, find a finite-state machine which will accept the strings of S and possibly some additional strings which 'resemble' those of S. The approach used is to directly construct the states and transitions of the acceptor machine from the string information. The algorithms include a parameter which enables one to increase the exactness of the resulting machine's behavior as much as desired by increasing the number of states in the machine. The properties of the algorithms are presented and illustrated with a number of examples. The paper gives a method for identifying a finite-state language from a randomly chosen finite subset of the language if the subset is large enough and if a bound is known on the number of states required to recognize the language. Finally, some of the uses of the algorithms and their relationship to the problem of grammatical inference are discussed
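The direct-construction starting point can be sketched as the exact prefix-tree acceptor for the sample (our own illustration, not Biermann's code): one state per distinct prefix, accepting exactly S. The paper's algorithms then merge states to obtain a smaller machine that also accepts strings 'resembling' those of S; fewer states means a more general machine, which is the trade-off the exactness parameter controls.

```python
# Illustrative sketch: exact prefix-tree acceptor built directly from
# the sample strings. State 0 is the initial state.

def prefix_tree_acceptor(samples):
    """Construct states and transitions directly from the strings:
    one state per distinct prefix of a sample string."""
    trans, accepting = {}, set()
    next_state = 1
    for s in samples:
        state = 0
        for ch in s:
            if (state, ch) not in trans:
                trans[(state, ch)] = next_state
                next_state += 1
            state = trans[(state, ch)]
        accepting.add(state)
    return trans, accepting

def accepts(trans, accepting, s):
    """Run the acceptor on string s."""
    state = 0
    for ch in s:
        if (state, ch) not in trans:
            return False
        state = trans[(state, ch)]
    return state in accepting

trans, acc = prefix_tree_acceptor(["ab", "abb", "b"])
print(accepts(trans, acc, "abb"), accepts(trans, acc, "a"))
# → True False
```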
Recent developments in the complexity of combinatorial algorithms by Robert E Tarjan( )

1 edition published in 1980 in English and held by 0 WorldCat member libraries worldwide

Several major advances in the area of combinatorial algorithms include improved algorithms for matrix multiplication and maximum network flow, a polynomial-time algorithm for linear programming, and steps toward a polynomial-time algorithm for graph isomorphism. This paper surveys these results and suggests directions for future research. Included is a discussion of recent work by the author and his students on dynamic dictionaries, network flow problems, and related questions
RLL-1 : a representation language language [microform] by R Greiner( )

1 edition published in 1980 in English and held by 0 WorldCat member libraries worldwide

The field of AI is strewn with knowledge representation languages. The language designer typically designs that language with one particular application domain in mind; as subsequent types of applications are tried, what had originally been useful features are found to be undesirable limitations, and the language is overhauled or scrapped. One remedy to this bleak cycle might be to construct a representation language whose domain is the field of representational languages itself. Toward this end, we designed and implemented RLL-1, a frame-based Representation Language Language. The components of representation languages in general (such as slots and inheritance mechanisms) and of RLL-1 itself, in particular, are encoded declaratively as frames. By modifying these frames, the user can change the semantics of RLL-1's components, and significantly alter the overall character of the RLL-1 environment. Often a large Artificial Intelligence project begins by designing and implementing a high-level language in which to easily and precisely specify the nuances of the task. The language designer typically builds his Representation Language around the one particular highlighted application (such as molecular biology for Units (Stefik), or natural language understanding for KRL (Bobrow & Winograd) and OWL (Szolovits, et al.)). For this reason, his language is often inadequate for any subsequent applications, except those which can be cast in a form similar in structure to the initial task. What had originally been useful features are subsequently found to be undesirable limitations. Consider Units' explicit copying of inherited facts or KRL's sophisticated but slow matcher
Polynomial dual network simplex algorithms by James B Orlin( )

1 edition published in 1991 in English and held by 0 WorldCat member libraries worldwide

We show how to use polynomial and strongly polynomial capacity scaling algorithms for the transshipment problem to design a polynomial dual network simplex pivot rule. Our best pivoting strategy leads to an O(m^2 log n) bound on the number of pivots, where n and m denote the number of nodes and arcs in the input network. If the demands are integral and at most B, we also give an O(m(m + n log n) min(log nB, m log n))-time implementation of a strategy that requires somewhat more pivots
Audience level: 0.86 (from 0.73 for Polynomial ... to 1.00 for Deductive ...)

Languages
English (26)