YALE UNIV NEW HAVEN CT Dept. of COMPUTER SCIENCE
Overview
Works: 291 works in 313 publications in 1 language and 346 library holdings

Publication Timeline
Most widely held works by YALE UNIV NEW HAVEN CT Dept. of COMPUTER SCIENCE
Local-Mesh, Local-Order, Adaptive Finite Element Methods with a Posteriori Error Estimators for Elliptic Partial Differential Equations by Alan Weiser (Book)
1 edition published in 1981 in English and held by 4 WorldCat member libraries worldwide
The traditional error estimates for the finite element solution of elliptic partial differential equations are a priori, and little information is available from them about the actual error in a specific approximation to the solution. In recent years, locally computable a posteriori error estimators have been developed, which apply to the actual errors committed by the finite element method for a given discretization. These estimators lead to algorithms in which the computer itself adaptively decides how and when to generate discretizations. So far, for two-dimensional problems, the computer-generated discretizations have tended to use either local mesh refinement or local order refinement, but not both. In this thesis, we present a new class of local-mesh, local-order, square finite elements which can easily accommodate computer-chosen discretizations. We present several new locally computable a posteriori error estimators which, under reasonable assumptions, asymptotically yield upper bounds on the actual errors committed, and algorithms in which the computer uses the error estimators to adaptively produce sequences of local-mesh, local-order discretizations.
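The estimator-driven refinement loop described in the abstract can be sketched in one dimension. This is a minimal illustration, not the thesis's method: it uses piecewise-linear elements for -u'' = f on (0, 1) with homogeneous Dirichlet conditions, and the simplest residual-type indicator eta_e ≈ h_e * ||f||_{L2(e)} (the second derivative of a linear element vanishes, so only f enters the element residual). The function names and the refinement fraction are our own.

```python
import numpy as np

def solve_fem(nodes, f):
    # Linear finite elements for -u'' = f on (0, 1), u(0) = u(1) = 0.
    n = len(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        mid = 0.5 * (nodes[e] + nodes[e + 1])
        b[e:e + 2] += 0.5 * h * f(mid)          # midpoint-rule load vector
    A[0, :] = A[-1, :] = 0.0                    # impose Dirichlet BCs
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

def adapt(nodes, f, nsteps=5, frac=0.3):
    # Adaptive loop: estimate per-element error with the residual-type
    # indicator eta_e = h_e**1.5 * |f(mid_e)|  (~ h_e * ||f||_L2(e)),
    # then bisect the fraction `frac` of elements with the largest eta.
    for _ in range(nsteps):
        h = np.diff(nodes)
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        eta = h ** 1.5 * np.abs(f(mids))
        worst = np.argsort(eta)[-max(1, int(frac * len(eta))):]
        nodes = np.sort(np.concatenate([nodes, mids[worst]]))
    return nodes
```

Running `adapt` on a right-hand side with a sharp peak concentrates the mesh around the peak while leaving smooth regions coarse, which is the behavior the a posteriori estimators are designed to produce.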
Iterative Solution of Indefinite Symmetric Systems by Methods Using Orthogonal Polynomials over Two Disjoint Intervals (Book)
1 edition published in 1981 in English and held by 3 WorldCat member libraries worldwide
It is shown in this paper that certain orthogonal polynomials over two disjoint intervals can be particularly useful for solving large symmetric indefinite linear systems or for finding a few interior eigenvalues of a large symmetric matrix. The proposed approach has several advantages over techniques based upon the polynomials having the least uniform norm on two intervals. While a theoretical comparison shows that the norm of the minimal polynomial of degree n in the least-squares sense differs from that of the minimax polynomial of the same degree by a factor not exceeding [2(n+1)]^0.5, the least-squares polynomials are by far easier to compute and to use thanks to their three-term recurrence relation. A number of suggestions are made for the problem of estimating the optimal parameters, and several numerical experiments are reported. (Author)
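As the abstract notes, the practical appeal of the least-squares polynomials is their three-term recurrence: p_n(A)v can be applied with one matrix-vector product per degree and only two stored vectors. A generic sketch follows; the recurrence coefficients alpha, beta, gamma are inputs here, since the actual two-interval coefficients come from the paper and are not reproduced.

```python
import numpy as np

def apply_poly_three_term(A, v, alpha, beta, gamma):
    # Evaluate p_n(A) v for a polynomial family satisfying
    #   p_{k+1}(x) = (alpha[k] * x + beta[k]) * p_k(x) - gamma[k] * p_{k-1}(x)
    # with p_0 = 1 and p_{-1} = 0: one mat-vec per degree, O(1) vectors.
    p_prev = np.zeros_like(v)
    p_curr = v.copy()                       # p_0(A) v = v
    for k in range(len(alpha)):
        p_next = alpha[k] * (A @ p_curr) + beta[k] * p_curr - gamma[k] * p_prev
        p_prev, p_curr = p_curr, p_next
    return p_curr
```

For instance, the Chebyshev coefficients alpha = (1, 2, 2, ...), beta = 0, gamma = (0, 1, 1, ...) make this evaluate T_n(A)v; the two-interval least-squares polynomials are used the same way, only with different coefficients.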
PROUST: Knowledge-Based Program Understanding by W. Lewis Johnson (Book)
2 editions published in 1983 in English and held by 3 WorldCat member libraries worldwide
This paper describes a program called PROUST which does online analysis and understanding of Pascal programs written by novice programmers. PROUST takes as input a program and a non-algorithmic description of the program requirements, and finds the most likely mapping between the requirements and the code. This mapping is in essence a reconstruction of the design and implementation steps that the programmer went through in writing the program. A knowledge base of programming plans and strategies, together with common bugs associated with them, is used in constructing this mapping. Bugs are discovered in the process of relating plans to the code; PROUST can therefore give deep explanations of program bugs by relating the buggy code to its underlying intentions. (Author)
Algorithms for Computing the Sample Variance: Analysis and Recommendations by Tony F. Chan (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
The problem of computing the variance of a sample of N data points may be difficult for certain data sets, particularly when N is large and the variance is small. The authors present a survey of possible algorithms and their roundoff error bounds, including some new analysis for computations with shifted data. Experimental results confirm these bounds and illustrate the dangers of some algorithms. Specific recommendations are made as to which algorithm should be used in various contexts. (Author)
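The trade-offs the report analyzes are easy to reproduce. Below is a sketch of three standard algorithms (the function names are ours, and all three return the unbiased n-1 sample variance): the "textbook" one-pass formula cancels catastrophically when the mean dwarfs the variance, while the two-pass and updating (Welford-style) algorithms remain stable.

```python
def variance_two_pass(data):
    # Two-pass algorithm: compute the mean first, then sum squared
    # deviations from it. Numerically stable, but reads the data twice.
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)

def variance_textbook(data):
    # "Textbook" one-pass formula: (sum(x^2) - (sum x)^2 / n) / (n-1).
    # Subtracting two large, nearly equal quantities causes catastrophic
    # cancellation when the variance is small relative to the mean.
    n = len(data)
    s = s2 = 0.0
    for x in data:
        s += x
        s2 += x * x
    return (s2 - s * s / n) / (n - 1)

def variance_welford(data):
    # Welford-style updating algorithm: one pass, numerically stable.
    n, mean, m2 = 0, 0.0, 0.0
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / (n - 1)
```

On data like 1e8 + {4, 7, 13, 16} (true variance 30), the two-pass and updating algorithms return 30 to machine precision, while the textbook formula can lose most of its significant digits.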
Spectral Deferred Correction Methods for Ordinary Differential Equations by A. Dutt (Book)
2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide
We introduce a new class of methods for the Cauchy problem for ordinary differential equations (ODEs). We begin by converting the original ODE into the corresponding Picard equation and apply a deferred correction procedure in the integral formulation, driven by either the explicit or the implicit Euler marching scheme. The approach results in algorithms of essentially arbitrary order accuracy for both nonstiff and stiff problems; their performance is illustrated with several numerical examples. For nonstiff problems, the stability behavior of the obtained explicit schemes is very satisfactory, and algorithms with orders between 8 and 20 should be competitive with the best existing ones. In our preliminary experiments with stiff problems, a simple adaptive implementation of the method demonstrates performance comparable to that of a state-of-the-art extrapolation code (at least, at moderate to high precision). The deferred correction approach based on the Picard equation appears to be a promising candidate for further investigation.
Time Map Maintenance (Book)
1 edition published in 1983 in English and held by 2 WorldCat member libraries worldwide
This paper describes a mechanism for dealing with the representation of events and their effects occurring in and over time. The mechanism, which I refer to as a time map manager, is shown to be useful in problem solvers requiring an ability to reason about time and causality. In addition to describing the theory and its implementation, I will demonstrate a programming technique and related discipline based upon the use of data dependencies. This technique supports the design of complex control structures capable of recording the conditions under which information is stored and subsequently responding in highly directed ways when those conditions are changed. (Author)
A Projection Method for Partial Pole Assignment in Linear State Feedback by Y. Saad (Book)
1 edition published in 1986 in English and held by 2 WorldCat member libraries worldwide
A projection method is proposed for the partial pole placement in continuous time linear control systems. The procedure is of interest in the common situation where the system is very large and only a few of its poles must be assigned. It is based on computing an orthonormal basis of the left invariant subspace associated with the eigenvalues to be assigned and then solving a small inverse eigenvalue problem resulting from projecting the initial problem into that subspace. Also presented is an equivalent version of this method, which can be regarded as a variant of the Wielandt deflation techniques used in eigenvalue methods. (Author)
Preconditioned Iterative Methods for Nonselfadjoint or Indefinite Elliptic Boundary Value Problems by J. H. Bramble (Book)
1 edition published in 1984 in English and held by 2 WorldCat member libraries worldwide
The authors consider a Galerkin finite element approximation to a general linear elliptic boundary value problem which may be nonselfadjoint or indefinite. They show how to precondition the equations so that the resulting systems of linear algebraic equations lead to iteration procedures whose convergence rates are independent of the number of unknowns in the solution. (Author)
An Approximate Newton Method for Coupled Nonlinear Systems by Tony F. Chan (Book)
1 edition published in 1984 in English and held by 2 WorldCat member libraries worldwide
The author proposes an approximate Newton method for solving a coupled nonlinear system. The method involves applying the basic iteration S of a general solver for the equation G(u, t) = 0 with t fixed. It is therefore well-suited for problems for which such a solver already exists or can be implemented more efficiently than a solver for the coupled system. The author derives conditions on S under which the method is locally convergent. Basically, if S is sufficiently contractive for G, then convergence for the coupled system is guaranteed. Otherwise, it is shown how to construct from S a modified iteration for which convergence is assured. These results are applied to continuation methods where N represents a pseudo-arclength condition. He shows that under certain conditions the algorithm converges if S is convergent for G. (Author)
The automated crystal runtime system: a framework by Joel Saltz (Book)
1 edition published in 1988 in English and held by 2 WorldCat member libraries worldwide
There exists substantial data-level parallelism in scientific problems. The Crystal/ACRE (Automated Crystal Runtime Environment) runtime system is an attempt to obtain parallel implementations for scientific computations, particularly those where the data dependencies are manifest only at runtime, which can preclude compiler-based detection of certain types of parallelism. The automated system is structured as follows: an appropriate level of granularity is first selected for the computations; a directed acyclic graph representation of the program is generated, on which various aggregation techniques may be employed in order to generate efficient schedules; these schedules are then mapped onto the target machine. We describe some initial results from experiments conducted on the Intel Hypercube and the Encore Multimax that indicate the usefulness of our approach. Using the runtime system, it will be relatively easy to program different applications and study the performance implications of the various parameters. When the performance data is available, we would like to develop mathematical models that describe the relationships between the various important parameters in the system.
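The schedule-generation step can be illustrated with a greedy list scheduler for a task DAG. This is a generic sketch (task names and costs below are hypothetical), not the Crystal/ACRE aggregation machinery: a task is released when all of its predecessors finish, and released tasks are assigned to the earliest-free worker.

```python
import heapq
from collections import defaultdict

def list_schedule(tasks, deps, cost, nproc):
    # Greedy list scheduling of a task DAG onto nproc identical workers.
    # deps is a list of edges (a, b) meaning a must finish before b starts.
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for a, b in deps:
        succ[a].append(b)
        indeg[b] += 1
    release = defaultdict(float)          # earliest start allowed by deps
    ready = [(0.0, t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)                  # ordered by release time
    workers = [0.0] * nproc               # time each worker becomes free
    finish = {}
    while ready:
        rel, t = heapq.heappop(ready)
        w = min(range(nproc), key=lambda i: workers[i])
        start = max(workers[w], rel)
        finish[t] = start + cost[t]
        workers[w] = finish[t]
        for s in succ[t]:                 # release successors as deps clear
            release[s] = max(release[s], finish[t])
            indeg[s] -= 1
            if indeg[s] == 0:
                heapq.heappush(ready, (release[s], s))
    return finish
```

With tasks A, B, C, D, edges A→C and B→C, costs {A: 1, B: 2, C: 1, D: 3} and two workers, the schedule finishes C at time 3 and the whole DAG at time 4.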
Preconditioned Conjugate-Gradient Methods for Nonsymmetric Systems of Linear Equations by Howard C. Elman (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
In this paper, we present a class of iterative descent methods for solving large, sparse, nonsymmetric systems of linear equations whose coefficient matrices have positive-definite symmetric parts. Such problems commonly arise from the discretization of nonselfadjoint elliptic partial differential equations. The methods we consider are modelled after the conjugate gradient method. They require no estimation of parameters, and their rate of convergence appears to depend on the spectrum of A rather than that of A^T A. Their convergence can also be accelerated by preconditioning techniques.
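The simplest member of this family of descent methods is one-dimensional minimal-residual iteration, which converges whenever the symmetric part of A is positive definite. A sketch (not the paper's exact algorithms, and shown without preconditioning):

```python
import numpy as np

def minimal_residual(A, b, x0=None, tol=1e-10, maxit=500):
    # One-dimensional minimal-residual descent: at each step choose the
    # scalar alpha minimizing ||b - A(x + alpha * r)||, i.e.
    # alpha = <r, Ar> / <Ar, Ar>. Converges when (A + A.T)/2 is
    # positive definite, with no parameters to estimate.
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    for _ in range(maxit):
        Ar = A @ r
        alpha = (r @ Ar) / (Ar @ Ar)
        x += alpha * r
        r -= alpha * Ar
        if np.linalg.norm(r) < tol:
            break
    return x
```

The conjugate-gradient-like methods of the report improve on this by keeping search directions from earlier steps; preconditioning amounts to running the same iteration on a transformed system.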
Singular Value Computations with Systolic Arrays (Book)
1 edition published in 1983 in English and held by 2 WorldCat member libraries worldwide
Systolic arrays are constructed for bandwidth reduction and singular value decomposition of m x n matrices, assuming without loss of generality that m <= n. The underlying algorithms are unconditionally stable. Since input and output occur by diagonals, as in previous designs, arrays can be directly appended to further reduce the computation time. Consequently, the designs will be most efficient for matrices with a fairly small and dense band.
Local Uniform Mesh Refinement with Moving Grids by Yale University (Book)
1 edition published in 1984 in English and held by 2 WorldCat member libraries worldwide
Local Uniform Mesh Refinement (LUMR) is a powerful technique for solving hyperbolic partial differential equations. However, many problems contain regions where numerical dispersion is very large, such as steep fronts. In these regions, mesh refinement is not very efficient. A better approach in these regions is to locally transform the coordinate system to move with the front. This document shows how to combine these two approaches in a way which maintains the advantages of LUMR and the effectiveness of moving grids. Experiments with 2D scalar problems are presented. (Author)
A Hybrid Chebyshev Krylov Subspace Algorithm for Solving Nonsymmetric Systems of Linear Equations (Book)
1 edition published in 1984 in English and held by 2 WorldCat member libraries worldwide
This document presents an iterative method for solving large sparse nonsymmetric linear systems of equations that enhances Manteuffel's adaptive Chebyshev method with a conjugate gradient-like method. The new method replaces the modified power method for computing needed eigenvalue estimates with Arnoldi's method, which can be used to simultaneously compute eigenvalues and improve the approximate solution. Convergence analysis and numerical experiments suggest that the method is more efficient than the original adaptive Chebyshev algorithm. (Author)
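The Arnoldi process at the heart of such a hybrid method is compact enough to sketch: it builds an orthonormal Krylov basis V and a small upper-Hessenberg matrix H, and the eigenvalues of H (the Ritz values) supply the spectral estimates a Chebyshev iteration needs. A minimal sketch, not the report's implementation:

```python
import numpy as np

def arnoldi(A, v0, k):
    # k steps of the Arnoldi process with modified Gram-Schmidt.
    # Returns an orthonormal Krylov basis V and the Hessenberg matrix
    # H = V* A V, whose eigenvalues approximate eigenvalues of A.
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):               # orthogonalize against basis
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:              # invariant subspace: stop early
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H[:k, :k]
```

In the hybrid algorithm, a few Arnoldi steps both refine the ellipse enclosing the spectrum (via the Ritz values) and contribute a correction to the approximate solution, replacing the modified power method.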
Quarterly Progress Report on Contract N00014-91-J-1577 (Yale University) (Book)
7 editions published between 1991 and 1993 in English and held by 1 WorldCat member library worldwide
During this period work continued along several fronts, all related to planning and perception. Prof. Drew McDermott and Michael Beetz, a graduate student, focused on transformational reactive plans, and especially the problem of inserting declarative goals into reactive plans. They were working on a paper summarizing their results, to be submitted to a conference. McDermott and Sean Engelson, a graduate student, worked on experimental testing of algorithms for map building in a mobile robot. The results are summarized in a paper submitted to the IEEE Robotics and Automation Conference. Prof. Gregory Hager implemented a first-generation algorithm for computing whether two or more objects could be placed together in a confined space. The algorithm is correct and complete for a class of unstructured objects, and maintains correctness for unstructured objects. It has been tested in simulation and on contours computed from real images. The same idea is extendable to many more sensor-based decision-making tasks. He has also been working on fitting and making decisions about composite objects using the same constraint-based ideas. We have also managed to parallelize the algorithm using Linda. At the same time, Hager's group has implemented two visual tracking systems. The first is a feature-based tracker that follows high-contrast boundaries. The second uses Michael Black's robust Horn and Schunck optic flow method to compute the motion of a small image patch.
Mathematical Methods for the Implementation of Neural Networks (Book)
2 editions published in 1996 in English and held by 1 WorldCat member library worldwide
We present a novel optimizing network architecture with applications in vision, learning, pattern recognition and combinatorial optimization. This architecture is constructed by combining the following techniques: (1) deterministic annealing, (2) self-amplification, (3) algebraic transformations, (4) clocked objectives, and (5) softassign. Deterministic annealing in conjunction with self-amplification avoids poor local minima and ensures that a vertex of the hypercube is reached. Algebraic transformations and clocked objectives help partition the relaxation into distinct phases. The problems considered have doubly stochastic matrix constraints or minor variations thereof. We introduce a new technique, softassign, which is used to satisfy this constraint. Experimental results on different problems are presented and discussed.
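The softassign step is essentially exponentiation of a benefit matrix followed by alternating row and column normalization (Sinkhorn iterations), which drives the matrix toward the doubly stochastic constraint set. A minimal sketch; the parameter names are ours, and beta plays the role of the inverse annealing temperature:

```python
import numpy as np

def softassign(M, beta=10.0, iters=100):
    # Exponentiate the benefit matrix (beta = inverse temperature in the
    # deterministic-annealing schedule), then alternately normalize rows
    # and columns so that both sums approach 1 (a doubly stochastic
    # matrix). Subtracting M.max() prevents overflow in exp.
    A = np.exp(beta * (M - M.max()))
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)    # rows sum to 1
        A /= A.sum(axis=0, keepdims=True)    # columns sum to 1
    return A
```

As beta grows during annealing, the doubly stochastic matrix sharpens toward a permutation matrix, which is how the relaxation reaches a vertex of the hypercube.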
Diagonal Forms of Translation Operators for the Helmholtz Equation in Three Dimensions
2 editions published in 1992 in English and held by 0 WorldCat member libraries worldwide
Diagonal forms are constructed for the translation operators for the Helmholtz equation in three dimensions. While the operators themselves have a fairly complicated structure (described somewhat incompletely by the classical addition theorems for the Bessel functions), their diagonal forms turn out to be quite simple. These diagonal forms are realized as generalized integrals, possess straightforward physical interpretations, and admit stable numerical implementation. This paper uses the obtained analytical apparatus to construct an algorithm for the rapid application to arbitrary vectors of matrices resulting from the discretization of integral equations of the potential theory for the Helmholtz equation in three dimensions. It is an extension to the three-dimensional case of earlier results in which a similar apparatus was developed for the two-dimensional case.
New Neural Algorithms for Self-Organized Learning
2 editions published in 1991 in English and held by 0 WorldCat member libraries worldwide
This interim report describes work completed from December 1988 to November 1991. The original purpose of the research program funded by this grant was to study self-organized systems which adapt and learn. The originally proposed research fell into two main categories: (1) Biological Models of Self-Organization; and (2) New Self-Organized Learning Systems. During the period for which this grant was funded, significant progress was made in both areas. In addition, some new areas related to neural network learning are being explored as an outgrowth of the original proposal. These include: (1) Model Selection Techniques; and (2) Estimating Generalization Performance.
Knowledge-Based Planning(
)
2 editions published between 1992 and 1993 in English and held by 0 WorldCat member libraries worldwide
The goal of our project is to study planning for autonomous agents with imperfect sensors in a dynamic world. Such agents must confront several problems: (1) how to synchronize plan execution with plan refinement; (2) how to generate reasonable plans quickly for complex goals, and improve them later; (3) how to trade off sensor-processing time against the quality of information; and (4) how to learn the structure of the environment as plan execution proceeds
Final report for contract N00014-91-J-1577 (Yale University)(
)
4 editions published in 1994 in English and held by 0 WorldCat member libraries worldwide
During this period, we made substantial progress on visual tracking for our mobile robotic system, and upgraded our tracking system framework. We have begun construction of a navigation system for our mobile robot based entirely on visual tracking. During this period, we developed SSD-based tracking systems, and used those systems to demonstrate robot motion along a predefined path. This demonstration included automated selection of features to track and tracking of features in real time as the robot moved. We are currently working on extending the system capabilities, optimizing the tracking functions, and "robustifying" the tracking methods. We have extended our tracking system framework in a number of ways. Earlier versions of the tracking system required tracking networks to be strictly hierarchical. This made it difficult to share features among different system components requiring visual feedback. The new system eases this hierarchical requirement, making it possible to have more fully interconnected networks. We have also developed a typing system for trackers, making it possible to define generic geometric and logical constructions independent of the implementation of the tracking components. These modifications have substantially simplified the construction of visual servoing systems. We continue to build a library of plans and transformations for our work on sensor-guided planning. The current work focuses on getting the necessary body of planning knowledge to carry out our main experiments. For this purpose we are implementing failure models for additional kinds of execution failures and plan revision rules indexed by these failures
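Sum-of-squared-differences (SSD) tracking, mentioned in the report above, is a standard template-matching technique. As a minimal sketch (not the Yale system's actual implementation; the function and parameter names here are illustrative), one tracking step searches a small window around a feature's previous position for the patch that best matches a stored template:

```python
import numpy as np

def ssd_track(frame, template, prev_rc, search_radius=8):
    """One SSD tracking step: find the (row, col) of the patch in `frame`
    near `prev_rc` that minimizes the sum of squared differences with
    `template`. Both arrays are 2-D grayscale images."""
    th, tw = template.shape
    r0, c0 = prev_rc
    best_score, best_rc = np.inf, prev_rc
    for r in range(max(0, r0 - search_radius),
                   min(frame.shape[0] - th, r0 + search_radius) + 1):
        for c in range(max(0, c0 - search_radius),
                       min(frame.shape[1] - tw, c0 + search_radius) + 1):
            patch = frame[r:r + th, c:c + tw].astype(float)
            score = float(np.sum((patch - template) ** 2))
            if score < best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score

# Toy usage: a bright 4x4 feature found after a (2, 2) shift in the guess.
frame = np.zeros((32, 32))
frame[10:14, 12:16] = 1.0
template = frame[10:14, 12:16].copy()
pos, score = ssd_track(frame, template, prev_rc=(8, 10))
# pos == (10, 12), score == 0.0
```

A real-time system would replace this brute-force scan with a coarse-to-fine pyramid or a gradient-based (Lucas-Kanade-style) search.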