Gay, David M.
Overview
Works:  37 works in 130 publications in 1 language and 599 library holdings 

Genres:  Handbooks and manuals 
Roles:  Author 
Classifications:  QA402.5, 519.702855133 
Most widely held works by David M Gay
AMPL : a modeling language for mathematical programming by Robert Fourer (Book)
60 editions published between 1993 and 2009 in English and held by 406 WorldCat member libraries worldwide
An adaptive nonlinear least-squares algorithm by J. E. Dennis (Book)
11 editions published between 1977 and 1980 in English and Undetermined and held by 12 WorldCat member libraries worldwide
NL2SOL is a modular program for solving the nonlinear least-squares problem that incorporates a number of novel features. It maintains a secant approximation S to the second-order part of the least-squares Hessian and adaptively decides when to use this approximation. We have found it very helpful to "size" S before updating it, something which looks much akin to Oren-Luenberger scaling. Rather than resorting to line searches or Levenberg-Marquardt modifications, we use the double-dogleg scheme of Dennis and Mei together with a special module for assessing the quality of the step thus computed. We discuss these and other ideas behind NL2SOL and briefly describe its evolution and current implementation.
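As a rough illustration of the double-dogleg idea the abstract mentions, the sketch below computes a trust-region step for a quadratic model min gᵀp + ½pᵀHp subject to ‖p‖ ≤ Δ, with H positive definite. The 0.8/0.2 bias blend and all names are illustrative assumptions, not NL2SOL's actual implementation:

```python
import numpy as np

def double_dogleg_step(g, H, delta, eta_frac=0.8):
    """Double-dogleg trust-region step (sketch, H assumed positive definite).

    Path: 0 -> Cauchy point -> eta * Newton point -> Newton point,
    truncated where it crosses the trust-region boundary ||p|| = delta.
    """
    p_n = -np.linalg.solve(H, g)                 # full Newton step
    if np.linalg.norm(p_n) <= delta:
        return p_n
    gHg = g @ H @ g
    p_c = -(g @ g / gHg) * g                     # Cauchy (steepest-descent) point
    if np.linalg.norm(p_c) >= delta:
        return -(delta / np.linalg.norm(g)) * g  # scaled steepest descent
    # bias the Newton point toward the Cauchy point; note g @ (-p_n) = g.H^{-1}.g
    gamma = (g @ g) ** 2 / (gHg * (g @ -p_n))
    eta = eta_frac * gamma + (1.0 - eta_frac)    # illustrative blend, eta in (0, 1]
    p_e = eta * p_n
    if np.linalg.norm(p_e) <= delta:
        return (delta / np.linalg.norm(p_n)) * p_n
    # intersect the segment p_c -> p_e with the boundary ||p|| = delta
    d = p_e - p_c
    a, b, c = d @ d, 2.0 * (p_c @ d), p_c @ p_c - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_c + tau * d

# demo on a small quadratic model
g = np.array([1.0, 1.0])
H = np.diag([1.0, 10.0])
step_small = double_dogleg_step(g, H, 0.05)   # boundary: scaled steepest descent
step_mid   = double_dogleg_step(g, H, 0.3)    # boundary: dogleg interpolation
step_big   = double_dogleg_step(g, H, 2.0)    # interior: full Newton step
```

Constrained steps land exactly on the trust-region boundary; only when the Newton step fits inside the region is it returned unchanged.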
AMPL, a modeling language for mathematical programming : using the AMPL student edition under MS-DOS by Robert Fourer (Book)
6 editions published in 1993 in English and held by 10 WorldCat member libraries worldwide
On solving robust and generalized linear regression problems by David M Gay (Book)
2 editions published in 1979 in English and held by 5 WorldCat member libraries worldwide
Brown's method and some generalizations, with applications to minimization problems by David M Gay (Book)
4 editions published between 1975 and 1985 in English and held by 5 WorldCat member libraries worldwide
Implementing Brown's method by David M Gay (Book)
3 editions published in 1975 in English and Undetermined and held by 4 WorldCat member libraries worldwide
Computing optimal locally constrained steps by David M Gay (Book)
2 editions published in 1979 in English and held by 4 WorldCat member libraries worldwide
AMPL : a modeling language for mathematical programming by Robert Fourer (Book)
3 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
AMPL : a mathematical programming language by Robert Fourer (Book)
2 editions published between 1987 and 1989 in English and held by 3 WorldCat member libraries worldwide
AMPL : a modeling language for mathematical programming ; with AMPL Plus 1.6 Student Edition for Microsoft Windows by Robert Fourer (Book)
3 editions published in 1997 in English and held by 3 WorldCat member libraries worldwide
On convergence testing in model/trust-region algorithms for unconstrained optimization by David M Gay (Book)
2 editions published in 1982 in Undetermined and English and held by 3 WorldCat member libraries worldwide
On Scolnik's proposed polynomial-time linear programming algorithm by David M Gay (Book)
2 editions published in 1973 in Undetermined and English and held by 2 WorldCat member libraries worldwide
AMPL PC student Version 2
1 edition published in 1993 in English and held by 2 WorldCat member libraries worldwide
Solving systems of nonlinear equations by Broyden's method with projected updates by David M Gay (Book)
3 editions published in 1977 in English and held by 1 WorldCat member library worldwide
We introduce a modification of Broyden's method for finding a zero of n nonlinear equations in n unknowns when analytic derivatives are not available. The method retains the local Q-superlinear convergence of Broyden's method and has the additional property that if any or all of the equations are linear, it locates a zero of these equations in n+1 or fewer iterations. Limited computational experience suggests that our modification often improves upon Broyden's method.
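For context, here is a minimal sketch of the plain Broyden ("good") update — not the projected-update variant the abstract describes — with a forward-difference initial Jacobian, so no analytic derivatives are needed. The test problem and all names are illustrative:

```python
import numpy as np

def broyden_solve(f, x0, tol=1e-10, max_iter=50, fd_eps=1e-7):
    """Broyden's 'good' method for f(x) = 0 (a sketch, derivative-free).

    B starts as a forward-difference Jacobian estimate and is then
    corrected by rank-one secant updates instead of being recomputed.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    B = np.empty((n, n))
    for j in range(n):                       # finite-difference initial Jacobian
        e = np.zeros(n)
        e[j] = fd_eps
        B[:, j] = (f(x + e) - fx) / fd_eps
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)          # quasi-Newton step
        x_new = x + s
        f_new = f(x_new)
        y = f_new - fx
        B += np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-one secant update
        x, fx = x_new, f_new
    return x, fx

# toy system: x0^2 + x1^2 = 2, x0 = x1, with a root at (1, 1)
root, residual = broyden_solve(
    lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]]),
    [1.3, 0.7])
```

Each iteration costs one function evaluation plus a linear solve; the secant condition B_{k+1} s = y is what the rank-one update enforces.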
Some convergence properties of Broyden's method by David M Gay (Book)
3 editions published in 1977 in English and held by 1 WorldCat member library worldwide
In 1965 Broyden introduced a family of algorithms called (rank-one) quasi-Newton methods for iteratively solving systems of nonlinear equations. We show that when any member of this family is applied to an n x n nonsingular system of linear equations and direct-prediction steps are taken every second iteration, then the solution is found in at most 2n steps. Specializing to the particular family member known as Broyden's (good) method, we use this result to show that Broyden's method enjoys local 2n-step Q-quadratic convergence on nonlinear problems.
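The at-most-2n termination result can be observed numerically: plain Broyden's method takes a direct-prediction step at every iteration (hence at every second one), so running it on a small nonsingular linear system should drive the residual to roundoff within 2n steps. A sketch, with the matrix and starting point chosen arbitrarily:

```python
import numpy as np

# Broyden's (good) method on a 3x3 linear system Ax = b, starting from B = I.
# In exact arithmetic the iterate x_{2n} solves the system; here we just
# watch the residual over 2n = 6 steps and expect roundoff-level leftovers.
A = np.array([[3.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
n = 3
x = np.zeros(n)
B = np.eye(n)
for _ in range(2 * n):
    fx = A @ x - b
    if np.linalg.norm(fx) < 1e-12:           # already solved
        break
    s = np.linalg.solve(B, -fx)              # direct-prediction step
    x = x + s
    y = (A @ x - b) - fx                     # equals A @ s for this linear f
    B += np.outer(y - B @ s, s) / (s @ s)    # Broyden rank-one update
final_residual = np.linalg.norm(A @ x - b)
```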
Representing Symmetric Rank Two Updates by David M Gay
1 edition published in 1976 in English and held by 0 WorldCat member libraries worldwide
"Various quasi-Newton methods periodically add a symmetric "correction" matrix of rank at most 2 to a matrix approximating some quantity A of interest (such as the Hessian of an objective function). In this paper we examine several ways to express a symmetric rank-2 matrix Δ as the sum of rank-1 matrices. We show that it is easy to compute rank-1 matrices Δ₁ and Δ₂ such that Δ = Δ₁ + Δ₂ and ‖Δ₁‖ + ‖Δ₂‖ is minimized, where ‖·‖ is any inner product norm. Such a representation recommends itself for use in those computer programs that maintain A explicitly, since it should reduce cancellation errors and/or improve efficiency over other representations. In the common case where Δ is indefinite, a choice of the form Δ₁ = Δ₂ᵀ = xyᵀ appears best. This case occurs for rank-2 quasi-Newton updates Δ exactly when Δ may be obtained by symmetrizing some rank-1 update; such popular updates as the DFP, BFGS, PSB, and Davidon's new optimally conditioned update fall into this category" --NBER website
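The Δ₁ = Δ₂ᵀ = xyᵀ representation can be checked on a toy example. The vectors below are arbitrary, and the norm-minimizing choice the paper computes is not reproduced here; the sketch only verifies that symmetrizing a rank-1 update gives a symmetric, indefinite matrix of rank 2:

```python
import numpy as np

# Symmetrizing the rank-1 update x y^T gives the symmetric rank-2 matrix
# Delta = x y^T + y x^T, i.e. Delta1 = x y^T and Delta2 = Delta1^T.
x = np.array([1.0, -2.0, 0.5])   # arbitrary, linearly independent vectors
y = np.array([0.3, 1.0, 2.0])
delta1 = np.outer(x, y)
delta2 = delta1.T                # = y x^T
delta = delta1 + delta2          # symmetric, rank <= 2
eigs = np.linalg.eigvalsh(delta)  # nonzero eigenvalues are x.y +/- |x||y|
```

For independent x and y the two nonzero eigenvalues x·y ± ‖x‖‖y‖ have opposite signs by the Cauchy-Schwarz inequality, which is why this representation belongs to the indefinite case.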
Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 developers manual
1 edition published in 2006 in English and held by 0 WorldCat member libraries worldwide
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 reference manual
1 edition published in 2006 in English and held by 0 WorldCat member libraries worldwide
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 user's manual
1 edition published in 2006 in English and held by 0 WorldCat member libraries worldwide
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient- and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
On Modifying Singular Values to Solve Possibly Singular Systems of Nonlinear Equations
2 editions published in 1976 in English and held by 0 WorldCat member libraries worldwide
We show that if a certain nondegeneracy assumption holds, it is possible to guarantee the existence of a solution to a system of nonlinear equations f(x) = 0 whose Jacobian matrix J(x) exists but may be singular. The main idea is to modify small singular values of J(x) in such a way that the modified Jacobian matrix J̄(x) has a continuous pseudoinverse J̄⁺(x) and that a solution x* of f(x) = 0 may be found by determining an asymptote of the solution to the initial value problem x(0) = x₀, ẋ(t) = −J̄⁺(x)f(x). We briefly discuss practical (algorithmic) implications of this result. Although the nondegeneracy assumption may fail for many systems of interest (indeed, if the assumption holds and J(x*) is nonsingular, then x* is unique), algorithms using J̄⁺(x) may enjoy a larger region of convergence than those that require (an approximation to) J⁻¹(x).
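A sketch of the singular-value modification, assuming the simple clamp σ̄ᵢ = max(σᵢ, ε) and integrating the flow ẋ = −J̄⁺(x)f(x) with plain Euler steps on a toy problem. The test function, step size, and tolerance are illustrative choices, not from the paper:

```python
import numpy as np

def modified_pinv(J, eps=1e-3):
    """Pseudoinverse of a 'modified' Jacobian whose singular values are
    clamped below by eps, so the result stays bounded (2-norm <= 1/eps)
    and varies continuously even where J itself is singular."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ np.diag(1.0 / np.maximum(s, eps)) @ U.T

# toy system: f(x) = (x0^2 - 1, x1 - 2); its Jacobian diag(2*x0, 1)
# is singular on the line x0 = 0, but the modified pseudoinverse is
# defined everywhere.
def f(x):
    return np.array([x[0] ** 2 - 1.0, x[1] - 2.0])

def jac(x):
    return np.array([[2.0 * x[0], 0.0],
                     [0.0, 1.0]])

x = np.array([0.5, 0.0])
for _ in range(200):
    x = x - 0.5 * modified_pinv(jac(x)) @ f(x)   # Euler step along the flow
final_norm = np.linalg.norm(f(x))
singular_case = modified_pinv(np.zeros((2, 2)))  # finite even for J = 0
```

Where J is well-conditioned the clamp is inactive and the step reduces to a damped Newton step; the point of the modification is that nothing blows up as the iterates pass near the singular set.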
Associated Subjects
Algorithms; APL (Computer program language); Differential equations, Nonlinear--Numerical solutions; Equations--Numerical solutions; Least squares; Least squares--Computer programs; Linear programming; Mathematical models--Computer programs; Mathematical optimization; Mathematical optimization--Computer programs; Programming (Mathematics); Programming (Mathematics)--Computer programs; Programming languages (Electronic computers); Regression analysis; Robust statistics