Gay, David M.
Overview
Works:  37 works in 155 publications in 1 language and 669 library holdings 

Genres:  Software, Handbooks and manuals 
Roles:  Author 
Classifications:  QA402.5, 519.702855133 
Most widely held works by David M. Gay
AMPL : a modeling language for mathematical programming by Robert Fourer (Book)
52 editions published between 1993 and 2009 in English and held by 388 WorldCat member libraries worldwide
AMPL : a modeling language for mathematical programming : with AMPL Plus student edition for Microsoft Windows by Robert Fourer (Book)
12 editions published between 1993 and 1997 in English and held by 57 WorldCat member libraries worldwide
An adaptive nonlinear least-squares algorithm by J. E. Dennis (Book)
15 editions published between 1977 and 1980 in English and Undetermined and held by 16 WorldCat member libraries worldwide
NL2SOL is a modular program for solving nonlinear least-squares problems that incorporates a number of novel features. It maintains a secant approximation S to the second-order part of the least-squares Hessian and adaptively decides when to use this approximation. S is 'sized' before updating, something which is similar to Oren-Luenberger scaling. The step choice algorithm is based on minimizing a local quadratic model of the sum of squares function constrained to an elliptical trust region centered at the current approximate minimizer. This is accomplished using ideas discussed by Moré, together with a special module for assessing the quality of the step thus computed. These and other ideas behind NL2SOL are discussed, and its evolution and current implementation are also described briefly. (Author)
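The abstract above describes a trust-region approach to nonlinear least squares. As a rough illustration of the idea (not NL2SOL itself, which also maintains the secant approximation S to the second-order term; the function names and test problem here are invented for the sketch), a minimal damped Gauss-Newton iteration with a simple trust-region radius update might look like:

```python
import numpy as np

def trust_region_gauss_newton(r, J, x0, delta=1.0, tol=1e-10, max_iter=100):
    """Minimize 0.5*||r(x)||^2 with trust-region Gauss-Newton steps.

    Ignores the second-order term S that NL2SOL maintains, so this is only
    the small-residual special case of the approach described above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx                      # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < tol:
            break
        # Gauss-Newton step, damped (Levenberg style) until it fits the region
        lam = 0.0
        while True:
            step = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -g)
            if np.linalg.norm(step) <= delta:
                break
            lam = max(2 * lam, 1e-4)
        trial = x + step
        # accept if the actual sum of squares decreased; else shrink the region
        if np.sum(r(trial)**2) < np.sum(rx**2):
            x, delta = trial, delta * 2.0
        else:
            delta *= 0.25
    return x

# Toy problem: fit y = exp(a*t) to data generated with a = 0.5
t = np.linspace(0, 1, 20)
y = np.exp(0.5 * t)
r = lambda x: np.exp(x[0] * t) - y
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a = trust_region_gauss_newton(r, J, np.array([0.0]))
```

Production codes such as NL2SOL use more careful model/region updates and step assessment than this sketch.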
AMPL : a modeling language for mathematical programming by Robert Fourer (Book)
6 editions published in 1993 in English and held by 8 WorldCat member libraries worldwide
On solving robust and generalized linear regression problems by David M. Gay (Book)
5 editions published between 1979 and 1980 in English and held by 7 WorldCat member libraries worldwide
Many researchers employ mathematical models. Most models contain parameters, which may be chosen to make the model fit the available data as well as possible (in a sense that depends on the model). In this paper we consider the problem of choosing the parameters for a common class of models in which the desired parameter vector minimizes an (unconstrained) objective function. We briefly give some examples of such problems, then discuss ways to exploit the common structure that these problems share. This leads us to discuss strategies for solving general unconstrained minimization problems and to point out the advantages of using a so-called 'model/trust-region approach,' wherein the change made in the current parameter estimate is chosen so as to approximately minimize a local model of the objective function on an estimate of the region about the current iterate where this local model is reliable. For problems in which the residual vector r(x) is a nonlinear function of x, we recommend generalizations of some techniques that have proven worthwhile in nonlinear least-squares problems in which the optimal residual vector r(x*) may be either large or small.
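For context on the robust-regression side of the abstract above, a standard technique (iteratively reweighted least squares with Huber weights, which is not necessarily the trust-region method of this report; the function name and data are invented for the sketch) can be written as:

```python
import numpy as np

def huber_irls(A, b, k=1.345, iters=50):
    """Robust linear regression via iteratively reweighted least squares.

    Uses Huber weights with a median-absolute-deviation scale estimate;
    gross outliers are progressively downweighted.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary LS start
    for _ in range(iters):
        r = b - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x

# Line y = 2x + 1 with one gross outlier
xs = np.arange(10, dtype=float)
ys = 2 * xs + 1
ys[3] = 100.0                                       # outlier
A = np.column_stack([xs, np.ones_like(xs)])
coef = huber_irls(A, ys)
```

Ordinary least squares would be pulled far off by the outlier; the reweighting recovers the underlying line.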
Computing optimal locally constrained steps by David M. Gay (Book)
6 editions published between 1979 and 1980 in English and held by 7 WorldCat member libraries worldwide
In seeking to solve an unconstrained minimization problem, one often computes steps based on a quadratic approximation q to the objective function. A reasonable way to choose such steps is by minimizing q constrained to a neighborhood of the current iterate. This paper considers ellipsoidal neighborhoods and presents a new way to handle certain computational details when the Hessian of q is indefinite, paying particular attention to a special case which may then arise. The proposed step-computing algorithm provides an attractive way to deal with negative curvature. Implementations of this algorithm have proved very satisfactory in the nonlinear least-squares solver NL2SOL. (Author)
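The locally constrained step problem above is: minimize q(s) = g@s + 0.5*s@H@s subject to ||s|| <= delta. A small dense sketch (eigendecomposition plus bisection on the multiplier; the paper works with matrix factorizations rather than eigendecompositions, and the degenerate 'hard case' it pays particular attention to is not handled here) is:

```python
import numpy as np

def constrained_step(g, H, delta):
    """Minimize g@s + 0.5*s@H@s subject to ||s|| <= delta (dense sketch).

    Solves (H + mu*I) s = -g, with mu chosen by bisection so the step
    lies on the boundary when the unconstrained minimizer does not fit.
    """
    w, V = np.linalg.eigh(H)           # eigenvalues ascending
    gt = V.T @ g
    # Unconstrained (Newton) step works when H is positive definite
    # and the step fits inside the ball.
    if w[0] > 0:
        s = V @ (-gt / w)
        if np.linalg.norm(s) <= delta:
            return s
    # Otherwise solve ||s(mu)|| = delta for mu > max(0, -lambda_min).
    norm = lambda mu: np.linalg.norm(gt / (w + mu))
    lo = max(0.0, -w[0]) + 1e-14
    hi = lo + 1.0
    while norm(hi) > delta:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm(mid) > delta:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return V @ (-gt / (w + mu))

# Indefinite example: H has a negative eigenvalue, so the optimal step
# must lie on the boundary of the trust region.
H = np.diag([2.0, -1.0])
g = np.array([1.0, 0.5])
s = constrained_step(g, H, delta=1.0)
```

With negative curvature present, the step lands on the boundary and exploits the negative eigendirection, which is the behavior the abstract highlights.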
Brown's method and some generalizations, with applications to minimization problems by David M. Gay (Book)
5 editions published between 1975 and 1985 in English and held by 7 WorldCat member libraries worldwide
Newton's method attempts to find a zero of $f \in C^{1}(\mathbb{R}^{n})$ by taking a step which is intended to make all components of $f$ vanish at once. In this respect Newton's method processes the components of $f$ in parallel. In contrast, Brown's method and the generalizations thereof considered in this thesis process the components of $f$ serially, one after another. One major iteration of these methods may be described as follows: given the starting point (i.e. current major iterate) $y_{0}$, linearize the first component $f_{1}$ of $f$ at $y_{0}$ and find a point $y_{1}$ in the $n-1$ dimensional hyperplane $H_{1}$ on which this linearization vanishes; in general, having found a point $y_{k}$ $(1 \leq k < n)$ in the $n-k$ dimensional hyperplane $H_{k}$ on which the heretofore constructed linearizations vanish, restrict $f_{k+1}$ to $H_{k}$, linearize this restriction at $y_{k}$, and find a point $y_{k+1}$ in the $n-(k+1)$ dimensional hyperplane $H_{k+1}$ on which this linearization vanishes; stop when $y_{n}$ has been found and let $y_{n}$ be the next major iterate. When $f$ is a general nonlinear function and finite differences are used to construct the linearizations, this approach must do work equivalent to approximating only about half the components of $f'$ and thus requires only about half as many function evaluations per major iteration as the corresponding finite difference Newton's method, while still enjoying the same rate of local convergence.
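The serial elimination described above can be sketched schematically. This illustration uses forward differences, least-norm points, and an explicit null-space basis for each hyperplane, which differs in detail from Brown's actual elimination of variables; the function names and test system are invented:

```python
import numpy as np

def brown_step(f_components, y0, h=1e-7):
    """One major iteration of a Brown-like serial method (sketch only).

    Processes the component functions one after another: each component is
    linearized on the current affine subspace via forward differences, a
    point on its zero hyperplane is taken, and the subspace is reduced.
    """
    n = len(y0)
    c = np.array(y0, dtype=float)      # point on the current affine subspace
    B = np.eye(n)                      # columns span the current subspace
    t = np.zeros(n)
    for k, fk in enumerate(f_components):
        g = lambda t: fk(c + B @ t)
        g0 = g(t)
        m = B.shape[1]
        grad = np.array([(g(t + h * np.eye(m)[j]) - g0) / h for j in range(m)])
        # Least-norm point on the hyperplane g0 + grad@(t_new - t) = 0
        # (assumes grad is nonzero; a degenerate linearization breaks this).
        t_new = t - g0 * grad / (grad @ grad)
        if k == n - 1:
            return c + B @ t_new
        # Restrict to that hyperplane: null space of grad within the subspace.
        c = c + B @ t_new
        Q = np.linalg.svd(grad.reshape(1, -1))[2][1:].T   # null-space basis
        B = B @ Q
        t = np.zeros(m - 1)
    return c

# Solve x^2 + y^2 = 5, x*y = 2 (one solution is (1, 2))
fs = [lambda v: v[0]**2 + v[1]**2 - 5, lambda v: v[0]*v[1] - 2]
y = np.array([1.0, 2.5])
for _ in range(30):
    y = brown_step(fs, y)
```

As the abstract notes, each major iteration only ever differences one component on a shrinking subspace, which is where the factor-of-two saving in function evaluations comes from.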
Implementing Brown's method by David M. Gay (Book)
4 editions published in 1975 in English and Undetermined and held by 5 WorldCat member libraries worldwide
On convergence testing in model/trust-region algorithms for unconstrained optimization by David M. Gay (Book)
4 editions published in 1982 in English and Undetermined and held by 4 WorldCat member libraries worldwide
AMPL: A modeling language for mathematical programming. Using the AMPL student edition under MS-DOS by Robert Fourer (Book)
2 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
AMPL: A modeling language for mathematical programming by Robert Fourer (Book)
2 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
More remarks on Scolnik's approach to linear programming by David M. Gay (Book)
3 editions published in 1974 in English and held by 3 WorldCat member libraries worldwide
This report briefly discusses certain points in Hugo Scolnik's letter (Spring 1974) to the SIGMAP membership, then examines whether superfluous constraints are responsible for the difficulties in Scolnik's approach to linear programming, and finally discusses possible starting heuristics, based on Scolnik's approach, for the simplex algorithm
Pictures of Karmarkar's linear programming algorithm by AT&T Bell Laboratories (Book)
3 editions published in 1987 in English and held by 3 WorldCat member libraries worldwide
Some convergence properties of Broyden's method by David M. Gay (Book)
4 editions published in 1977 in English and Undetermined and held by 2 WorldCat member libraries worldwide
In 1965 Broyden introduced a family of algorithms called (rank-one) quasi-Newton methods for iteratively solving systems of nonlinear equations. We show that when any member of this family is applied to an n x n nonsingular system of linear equations and direct-prediction steps are taken every second iteration, then the solution is found in at most 2n steps. Specializing to the particular family member known as Broyden's (good) method, we use this result to show that Broyden's method enjoys local 2n-step Q-quadratic convergence on nonlinear problems.
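Broyden's (good) method itself can be sketched compactly. This version seeds the Jacobian approximation with a forward-difference Jacobian (a common practical choice) and takes plain quasi-Newton steps; it does not take the direct-prediction steps that the 2n-step result above concerns, and the test system is invented:

```python
import numpy as np

def fd_jacobian(f, x, h=1e-7):
    """Forward-difference Jacobian of f at x."""
    fx = f(x)
    n = x.size
    return np.column_stack([(f(x + h * np.eye(n)[j]) - fx) / h
                            for j in range(n)])

def broyden(f, x0, iters=50, tol=1e-12):
    """Broyden's 'good' rank-one quasi-Newton method for f(x) = 0.

    Maintains an approximation B to the Jacobian and applies the rank-one
    correction that makes B_new satisfy the secant equation B_new @ s = y.
    """
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(f, x)              # initial Jacobian approximation
    fx = f(x)
    for _ in range(iters):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)    # quasi-Newton step
        x_new = x + s
        f_new = f(x_new)
        y = f_new - fx
        B = B + np.outer(y - B @ s, s) / (s @ s)   # Broyden update
        x, fx = x_new, f_new
    return x

# Same toy system: x^2 + y^2 = 5, x*y = 2, root near (1, 2)
f = lambda v: np.array([v[0]**2 + v[1]**2 - 5, v[0]*v[1] - 2])
root = broyden(f, np.array([1.0, 2.5]))
```

After the initial differencing, each iteration costs only one function evaluation, which is the method's main appeal over finite-difference Newton.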
Solving systems of nonlinear equations by Broyden's method with projected updates by David M. Gay (Book)
3 editions published in 1977 in English and held by 1 WorldCat member library worldwide
We introduce a modification of Broyden's method for finding a zero of n nonlinear equations in n unknowns when analytic derivatives are not available. The method retains the local Q-superlinear convergence of Broyden's method and has the additional property that if any or all of the equations are linear, it locates a zero of these equations in n+1 or fewer iterations. Limited computational experience suggests that our modification often improves upon Broyden's method.
Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 developers manual
1 edition published in 2006 in English and held by 0 WorldCat member libraries worldwide
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and non-gradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
On modifying singular values to solve possibly singular systems of nonlinear equations
2 editions published in 1976 in English and held by 0 WorldCat member libraries worldwide
We show that if a certain nondegeneracy assumption holds, it is possible to guarantee the existence of a solution to a system of nonlinear equations f(x) = 0 whose Jacobian matrix J(x) exists but may be singular. The main idea is to modify small singular values of J(x) in such a way that the modified Jacobian matrix has a continuous pseudoinverse J⁺(x) and that a solution x* of f(x) = 0 may be found by determining an asymptote of the solution to the initial value problem x(0) = x₀, ẋ(t) = -J⁺(x)f(x). We briefly discuss practical (algorithmic) implications of this result. Although the nondegeneracy assumption may fail for many systems of interest (indeed, if the assumption holds and J(x*) is nonsingular, then x* is unique), algorithms using J⁺(x) may enjoy a larger region of convergence than those that require (an approximation to) J⁻¹(x).
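The singular-value modification in the abstract above can be illustrated with an SVD. Thresholding small singular values up to a floor eps is one simple choice, not necessarily the rule used in the report; the function name is invented:

```python
import numpy as np

def modified_pinv(J, eps=1e-6):
    """Pseudoinverse of J after raising singular values below eps up to eps.

    Unlike the exact pseudoinverse, which jumps discontinuously when a
    singular value crosses zero, this modified pseudoinverse varies
    continuously with J and stays bounded by 1/eps.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_mod = np.maximum(s, eps)             # modify the small singular values
    return Vt.T @ np.diag(1.0 / s_mod) @ U.T

# Nearly singular Jacobian: the modified pseudoinverse stays bounded,
# so a continuous-Newton step -modified_pinv(J) @ f(x) remains usable.
J = np.array([[1.0, 0.0], [0.0, 1e-12]])
Jp = modified_pinv(J)
```

For a nonsingular, well-conditioned J the modification is a no-op, so the method reduces to the usual Newton direction away from singularities.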
Representing symmetric rank two updates by David M. Gay
1 edition published in 1976 in English and held by 0 WorldCat member libraries worldwide
"Various quasi-Newton methods periodically add a symmetric 'correction' matrix of rank at most 2 to a matrix approximating some quantity A of interest (such as the Hessian of an objective function). In this paper we examine several ways to express a symmetric rank 2 matrix Δ as the sum of rank 1 matrices. We show that it is easy to compute rank 1 matrices Δ₁ and Δ₂ such that Δ = Δ₁ + Δ₂ and ‖Δ₁‖ + ‖Δ₂‖ is minimized, where ‖·‖ is any inner product norm. Such a representation recommends itself for use in those computer programs that maintain A explicitly, since it should reduce cancellation errors and/or improve efficiency over other representations. In the common case where Δ is indefinite, a choice of the form Δ₁ = Δ₂ᵀ = xyᵀ appears best. This case occurs for rank 2 quasi-Newton updates Δ exactly when Δ may be obtained by symmetrizing some rank 1 update; such popular updates as the DFP, BFGS, PSB, and Davidon's new optimally conditioned update fall into this category" (NBER website)
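Two of the representations mentioned in the abstract above can be built from an eigendecomposition. This is a sketch of the idea only (the paper's optimal choice may differ in detail), and the function name and example matrix are invented:

```python
import numpy as np

def rank1_splits(D):
    """Two ways to write a symmetric, indefinite rank-2 matrix D as a sum
    of two rank-one matrices.

    Returns (sym, nonsym): the symmetric spectral split D = l1*u1@u1.T +
    l2*u2@u2.T, and a nonsymmetric split D1 = D2.T = x@y.T of the form
    the abstract says appears best in the indefinite case.
    """
    w, V = np.linalg.eigh(D)
    i, j = np.argmax(w), np.argmin(w)          # dominant +/- eigenpairs
    l1, l2 = w[i], w[j]                        # requires l1 > 0 > l2 here
    u1, u2 = V[:, i], V[:, j]
    sym = (l1 * np.outer(u1, u1), l2 * np.outer(u2, u2))
    # With a = sqrt(l1)*u1, b = sqrt(-l2)*u2, one checks that
    # x@y.T + y@x.T = a@a.T - b@b.T = D.
    a, b = np.sqrt(l1) * u1, np.sqrt(-l2) * u2
    x, y = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    nonsym = (np.outer(x, y), np.outer(y, x))
    return sym, nonsym

# A symmetrized rank-1 update u@v.T + v@u.T, which is indefinite --
# the case the abstract identifies with the DFP/BFGS/PSB family.
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.5, -1.0, 1.0])
D = np.outer(u, v) + np.outer(v, u)
sym, nonsym = rank1_splits(D)
```

Both splits reproduce D exactly; the abstract's argument for the xyᵀ form concerns cancellation and efficiency when the update is applied to an explicitly stored matrix.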
DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 reference manual
1 edition published in 2006 in English and held by 0 WorldCat member libraries worldwide
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and non-gradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 user's manual
1 edition published in 2006 in English and held by 0 WorldCat member libraries worldwide
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and non-gradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Related Identities
- Kernighan, Brian W. (Author)
- Fourer, Robert (Author)
- National Bureau of Economic Research
- Welsch, Roy E.
- Dennis, John E. (Author)
- Schnabel, Robert B.
- Brown, Shannon L.
- Hough, Patricia Diane
- Sandia National Laboratories (Researcher)
- United States Department of Energy Office of Scientific and Technical Information (Distributor)
Associated Subjects
Algorithms; Differential equations, Nonlinear--Numerical solutions; Equations--Numerical solutions; Least squares; Least squares--Computer programs; Linear programming; Mathematical models--Computer programs; Mathematical optimization; Mathematical optimization--Computer programs; Programming (Mathematics); Programming (Mathematics)--Computer programs; Programming languages (Electronic computers); Regression analysis; Robust statistics