STANFORD UNIV CA SYSTEMS OPTIMIZATION LAB
Overview
Works:  128 works in 132 publications in 1 language and 158 library holdings 

Classifications: QA9.58
Most widely held works by STANFORD UNIV CA SYSTEMS OPTIMIZATION LAB
Methods for Large-Scale Nonlinear Optimization by Philip E Gill (Book)
2 editions published between 1980 and 1981 in English and held by 3 WorldCat member libraries worldwide
The application of optimization to electrical power technology often requires the numerical solution of systems that are very large and possibly nondifferentiable. A brief survey of the state of the art of numerical optimization is presented, highlighting those methods that are directly applicable to power system problems. The areas of current research that are most likely to yield direct benefit to practical computation are identified. The paper concludes with a survey of available software. (Author)
Optimization of Unconstrained Functions with Sparse Hessian Matrices: Quasi-Newton Methods (Book)
1 edition published in 1981 in English and held by 3 WorldCat member libraries worldwide
Newton-type methods and quasi-Newton methods have proven to be very successful in solving dense unconstrained optimization problems. Recently there has been considerable interest in extending these methods to solving large problems when the Hessian matrix has a known a priori sparsity pattern. This paper treats sparse quasi-Newton methods in a uniform fashion and shows the effect of loss of positive-definiteness in generating updates. These sparse quasi-Newton methods, coupled with a modified Cholesky factorization that takes into account the loss of positive-definiteness when solving the associated linear systems, were tested on a large set of problems. The overall conclusions are that these methods perform poorly in general: the Hessian matrix becomes indefinite even close to the solution, and superlinear convergence is not observed in practice. (Author)
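The modified-Cholesky idea in the abstract can be illustrated with a simplified stand-in: when a Cholesky factorization of an (approximate) Hessian fails because the matrix is indefinite, add a diagonal shift and retry until it succeeds. This sketch uses a crude doubling shift, not the actual modified Cholesky factorization tested in the report; the function name and parameters are illustrative.

```python
import numpy as np

def shifted_cholesky(H, beta=1e-3, max_tries=60):
    # Try to factorize H; if it is not positive definite, add tau*I
    # with a growing shift tau until the factorization succeeds.
    # (A crude stand-in for a modified Cholesky factorization.)
    tau = 0.0
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
            return L, tau
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)
    raise RuntimeError("shift did not make H positive definite")

# An indefinite 2x2 "Hessian" (eigenvalues 3 and -1).
H = np.array([[1.0, 2.0],
              [2.0, 1.0]])
L, tau = shifted_cholesky(H)
print(tau > 0.0, np.allclose(L @ L.T, H + tau * np.eye(2)))
```

The returned shift tau measures how far the matrix was from positive definite, which is exactly the symptom the report observes near solutions.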
Polynomial Local Improvement Algorithms in Combinatorial Optimization by Craig Aaron Tovey (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
The subject of this report is an analysis of the expected, or average case, performance of local improvement algorithms. The first chapter presents the basic model, defines the combinatorial structures which are the basis for the analysis, and describes the randomness assumptions upon which the expectations are based. The second chapter examines these structures in more detail, including an analysis of both best and worst case performance. The third chapter discusses simulation results which predict an approximately linear average case performance, and proves an O(n^2 log n) upper bound for two of the random distributions assumed. Chapter Four proves some extensions and sharper versions of this upper bound. The fifth chapter applies the model to principal pivoting algorithms for the linear complementarity problem, and to the simplex method. Although local improvement is not guaranteed to find a global optimum for all problems, most notably those that are NP-complete, it is nonetheless often used in these cases. Chapter Six discusses these applications.
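A minimal sketch of the local improvement paradigm the report analyzes: repeatedly move to an improving neighbor until a local optimum is reached. The bit-flip neighborhood and toy objective below are hypothetical, chosen only to make the loop concrete.

```python
def local_improve(f, x):
    # Move to a better single-bit-flip neighbor until none exists.
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if f(y) > f(x):
                x, improved = y, True
                break
    return x

# Hypothetical toy objective: linear, so the local optimum is global.
weights = [3, -1, 4, -2]
f = lambda x: sum(w * b for w, b in zip(weights, x))
print(local_improve(f, (0, 0, 0, 0)))  # -> (1, 0, 1, 0)
```

The report's question is how many iterations of the outer loop to expect under random orderings of the neighbors, not whether the loop terminates.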
Procedures for Optimization Problems with a Mixture of Bounds and General Linear Constraints (Book)
1 edition published in 1982 in English and held by 2 WorldCat member libraries worldwide
When describing active-set methods for linearly constrained optimization, it is often convenient to treat all constraints in a uniform manner. However, in many problems the linear constraints include simple bounds on the variables as well as general constraints. Special treatment of bound constraints in the implementation of an active-set method yields significant advantages in computational effort and storage requirements. In this paper, we describe how to perform the constraint-related steps of an active-set method when the constraint matrix is dense and bounds are treated separately. These steps involve updates to the TQ factorization of the working set of constraints and the Cholesky factorization of the projected Hessian (or Hessian approximation). (Author)
Time-Staged Linear Programs (Book)
1 edition published in 1980 in English and held by 2 WorldCat member libraries worldwide
The paper outlines some procedures for solving time-staged (staircase) linear programs. Two approaches are discussed: the first is based on modifying the block structure of the basis so that there are square nonsingular sub-blocks along the diagonal; the second is based on the nested decomposition principle, but applied to the dual system instead of the primal as proposed by Glassey and by Manne and Ho. (Author)
A Constructive Proof of Tucker's Combinatorial Lemma by Michael J Todd (Book)
1 edition published in 1980 in English and held by 2 WorldCat member libraries worldwide
Tucker's combinatorial lemma is concerned with certain labelings of the vertices of a triangulation of the n-ball. It can be used as a basis for the proof of antipodal-point theorems in the same way that Sperner's lemma yields Brouwer's theorem. Here we give a constructive proof, which thereby yields algorithms for antipodal-point problems. The method used is based on an algorithm of Reiser. (Author)
A Further Investigation of Efficient Heuristic Procedures for Integer Linear Programming with an Interior by Frederick S Hillier (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
Some heuristic procedures for seeking a good approximate solution of any pure integer linear programming problem are evaluated. It was found that the procedures are extremely efficient, being computationally feasible for problems having hundreds of variables and constraints. Furthermore, they proved to be very effective in identifying good solutions, often obtaining optimal ones. Thus, the procedures provide a way of dealing with the frequently encountered integer programming problems that are beyond the computational capability of existing algorithms. For smaller problems, they also provide an advanced start for accelerating certain primal algorithms, including the author's Bound-and-Scan algorithm and Faaland and Hillier's Accelerated Bound-and-Scan algorithm. In addition, Jeroslow and Smith have found that embedding the first part of one of these procedures inside the iterative step of a branch-and-bound algorithm can greatly improve the latter's efficiency in locating solutions whose objective function value is within a specified percentage of that for the optimal solution.
A Numerical Investigation of Ellipsoid Algorithms for Large-Scale Linear Programming (Book)
1 edition published in 1980 in English and held by 2 WorldCat member libraries worldwide
The ellipsoid algorithm associated with Shor, Khachiyan and others has certain theoretical properties that suggest its use as a linear programming algorithm. Some of the practical difficulties are investigated here. A variant of the ellipsoid update is first developed to take advantage of the range constraints that often occur in linear programs (i.e., constraints of the form l ≤ aᵀx ≤ u, where u − l is reasonably small). Methods for storing the ellipsoid matrix are then discussed for both dense and sparse problems. In the large-scale case, a major difficulty is that the desired ellipsoid cannot be represented compactly throughout an arbitrary number of iterations. Some schemes are suggested for economizing on storage, but any guarantee of convergence is effectively lost. At this stage there remains little room for optimism that an ellipsoid-based algorithm could compete with the simplex method on problems with a large number of variables. (Author)
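For intuition, here is a bare-bones central-cut ellipsoid method for linear feasibility (find x with Ax ≤ b). It shows the dense n-by-n matrix update whose storage cost the paper analyzes; the range-constraint variant and the sparse storage schemes from the paper are not reproduced, and the instance is hypothetical.

```python
import numpy as np

def ellipsoid_feasibility(A, b, R=10.0, max_iter=5000, tol=1e-9):
    # Ellipsoid {y : (y - x)' P^{-1} (y - x) <= 1}, initially a ball
    # of radius R centered at the origin; requires n >= 2.
    n = A.shape[1]
    x = np.zeros(n)
    P = R ** 2 * np.eye(n)
    for _ in range(max_iter):
        viol = np.nonzero(A @ x - b > tol)[0]
        if viol.size == 0:
            return x                       # center is feasible
        a = A[viol[0]]                     # central cut on first violated row
        g = a / np.sqrt(a @ P @ a)
        Pg = P @ g
        x = x - Pg / (n + 1)
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return None                            # gave up

# Hypothetical instance: x0 <= 2, x1 <= 2, x0 + x1 >= 1.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([2.0, 2.0, -1.0])
x = ellipsoid_feasibility(A, b)
print(x is not None and bool(np.all(A @ x <= b + 1e-6)))
```

Note that P is dense even when A is sparse, which is precisely the large-scale obstacle the abstract describes.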
Worst Case Analysis of Greedy Heuristics for Integer Programming with Non-Negative Data by G Dobson (Book)
1 edition published in 1980 in English and held by 2 WorldCat member libraries worldwide
We give a worst case analysis for two greedy heuristics for the integer programming problem: minimize cx subject to Ax ≥ b, 0 ≤ x ≤ u, x integer, where the entries of A, b, and c are all nonnegative. The first heuristic is for the case where the entries in A and b are integral; the second only assumes the rows are scaled so that the smallest nonzero entry is at least 1. In both cases we compare the ratio of the value of the greedy solution to that of the integer optimal. The error bound grows logarithmically in the maximum column sum of A for both heuristics. (Author)
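The flavor of such a greedy heuristic can be sketched as follows: repeatedly increment the variable giving the most shortfall reduction per unit cost until Ax ≥ b holds. This is a plausible reading of the setup, not the authors' exact rule, and the instance is invented for illustration.

```python
def greedy_cover(A, b, c, u):
    # Repeatedly increment the variable with the best
    # shortfall-reduction-per-cost ratio until A x >= b.
    m, n = len(A), len(A[0])
    x = [0] * n
    def shortfall():
        return [max(0, b[i] - sum(A[i][j] * x[j] for j in range(n)))
                for i in range(m)]
    s = shortfall()
    while any(s):
        best, best_ratio = None, 0.0
        for j in range(n):
            if x[j] >= u[j]:
                continue
            gain = sum(min(A[i][j], s[i]) for i in range(m))
            if gain / c[j] > best_ratio:
                best, best_ratio = j, gain / c[j]
        if best is None:
            raise ValueError("infeasible within the bounds u")
        x[best] += 1
        s = shortfall()
    return x

# Hypothetical covering instance.
A = [[1, 0, 1],
     [0, 1, 1]]
b = [2, 2]
c = [1.0, 1.0, 1.5]
u = [2, 2, 2]
print(greedy_cover(A, b, c, u))  # -> [0, 0, 2]
```

The worst-case question in the report is how much larger c·x from such a rule can be than the integer optimum, as a function of the maximum column sum of A.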
A heuristic ceiling point algorithm for general integer linear programming by Robert M Saltzman (Book)
2 editions published in 1988 in English and held by 2 WorldCat member libraries worldwide
This report describes an exact algorithm for the pure, general integer linear programming problem (ILP). Common applications of this model occur in capital budgeting (project selection), resource allocation and fixed-charge (plant location) problems. The central theme of our algorithm is to enumerate a subset of all solutions called feasible 1-ceiling points. A feasible 1-ceiling point may be thought of as an integer solution lying on or near the boundary of the feasible region for the LP relaxation associated with (ILP). Precise definitions of 1-ceiling points and the role they play in an integer linear program are presented in a recent report by the authors. One key theorem therein demonstrates that all optimal solutions for an (ILP) whose feasible region is nonempty and bounded are feasible 1-ceiling points. Consequently, such a problem may be solved by enumerating just its feasible 1-ceiling points. Our approach is to implicitly enumerate 1-ceiling points with respect to one constraint at a time while simultaneously considering feasibility. Computational results from applying this incumbent-improving Exact Ceiling Point Algorithm to 48 test problems taken from the literature indicate that this enumeration scheme may hold potential as a practical approach for solving problems with certain types of structure. (KR)
Geometric Aspects of the Linear Complementarity Problem by Richard E Stone (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
A large part of the study of the Linear Complementarity Problem (LCP) has been concerned with matrix classes. A classic result of Samelson, Thrall, and Wesler is that the real square matrices with positive principal minors (P-matrices) are exactly those matrices M for which the LCP (q, M) has a unique solution for all real vectors q. Taking this geometrical characterization of the P-matrices and weakening, in an appropriate manner, some of the conditions, we obtain and study other useful and broad matrix classes, thus enhancing our understanding of the LCP. In Chapter 2, we consider a generalization of the P-matrices by defining the class U as all real square matrices M where, if for all vectors x within some open ball around the vector q the LCP (x, M) has a solution, then (q, M) has a unique solution. We develop a characterization of U along with more specialized conditions on a matrix for sufficiency or necessity of being in U. Chapter 3 is concerned with the introduction and characterization of the class INS. The class INS is a generalization of U obtained by requiring that the appropriate LCPs (q, M) have exactly k solutions, for some positive integer k depending only on M. Hence, U is exactly those matrices belonging to INS with k equal to one. Chapter 4 continues the study of the matrices in INS. The range of values for k, the set of q where (q, M) does not have k solutions, and the multiple partitioning structure of the complementary cones associated with the problem are central topics discussed. Chapter 5 discusses these new classes in light of known LCP theory, and reviews its better known matrix classes. Chapter 6 considers some problems which remain open. (author)
Exact and Approximation Algorithms for a Scheduling Problem by Gregory Dobson (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
This paper discusses problems that arose in calendaring cases for an appellate court. The first problem is to distribute cases among panels of judges so as to equalize work loads. We give a worst case analysis of a heuristic for this NP-complete problem. For a given distribution, denote by z the heaviest work load. We wish to minimize z. The ratio of the heuristic value zbar to that of the true optimum z* is shown to satisfy zbar/z* ≤ (k + 3)/(k + 2) when all the case weights lie in (0, (1/k)z*), generalizing a result of Graham on multiprocessor scheduling. Under a restrictive assumption on the case weights, some generalizations of this scheduling problem are solved. Characterizations for feasible calendars and polynomial algorithms for finding these feasible solutions are given. Algorithms are given for choosing an optimal subset of the backlogged cases that can be calendared. (Author)
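The paper generalizes Graham's multiprocessor-scheduling bounds. Graham's greedy rule itself (assign each case, heaviest first, to the least-loaded panel) is easy to sketch; the weights below are invented for illustration.

```python
import heapq

def lpt_balance(weights, k):
    # Graham's LPT rule: sort cases heaviest first and assign each
    # to the currently least-loaded panel, tracked with a min-heap.
    loads = [(0, p) for p in range(k)]
    heapq.heapify(loads)
    panels = [[] for _ in range(k)]
    for w in sorted(weights, reverse=True):
        load, p = heapq.heappop(loads)
        panels[p].append(w)
        heapq.heappush(loads, (load + w, p))
    return panels, max(load for load, _ in loads)

panels, z = lpt_balance([7, 6, 5, 4, 3, 3], 3)
print(z)  # -> 10 (optimal here, since the weights sum to 28)
```

The bound in the abstract says such heuristics get closer to optimal as the individual case weights shrink relative to the optimal load z*.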
A Nested-Decomposition Approach for Solving Staircase-Structured Linear Programs (Book)
1 edition published in 1980 in English and held by 2 WorldCat member libraries worldwide
The algorithm solves a T-period staircase-structured linear program by applying a compact basis-inverse scheme for the Simplex Method in conjunction with a choice mechanism which uses the dual of the Nested Decomposition Principle of Manne and Ho to determine the incoming basic column. A sequence of one-period problems is solved in which, typically, information is provided to period t from previous and subsequent periods in the form of surrogate columns and a modified right-hand side, and surrogate rows and modified cost coefficients, respectively. (Author)
Reminiscences about the Origins of Linear Programming (Book)
1 edition published in 1981 in English and held by 2 WorldCat member libraries worldwide
Some recollections about the early days of linear programming, the contributions of von Neumann, Leontief, Koopmans and others, and about some of the extensions that have taken place from 1950-1970 and some from 1970-1980 are discussed. Linear programming is viewed as a revolutionary development giving us the ability for the first time to state general objectives and to find, by means of the simplex method, optimal policy decisions to practical decision problems of great complexity. (author)
Sparse Matrix Methods in Optimization by Philip E Gill (Book)
1 edition published in 1982 in English and held by 2 WorldCat member libraries worldwide
Optimization algorithms typically require the solution of many systems of linear equations B_k y_k = b_k. When large numbers of variables or constraints are present, these linear systems could account for much of the total computation time. Both direct and iterative equation solvers are needed in practice. Unfortunately, most of the off-the-shelf solvers are designed for single systems, whereas optimization problems give rise to hundreds or thousands of systems. To avoid refactorization, or to speed the convergence of an iterative method, it is essential to note that B_k is related to B_{k-1}. The authors review various sparse matrices that arise in optimization, and discuss compromises that are currently being made in dealing with them. Since significant advances continue to be made with single-system solvers, they give special attention to methods that allow such solvers to be used repeatedly on a sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time it is hoped that future improvements to linear-equation software will be oriented more specifically to the case of related matrices B_k. (Author)
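The point about exploiting the relation between successive matrices can be illustrated with the Sherman-Morrison formula: after a rank-one change, systems with the new matrix are solved using only a solver for the old one. This is a sketch of the idea, not the authors' implementation; a production code would reuse a sparse LU factorization rather than an explicit inverse.

```python
import numpy as np

def sherman_morrison_solve(solve_B, u, v, b):
    # Solve (B + u v^T) y = b using only a solver for B itself:
    # y = B^{-1} b - B^{-1} u (v^T B^{-1} b) / (1 + v^T B^{-1} u).
    w = solve_B(b)
    z = solve_B(u)
    return w - z * (v @ w) / (1.0 + v @ z)

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5)) + 5 * np.eye(5)
u = rng.standard_normal(5)
v = rng.standard_normal(5)
b = rng.standard_normal(5)

B_inv = np.linalg.inv(B)  # stands in for a stored factorization of B
y = sherman_morrison_solve(lambda r: B_inv @ r, u, v, b)
print(bool(np.allclose((B + np.outer(u, v)) @ y, b)))
```

Each updated solve costs two solves with the old matrix, which is why the abstract argues that solve speed matters more than factorization speed.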
NPSOL (Version 4.0): A Fortran Package for Nonlinear Programming. User's Guide (Book)
2 editions published in 1986 in English and held by 2 WorldCat member libraries worldwide
This report forms the user's guide for Version 4.0 of NPSOL, a set of Fortran subroutines designed to minimize a smooth function subject to constraints, which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints. (NPSOL may also be used for unconstrained, bound-constrained and linearly constrained optimization.) The user must provide subroutines that define the objective and constraint functions and (optionally) their gradients. All matrices are treated as dense, and hence NPSOL is not intended for large sparse problems. NPSOL uses a sequential quadratic programming (SQP) algorithm, in which the search direction is the solution of a quadratic programming (QP) subproblem. The algorithm treats bounds, linear constraints and nonlinear constraints separately. The Hessian of each QP subproblem is a positive-definite quasi-Newton approximation to the Hessian of the Lagrangian function. The step length at each iteration is required to produce a sufficient decrease in an augmented Lagrangian merit function. Each QP subproblem is solved using a quadratic programming package with several features that improve the efficiency of an SQP algorithm. Keywords: Mathematical software; Nonlinear programming; Finite differences. (Author)
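NPSOL itself is a Fortran library and is not reproduced here, but the problem class and calling pattern can be illustrated with SciPy's SLSQP, a different dense SQP implementation: the user supplies callables for the objective and constraints, with bounds, linear and nonlinear constraints all accepted. The problem data below are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Smooth objective, a bound on each variable, one linear and one
# smooth nonlinear inequality constraint -- the class NPSOL targets.
fun = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
cons = [
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},            # linear
    {"type": "ineq", "fun": lambda x: 2.0 - x[0] ** 2 - x[1] ** 2},  # nonlinear
]
res = minimize(fun, x0=[0.0, 0.0], method="SLSQP",
               bounds=[(0.0, None), (0.0, None)], constraints=cons)
print(res.success, np.round(res.x, 3))
```

As in NPSOL, gradients may be supplied (via jac=) or estimated by finite differences when omitted.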
Monotone complementarity problem in Hilbert space by
Stanford University(
Book
)
2 editions published in 1990 in English and held by 1 WorldCat member library worldwide
An existence theorem for a complementarity problem involving a weakly coercive monotone mapping over an arbitrary closed convex cone in a real Hilbert space is established. (jhd)
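In standard notation (a sketch; the report's exact hypotheses may differ), the complementarity problem over a closed convex cone $K$ in a real Hilbert space $H$, for a mapping $F: H \to H$, reads:

```latex
\text{find } x \in K \text{ such that } F(x) \in K^{*} \text{ and } \langle x, F(x) \rangle = 0,
\quad\text{where } K^{*} = \{\, y \in H : \langle y, k \rangle \ge 0 \ \forall k \in K \,\}
```

Here $K^{*}$ is the dual cone of $K$, and $F$ is monotone when $\langle F(x) - F(y),\, x - y \rangle \ge 0$ for all $x, y \in H$.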
Users Guide for LCPL. A Program for Solving Linear Complementarity Problems by Lemke's Method(
)
1 edition published in 1976 in English and held by 0 WorldCat member libraries worldwide
This document is a user's guide for LCPL, an efficient, robust program for solving Linear Complementarity Problems by Lemke's method.
Large scale sequential quadratic programming algorithms by
Stanford University(
)
1 edition published in 1992 in English and held by 0 WorldCat member libraries worldwide
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: (1) The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. (2) The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. (3) The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. (4) The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
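The reduced-gradient null-space basis mentioned in feature (3) can be sketched as follows, assuming a working-set Jacobian partitioned as A = [B S] with B nonsingular (the example matrix and column choice are illustrative only):

```python
import numpy as np

def reduced_gradient_basis(A, basic):
    """Form the reduced-gradient (variable-reduction) null-space basis
    Z = [ -B^{-1} S ; I ] for A = [B S], so that A @ Z = 0."""
    m, n = A.shape
    nonbasic = [j for j in range(n) if j not in basic]
    B, S = A[:, basic], A[:, nonbasic]
    Z = np.zeros((n, n - m))
    Z[basic, :] = -np.linalg.solve(B, S)   # the -B^{-1} S block
    Z[nonbasic, :] = np.eye(n - m)         # the identity block
    return Z

# Hypothetical 2x3 working-set Jacobian:
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
Z = reduced_gradient_basis(A, basic=[0, 1])
print(np.allclose(A @ Z, 0))  # True
```

In a large-scale code only a sparse factorization of B is kept, never Z explicitly; products with Z and Zᵀ are applied via solves with B, which is what makes this basis cheaper than an orthogonal one.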
Computing modified Newton directions using a partial Cholesky factorization by
Stanford University(
)
1 edition published in 1993 in English and held by 0 WorldCat member libraries worldwide
The effectiveness of Newton's method for finding an unconstrained minimizer of a strictly convex twice continuously differentiable function has prompted the proposal of various modified Newton methods for the nonconvex case. Linesearch modified Newton methods utilize a linear combination of a descent direction and a direction of negative curvature. If these directions are sufficient in a certain sense, and a suitable linesearch is used, the resulting method will generate limit points that satisfy the second-order necessary conditions for optimality. We propose an efficient method for computing a descent direction and a direction of negative curvature that is based on a partial Cholesky factorization of the Hessian. This factorization not only gives theoretically satisfactory directions, but also requires only a partial pivoting strategy, i.e., the equivalent of only two rows of the Schur complement need be examined at each step ... Keywords: Unconstrained minimization; Modified Newton method; Descent direction; Negative curvature; Cholesky factorization
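A much simpler member of the modified-Newton family conveys the idea of repairing an indefinite Hessian before solving for the step. The sketch below shifts the diagonal until a Cholesky factorization succeeds; this is a stand-in for illustration only, not the report's partial Cholesky scheme, which additionally extracts a direction of negative curvature:

```python
import numpy as np

def modified_newton_direction(H, g, beta=1e-3):
    """Simplified modified Newton: if H is not positive definite,
    add tau*I until Cholesky succeeds, then solve (H + tau*I) p = -g.
    (Illustrative variant, not the paper's partial Cholesky method.)"""
    n = H.shape[0]
    tau = 0.0 if np.all(np.diag(H) > 0) else beta
    while True:
        try:
            np.linalg.cholesky(H + tau * np.eye(n))  # succeeds iff PD
            break
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)               # grow the shift
    return np.linalg.solve(H + tau * np.eye(n), -g)

H = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite Hessian
g = np.array([1.0, 1.0])
p = modified_newton_direction(H, g)
print(g @ p < 0)  # p is a descent direction: g^T p < 0
```

When H is already positive definite the shift is zero and the sketch reduces to the pure Newton step p = -H⁻¹g; the point of the paper's partial factorization is to obtain both a descent direction and a negative-curvature direction at comparable cost.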