An Introduction to Continuous Optimization


By N. Andreasson, A. Evgrafov, M. Patriksson


* Publisher: Studentlitteratur AB
* Number Of Pages: 400
* Publication Date: 2007-11-30
* ISBN-10 / ASIN: 9144044550
* ISBN-13 / EAN: 9789144044552



Product Description:

Optimisation, or mathematical programming, is a fundamental subject within decision science and operations research, in which mathematical decision models are constructed, analysed, and solved. This book's focus lies on providing a basis for the analysis of optimisation models and of candidate optimal solutions, especially for continuous optimisation models. The main part of the mathematical material therefore concerns the analysis and linear algebra that underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for unconstrained and constrained optimisation problems. Natural algorithms are then developed from these optimality conditions, and their most important convergence characteristics are analysed.

This book answers many more questions of the form 'Why/why not?' than 'How?'. This choice of focus is in contrast to books that mainly provide numerical guidelines as to how optimisation problems should be solved. We use only elementary mathematics in the development of the book, yet are rigorous throughout.

This book provides lecture, exercise, and reading material for a first course on continuous optimisation and mathematical programming, geared towards third-year students, and has already been used as such, in the form of lecture notes, for nearly ten years. It can be used in optimisation courses at any engineering department as well as in mathematics, economics, and business schools. It is a perfect starting book for anyone who wishes to develop an understanding of the subject of optimisation before actually applying it.

Contents:

I Introduction
Modelling and classification
Modelling of optimization problems
A quick glance at optimization history
Classification of optimization models
Conventions
Applications and modelling examples
Defining the field
On optimality conditions
Soft and hard constraints
Definitions
A derivation of the exterior penalty function
A road map through the material
On the background of this book and a didactics statement
Illustrating the theory
Notes and further reading
Exercises
II Fundamentals
Analysis and algebra—A summary
Reductio ad absurdum
Linear algebra
Analysis
Convex analysis
Convexity of sets
Polyhedral theory
Convex hulls
Polytopes
Polyhedra
The Separation Theorem and Farkas’ Lemma
Convex functions
Application: the projection of a vector onto a convex set
Notes and further reading
Exercises
III Optimality Conditions
An introduction to optimality conditions
Local and global optimality
Existence of optimal solutions
A classic result
∗Non-standard results
Special optimal solution sets
Optimality in unconstrained optimization
Optimality for optimization over convex sets
Near-optimality in convex optimization
Applications
Continuity of convex functions
The Separation Theorem
Euclidean projection
Fixed point theorems
Notes and further reading
Exercises
Optimality conditions
Relations between optimality conditions and CQs at a glance
A note of caution
Geometric optimality conditions
The Fritz John conditions
The Karush–Kuhn–Tucker conditions
Proper treatment of equality constraints
Constraint qualifications
Mangasarian–Fromovitz CQ (MFCQ)
Slater CQ
Linear independence CQ (LICQ)
Affine constraints
Sufficiency of the KKT conditions under convexity
Applications and examples
Notes and further reading
Exercises
Lagrangian duality
The relaxation theorem
Lagrangian duality
Lagrangian relaxation and the dual problem
Global optimality conditions
Strong duality for convex programs
Strong duality for linear and quadratic programs
Two illustrative examples
Differentiability properties of the dual function
Subdifferentiability of convex functions
Differentiability of the Lagrangian dual function
∗Subgradient optimization methods
Convex problems
Application to the Lagrangian dual problem
The generation of ascent directions
∗Obtaining a primal solution
Differentiability at the optimal solution
Everett’s Theorem
∗Sensitivity analysis
Analysis for convex problems
Analysis for differentiable problems
Applications
Electrical networks
A Lagrangian relaxation of the traveling salesman problem
Notes and further reading
Exercises
IV Linear Programming
Linear programming: An introduction
The manufacturing problem
A linear programming model
Graphical solution
Sensitivity analysis
An increase in the number of large pieces available
An increase in the number of small pieces available
A decrease in the price of the tables
The dual of the manufacturing problem
A competitor
A dual problem
Interpretations of the dual optimal solution
Linear programming models
Linear programming modelling
The geometry of linear programming
Standard form
Basic feasible solutions and the Representation Theorem
Adjacent extreme points
Notes and further reading
Exercises
The simplex method
The algorithm
A BFS is known
A BFS is not known: phase I & II
Alternative optimal solutions
Termination
Computational complexity
Notes and further reading
Exercises
LP duality and sensitivity analysis
Introduction
The linear programming dual
Canonical form
Constructing the dual
Linear programming duality theory
Weak and strong duality
Complementary slackness
The dual simplex method
Sensitivity analysis
Perturbations in the objective function
Perturbations in the right-hand side coefficients
Notes and further reading
Exercises
V Algorithms
Unconstrained optimization
Introduction
Descent directions
Introduction
Newton’s method and extensions
The line search problem
A characterization of the line search problem
Approximate line search strategies
Convergent algorithms
Finite termination criteria
A comment on non-differentiability
Trust region methods
Conjugate gradient methods
Conjugate directions
Conjugate direction methods
Generating conjugate directions
Conjugate gradient methods
Extension to non-quadratic problems
A quasi-Newton method: DFP
Convergence rates
Implicit functions
Notes and further reading
Exercises
Optimization over convex sets
Feasible direction methods
The Frank–Wolfe algorithm
The simplicial decomposition algorithm
The gradient projection algorithm
Application: traffic equilibrium
Model analysis
Algorithms and a numerical example
Notes and further reading
Exercises
Constrained optimization
Penalty methods
Exterior penalty methods
Interior penalty methods
Computational considerations
Applications and examples
Sequential quadratic programming
Introduction
A penalty-function based SQP algorithm
A numerical example on the MSQP algorithm
On recent developments in SQP algorithms
A summary and comparison
Notes and further reading
Exercises
VI Appendix
Answers to the exercises
Chapter 1: Modelling and classification
Chapter 3: Convexity
Chapter 4: An introduction to optimality conditions
Chapter 5: Optimality conditions
Chapter 6: Lagrangian duality
Chapter 8: Linear programming models
Chapter 9: The simplex method
Chapter 10: LP duality and sensitivity analysis
Chapter 11: Unconstrained optimization
Chapter 12: Optimization over convex sets
Chapter 13: Constrained optimization
References
Index


http://ifile.it/jrzla6h/9144044550.rar
