
Nonlinear Programming (3rd Edition)

This book is a classic textbook in the fields of control, planning, and optimization, and has been highly influential in academia. Our press previously published a translation of the second edition; we now introduce the third edition for readers' reference and study.

Author: Dimitri P. Bertsekas
Price: 169 CNY
Printing: 1-8
ISBN: 9787302482345
Publication date: 2018-06-01
Printing date: 2022-12-05

This book covers the main topics of nonlinear programming, including unconstrained optimization, convex optimization, Lagrange multiplier theory and algorithms, and duality theory and methods, together with many practical application case studies. Starting from unconstrained optimization, it derives optimality conditions through both intuitive analysis and rigorous proof, and discusses basic practical algorithms such as gradient methods, Newton's method, and conjugate direction methods. It then extends these optimality conditions and algorithms to optimization over convex sets, and further discusses methods for constrained problems: feasible direction methods, the conditional gradient method, gradient projection methods, two-metric projection methods, proximal algorithms, manifold suboptimization methods, and block coordinate descent methods. Lagrange multiplier theory and algorithms are among the core topics of nonlinear programming and are the focus of this book.
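As a flavor of the "basic practical algorithms" mentioned above, here is a minimal sketch of steepest descent with an Armijo backtracking stepsize rule, one of the standard stepsize rules for gradient methods. The function names, parameters, and the test problem are illustrative choices, not taken from the book.

```python
import numpy as np

def gradient_descent_armijo(f, grad, x0, s=1.0, beta=0.5, sigma=1e-4,
                            tol=1e-8, max_iter=1000):
    """Steepest descent with the Armijo backtracking stepsize rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = s
        # Backtrack until the sufficient-decrease (Armijo) condition holds:
        #   f(x - alpha*g) <= f(x) - sigma * alpha * ||g||^2
        while f(x - alpha * g) > f(x) - sigma * alpha * g.dot(g):
            alpha *= beta
        x = x - alpha * g
    return x

# Minimize an ill-conditioned convex quadratic: f(x) = x1^2 + 10*x2^2
f = lambda x: x[0] ** 2 + 10 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x_star = gradient_descent_armijo(f, grad, [3.0, -2.0])
```

On this quadratic the iterates converge to the unique minimizer at the origin; the backtracking loop automatically shrinks the stepsize along the poorly scaled coordinate.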


Preface to the Third Edition

The third edition of the book is a thoroughly rewritten version of the 1999 second edition. New material was included, some of the old material was discarded, and a large portion of the remainder was reorganized or revised. The total number of pages has increased by about 10 percent. Aside from incremental improvements, the changes aim to bring the book up-to-date with recent research progress, and in harmony with the major developments in convex optimization theory and algorithms that have occurred in the meantime. These developments were documented in three of my books: the 2015 book "Convex Optimization Algorithms," the 2009 book "Convex Optimization Theory," and the 2003 book "Convex Analysis and Optimization" (coauthored with Angelia Nedić and Asuman Ozdaglar). A major difference is that these books have dealt primarily with convex, possibly nondifferentiable, optimization problems and rely on convex analysis, while the present book focuses primarily on algorithms for possibly nonconvex differentiable problems, and relies on calculus and variational analysis.

Having written several interrelated optimization books, I have come to see nonlinear programming and its associated duality theory as the lynchpin that holds together deterministic optimization. I have consequently set as an objective for the present book to integrate the contents of my books, together with internet-accessible material, so that they complement each other and form a unified whole. I have thus provided bridges to my other works with extensive references to generalizations, discussions, and elaborations of the analysis given here, and I have used throughout fairly consistent notation and mathematical level. Another connecting link of my books is that they all share the same style: they rely on rigorous analysis, but they also aim at an intuitive exposition that makes use of geometric visualization. This stems from my belief that success in the practice of optimization strongly depends on the intuitive (as well as the analytical) understanding of the underlying theory and algorithms.

Some of the more prominent new features of the present edition are:

(a) An expanded coverage of incremental methods and their connections to stochastic gradient methods, based in part on my 2000 joint work with Angelia Nedić; see Section 2.4 and Section 7.3.2.
(b) A discussion of asynchronous distributed algorithms based in large part on my 1989 "Parallel and Distributed Computation" book (coauthored with John Tsitsiklis); see Section 2.5.
(c) A discussion of the proximal algorithm and its variations in Section 3.6, and the relation with the method of multipliers in Section 7.3.
(d) A substantial coverage of the alternating direction method of multipliers (ADMM) in Section 7.4, with a discussion of its many applications and variations, as well as references to my 1989 "Parallel and Distributed Computation" and 2015 "Convex Optimization Algorithms" books.
(e) A fairly detailed treatment of conic programming problems in Section 6.4.1.
(f) A discussion of the question of existence of solutions in constrained optimization, based on my 2007 joint work with Paul Tseng [BeT07], which contains further analysis; see Section 3.1.2.
(g) Additional material on network flow problems in Sections 3.8 and 6.4.3, and their extensions to monotropic programming in Section 6.4.2, with references to my 1998 "Network Optimization" book.
(h) An expansion of the material of Chapter 4 on Lagrange multiplier theory, using a strengthened version of the Fritz John conditions, and the notion of pseudonormality, based on my 2002 joint work with Asuman Ozdaglar.
(i) An expansion of the material of Chapter 5 on Lagrange multiplier algorithms, with references to my 1982 "Constrained Optimization and Lagrange Multiplier Methods" book.

The book contains a few new exercises. As in the second edition, many of the theoretical exercises have been solved in detail and their solutions have been posted at the book's internet site, http://www.athenasc.com/nonlinbook.html. These exercises have been marked with the symbol WWW. Many other exercises contain detailed hints and/or references to internet-accessible sources. The book's internet site also contains links to additional resources, such as many additional solved exercises from my convex optimization books, computer codes, my lecture slides from MIT Nonlinear Programming classes, and full course contents from the MIT OpenCourseWare (OCW) site.

I would like to express my thanks to the many colleagues who contributed suggestions for improvement of the third edition. In particular, let me note with appreciation my principal collaborators on nonlinear programming topics since the 1999 second edition: Angelia Nedić, Asuman Ozdaglar, Paul Tseng, Mengdi Wang, and Huizhen (Janey) Yu.

Dimitri P. Bertsekas
June, 2016
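Item (a) of the preface highlights incremental methods and their connection to stochastic gradient methods. The idea, for a cost that is a sum of many component functions, is to step along one component gradient at a time rather than the full gradient. The sketch below is an illustrative toy, not code from the book: the function names, the diminishing-stepsize choice alpha0/k, and the least-squares test problem are all assumptions made for the example.

```python
import numpy as np

def incremental_gradient(grad_components, x0, num_passes=200, alpha0=0.1):
    """Incremental gradient method for minimizing f(x) = sum_i f_i(x).

    Processes one component gradient at a time, cycling through the
    components, with a diminishing stepsize alpha_k = alpha0 / k on
    pass k, as in stochastic gradient methods.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, num_passes + 1):
        alpha = alpha0 / k  # diminishing stepsize
        for grad_i in grad_components:
            x = x - alpha * grad_i(x)
    return x

# Least squares with one component per data point:
#   f_i(x) = (a_i . x - b_i)^2 / 2,  grad f_i(x) = (a_i . x - b_i) a_i
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 2))
x_true = np.array([1.0, -2.0])
b = A @ x_true  # consistent system, so x_true is the minimizer
grads = [lambda x, a=a, bi=bi: (a @ x - bi) * a for a, bi in zip(A, b)]
x_hat = incremental_gradient(grads, np.zeros(2))
```

Because the system is consistent (every component gradient vanishes at x_true), the cyclic incremental iterates approach x_true; with noisy data they would instead hover near the least-squares solution, which is where the diminishing stepsize matters.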


Products in the same series

  • Engineering Electromagnetics (9th Edition)
    [US] William H. Hayt (Wil
    Price: 109 CNY
  • Simulation Modeling and Analysis (5th Edition)
    [US] Averill M. Law,
    Price: 159 CNY
  • Feedback Control Systems (5th Edition)
    [US] Charles L. Philli
    Price: 98 CNY
  • Convex Optimization Algorithms
    [US] Dimitri P. Berts
    Price: 89 CNY
  • Digital Logic and Computer Design: VHDL...
    Richard S. Sandige, Mi
    Price: 99 CNY
  • This book provides a comprehensive and practical approach to solving continuous optimization problems. The content is based on rigorous mathematical analysis, but is presented with visual, geometric intuition wherever possible. The book emphasizes recent developments and their wide-ranging applications in fields such as large-scale resource allocation systems, signal processing, and machine learning. It is written in an accessible style with abundant figures and tables, emphasizing hands-on usability; the book and its companion course website complement each other, with supplementary readings and exercises available on the site.
  • Contents

    1. Unconstrained Optimization: Basic Methods (p. 1)
      1.1. Optimality Conditions (p. 5)
        1.1.1. Variational Ideas (p. 5)
        1.1.2. Main Optimality Conditions (p. 15)
      1.2. Gradient Methods – Convergence (p. 28)
        1.2.1. Descent Directions and Stepsize Rules (p. 28)
        1.2.2. Convergence Results (p. 49)
      1.3. Gradient Methods – Rate of Convergence (p. 67)
        1.3.1. The Local Analysis Approach (p. 69)
        1.3.2. The Role of the Condition Number (p. 70)
        1.3.3...


Copyright © 2023 Tsinghua University Press Co., Ltd. 京ICP备10035462号 京公网安备11010802042911号
