Proximal gradient method
Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form \min_{x \in \mathbb{R}^N} \sum_{i=1}^{n} f_i(x), where the f_i are convex and some of them are non-differentiable.
- Has abstract
- Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form \min_{x \in \mathbb{R}^N} \sum_{i=1}^{n} f_i(x), where the f_i \colon \mathbb{R}^N \to \mathbb{R} are convex functions, some of which are non-differentiable. This rules out conventional smooth optimization techniques like the steepest descent method, the conjugate gradient method, etc. Proximal gradient methods can be used instead. These methods proceed by splitting, in that the functions f_1, \dots, f_n are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-smooth function among f_1, \dots, f_n is involved via its proximity operator. The iterative shrinkage-thresholding algorithm, projected Landweber, projected gradient, alternating projections, the alternating-direction method of multipliers, and alternating split Bregman are special instances of proximal algorithms. For the theory of proximal gradient methods from the perspective of, and with applications to, statistical learning theory, see proximal gradient methods for learning.
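The splitting idea in the abstract can be made concrete. Writing the objective as a smooth part f plus a non-smooth part g, the proximal gradient iteration is x^{k+1} = prox_{\gamma g}(x^k - \gamma \nabla f(x^k)), where prox_{\gamma g}(v) = \arg\min_u \big( g(u) + \tfrac{1}{2\gamma}\|u - v\|^2 \big). The sketch below is a minimal, hypothetical NumPy example, not taken from the article: it specializes the iteration to the lasso problem, recovering the iterative shrinkage-thresholding algorithm (ISTA) mentioned above. The problem data, step-size rule, and function names are illustrative assumptions.

```python
# Minimal sketch (assumptions: lasso objective 0.5*||Ax - b||^2 + lam*||x||_1,
# step size 1/L with L the Lipschitz constant of the smooth gradient).
import numpy as np

def soft_threshold(v, t):
    # Proximity operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, num_iters=500):
    # Proximal gradient iteration x <- prox_{gamma*g}(x - gamma*grad_f(x)).
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of A^T(Ax - b)
    gamma = 1.0 / L
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)         # gradient of the smooth part
        x = soft_threshold(x - gamma * grad, gamma * lam)  # proximal step
    return x

# Example usage on a small synthetic problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, b, lam=0.1)
```

The step size \gamma = 1/L, with L the Lipschitz constant of the gradient of the smooth part, guarantees convergence of the proximal gradient iteration for this problem, which is why the spectral norm of A is computed above.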
- Is primary topic of
- Proximal gradient method
- Label
- Proximal gradient method
- Link from a Wikipage to an external page
- proximity-operator.net/
- web.stanford.edu/~boyd/cvxbook/
- web.stanford.edu/class/ee364a/
- web.stanford.edu/class/ee364b/
- people.eecs.berkeley.edu/~elghaoui/Teaching/EE227A/lecture18.pdf
- github.com/kul-forbes/ProximalAlgorithms.jl
- github.com/kul-forbes/ProximalOperators.jl
- Link from a Wikipage to another Wikipage
- Alternating direction method of multipliers
- Alternating projection
- Bregman method
- Category:Gradient methods
- Conjugate gradient method
- Convex functions
- Convex optimization
- Euclidean space
- Frank–Wolfe algorithm
- Gradient descent
- Iteration
- Julia (programming language)
- Landweber iteration
- Matlab
- Projection operator
- Projections onto convex sets
- Proximal
- Proximal gradient methods for learning
- Proximal operator
- Python (programming language)
- Smooth function
- Statistical learning theory
- Subdifferential
- Wikt:implementable
- SameAs
- fnzo
- m.0vxcrfk
- Proximal gradient method
- Q17086765
- Метод проксимального градиента
- Subject
- Category:Gradient methods
- WasDerivedFrom
- Proximal gradient method?oldid=1072866418&ns=0
- WikiPageLength
- 7818
- Wikipage page ID
- 39587805
- Wikipage revision ID
- 1072866418
- WikiPageUsesTemplate
- Template:Cite book
- Template:More footnotes
- Template:Reflist