In the literature, the convergence analysis of these algorithms relies on strong convexity of the objective function. To our knowledge, no rate statements exist for the case where the objective function is merely convex.
2016/03/15 · We show that, under suitable assumptions on the stepsize and regularization parameters, the objective function value converges to the optimal objective function value.
We prove that mSRGTR-BB converges linearly in expectation for both strongly convex and non-strongly convex objective functions. With proper parameters, mSRGTR-BB enjoys a ...
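The "BB" in mSRGTR-BB refers to the Barzilai–Borwein stepsize. The following is a minimal sketch of that stepsize rule only, not the paper's mSRGTR-BB method: full-gradient descent on a convex quadratic, where the stepsize alpha_k = (s^T s) / (s^T y) is computed from the last displacement s and gradient change y (the problem data A, b and the iteration budget are illustrative assumptions).

```python
import numpy as np

# Minimal sketch (not mSRGTR-BB itself): gradient descent on the convex
# quadratic f(x) = 0.5 x^T A x - b^T x with the Barzilai-Borwein (BB1)
# stepsize alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and
# y = grad f(x_k) - grad f(x_{k-1}).
A = np.array([[3.0, 1.0], [1.0, 2.0]])  # symmetric positive definite (assumed data)
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x_prev, x = np.zeros(2), np.array([1.0, 1.0])
for _ in range(100):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:       # converged; avoids a 0/0 stepsize below
        break
    s, y = x - x_prev, g - grad(x_prev)
    alpha = (s @ s) / (s @ y)           # BB1 stepsize; s^T y > 0 since A is SPD
    x_prev, x = x, x - alpha * g

x_star = np.linalg.solve(A, b)          # exact minimizer, for comparison
```

The appeal of the BB rule is that it extracts curvature information from two consecutive gradients without forming or storing a Hessian approximation.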
This work allows the objective function to be merely convex, develops a regularized SQN method, and shows that the function value converges to its optimal value.
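To make the regularization idea concrete, here is a hedged sketch of that component alone (it omits the quasi-Newton matrix entirely, and all constants and decay rates are assumptions, not the paper's parameters): a merely convex stochastic least-squares problem with non-unique minimizers, where each step uses the gradient of the regularized function f(x) + (mu_k / 2)||x||^2 with both the stepsize gamma_k and regularization mu_k driven to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged illustration of diminishing regularization only (not the paper's SQN
# scheme): f(x) = 0.5 * E[(a^T x - a^T x_star)^2], where every sampled a is
# parallel to d, so f is merely convex (flat orthogonal to d) and has
# infinitely many minimizers.
d = np.array([1.0, 1.0]) / np.sqrt(2.0)
x_star = np.array([1.0, -2.0])

def stoch_grad(x):
    a = rng.standard_normal() * d        # rank-one data: no strong convexity
    return (a @ x - a @ x_star) * a

x = np.zeros(2)
for k in range(1, 20001):
    gamma = 0.1 / k**0.6                 # diminishing stepsize (assumed rate)
    mu = 1.0 / k**0.3                    # diminishing regularization (assumed rate)
    x = x - gamma * (stoch_grad(x) + mu * x)

# Among the many minimizers, the vanishing Tikhonov term singles out the
# least-norm one, i.e. the projection of x_star onto span(d).
least_norm = d * (d @ x_star)
```

A design note: with mu_k fixed, the iterates would converge to a biased point; letting mu_k decay removes the bias while still selecting a well-defined limit among the non-unique minimizers.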
Abstract—Motivated by applications in optimization and machine learning, we consider stochastic quasi-Newton (SQN) methods for solving stochastic ...
We consider stochastic quasi-Newton (SQN) methods for solving unconstrained convex optimization problems.
2017/10/16 · The convergence analysis of the SQN methods, both full and limited-memory variants, requires the objective function to be strongly convex.
'Stochastic quasi-Newton methods for non-strongly convex problems: Convergence and rate analysis'