Optimal mini-batch and step sizes for SAGA, to appear in ICML 2019.

Preprint Code

SGD: general analysis and improved rates, to appear in ICML 2019.


Improving SAGA via a probabilistic interpolation with gradient descent, 2018.


Stochastic quasi-gradient methods: variance reduction via Jacobian sketching, 2018.

Preprint Code

Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization, NIPS, 2018.

Preprint Code Proceedings Poster

Greedy stochastic algorithms for entropy-regularized optimal transport problems, AISTATS, 2018.

Preprint Proceedings Poster

Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods, AISTATS (Oral presentation), 2018.

Preprint Code Proceedings Slides Poster

Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms, SIAM Journal on Matrix Analysis and Applications, 2017.

Preprint Code Journal Slides

Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse, 2016.

Preprint Code Slides

Sketch and Project: Randomized Iterative Methods for Linear Systems and Inverting Matrices, PhD Dissertation, School of Mathematics, The University of Edinburgh, 2016.

Preprint Code Slides

Stochastic Block BFGS: Squeezing More Curvature out of Data, ICML, 2016.

Preprint Code Proceedings Slides Poster

Stochastic dual ascent for solving linear systems, 2015.

Preprint Code

Randomized iterative methods for linear systems, SIAM Journal on Matrix Analysis and Applications, 2015.

Preprint Journal Slides Code — Most downloaded paper in SIMAX

High order reverse automatic differentiation with emphasis on the third order, Mathematical Programming, 2014.

Preprint Journal Slides Code

Computing the sparsity pattern of Hessians using automatic differentiation, ACM Transactions on Mathematical Software, 2014.

Preprint Journal Code

A new framework for Hessian automatic differentiation, Optimization Methods and Software, 2012.

Preprint Journal Code

Recent & Upcoming Talks

Aug 5, 2019
Expected smoothness is the key to understanding the mini-batch complexity of stochastic gradient methods



  • gowerrobert$@$
  • Télécom ParisTech, 46 Rue Barrault, 75634 Paris Cedex 13, France. Office: E 409
  • Email for an appointment