# Papers

Adaptive Sketch-and-Project Methods for Solving Linear Systems, 2019.

Towards closing the gap between the theory and practice of SVRG, NeurIPS 2019.

RSN: Randomized Subspace Newton, NeurIPS 2019.

Optimal mini-batch and step sizes for SAGA, ICML 2019.

SGD: general analysis and improved rates, ICML 2019 (extended oral presentation).

Characterising particulate random media from near-surface backscattering: A machine learning approach to predict particle size and concentration, EPL (Europhysics Letters), 2018.

Improving SAGA via a probabilistic interpolation with gradient descent, 2018.

Stochastic quasi-gradient methods: variance reduction via Jacobian sketching, 2018.

Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization, NIPS, 2018.

Greedy stochastic algorithms for entropy-regularized optimal transport problems, AISTATS, 2018.

Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods, AISTATS (oral presentation), 2018.

Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms, SIAM Journal on Matrix Analysis and Applications, 2017.

Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse, 2016.

Sketch and Project: Randomized Iterative Methods for Linear Systems and Inverting Matrices, PhD Dissertation, School of Mathematics, The University of Edinburgh, 2016.

Stochastic Block BFGS: Squeezing More Curvature out of Data, ICML, 2016.

Stochastic dual ascent for solving linear systems, 2015.

Randomized iterative methods for linear systems, SIAM Journal on Matrix Analysis and Applications, 2015.

High order reverse automatic differentiation with emphasis on the third order, Mathematical Programming, 2014.

Computing the sparsity pattern of Hessians using automatic differentiation, ACM Transactions on Mathematical Software, 2014.

A new framework for Hessian automatic differentiation, Optimization Methods and Software, 2012.

# Reports and Notes

Train Positioning Using Video Odometry, 2014.

Action constrained quasi-Newton methods, Technical Report ERGO 14-020, 2014.

Conjugate Gradients: The short and painful explanation with oblique projections.

Hessian matrices via automatic differentiation, State University of Campinas technical report and MSc thesis, 2011.

Efficient calculation of derivatives through graph coloring, State University of Campinas technical report, undergraduate project, 2009.

# Recent & Upcoming Talks

ICCOPT 2019
Aug 5, 2019
Expected smoothness is the key to understanding the mini-batch complexity of stochastic gradient methods

# Teaching

### African Masters of Machine Intelligence (AMMI) (Winter 2019)

1) Lecture I: Introduction to ML and optimization
2) Exercises on convexity, smoothness and gradient descent
3) Lecture II: proximal gradient methods
4) Exercises on proximal operator
5) Lecture III: Stochastic gradient descent
6) Exercises on stochastic methods
7) Lecture IV: Stochastic variance reduced gradient methods
8) Notes on stochastic variance reduced methods
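As a quick illustration of the topic of Lecture I and the first exercise sheet (not the course's own material), here is a minimal sketch of gradient descent on a smooth least-squares objective, using the standard 1/L step size, where L is the smoothness constant:

```python
import numpy as np

# Illustrative sketch (assumed data, not course code): gradient descent on
# f(x) = 0.5 * ||A x - b||^2, with step size 1/L where L = ||A||_2^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

L = np.linalg.norm(A, 2) ** 2            # smoothness constant of f
x = np.zeros(10)
for _ in range(500):
    x = x - (1.0 / L) * (A.T @ (A @ x - b))  # gradient step

# Compare against the least-squares solution computed directly.
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x - x_star))        # small: iterates approach x_star
```

Since f is strongly convex here (A has full column rank with high probability), the iterates converge linearly to the least-squares solution.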

### MDI210 Optimization and Numerical Analysis (Fall 2019)

Here are some good lecture notes on Linear Programming by Marco Chiarandini. Here are my notes and slides (WARNING: these notes are a work in progress!).
1) Lecture notes on numerical linear algebra
2) Lecture notes on linear and nonlinear optimization
3) Slides on Linear Programming

### Master2 Optimization for Data Science (Fall 2019)

For prerequisites and revision material see here
Lecture notes on gradient descent proofs.

0) Exercises on convexity and smoothness
1) Exercises on complexity and convergence rates
2) Lecture I: intro to ML, convexity, smoothness and gradient descent
3) Exercises ridge regression and gradient descent
4) Lecture II: proximal gradient methods
5) Exercises on proximal operator
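As a small illustration of the proximal-operator exercises above (an assumed example, not the course's own code), here is the proximal operator of the l1 norm, which reduces to componentwise soft-thresholding:

```python
import numpy as np

# Illustrative sketch: prox of t*||.||_1 is soft-thresholding,
# prox(v)_i = sign(v_i) * max(|v_i| - t, 0).
def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, 1.0))  # entries shrunk toward zero by the threshold t = 1
```

This closed form is what makes the proximal gradient method (ISTA) cheap to run for lasso-type problems.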

# Contact

• gowerrobert@gmail.com
• Télécom ParisTech, 46 Rue Barrault, 75634 Paris Cedex 13, France. Office: E 409.
• Email for an appointment.