# Papers

Adaptive Sketch-and-Project Methods for Solving Linear Systems, 2019.

Towards closing the gap between the theory and practice of SVRG, NeurIPS 2019.

RSN: Randomized Subspace Newton, NeurIPS 2019.

Optimal mini-batch and step sizes for SAGA, ICML 2019.

SGD: general analysis and improved rates, (extended oral presentation) ICML 2019.

Characterising particulate random media from near-surface backscattering: A machine learning approach to predict particle size and concentration. EPL (Europhysics Letters), 2018.

Improving SAGA via a probabilistic interpolation with gradient descent, 2018.

Stochastic quasi-gradient methods: variance reduction via Jacobian sketching, 2018.

Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization, NeurIPS, 2018.

Greedy stochastic algorithms for entropy-regularized optimal transport problems, AISTATS, 2018.

Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods, AISTATS (Oral presentation), 2018.

Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms, SIAM Journal on Matrix Analysis and Applications, 2017.

Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse, 2016.

Sketch and Project: Randomized Iterative Methods for Linear Systems and Inverting Matrices, PhD Dissertation, School of Mathematics, The University of Edinburgh, 2016.

Stochastic Block BFGS: Squeezing More Curvature out of Data, ICML, 2016.

Stochastic dual ascent for solving linear systems, 2015.

Randomized iterative methods for linear systems, SIAM Journal on Matrix Analysis and Applications, 2015.

High order reverse automatic differentiation with emphasis on the third order, Mathematical Programming, 2014.

Computing the sparsity pattern of Hessians using automatic differentiation, ACM Transactions on Mathematical Software, 2014.

A new framework for Hessian automatic differentiation, Optimization Methods and Software, 2012.

# Reports and Notes

Train Positioning Using Video Odometry, 2014.

Action constrained quasi-Newton methods, Technical Report ERGO 14-020, 2014.

Conjugate Gradients: The short and painful explanation with oblique projections.

Hessian matrices via automatic differentiation, State University of Campinas technical report and MSc thesis, 2011.

Efficient calculation of derivatives through graph coloring, State University of Campinas technical report, undergraduate project, 2009.

# Recent & Upcoming Talks

ICCOPT 2019
Aug 5, 2019
Expected smoothness is the key to understanding the mini-batch complexity of stochastic gradient methods

# Teaching

### Master IASD: AI Systems and Data Science (Fall 2019)

For prerequisites see here. For revision of vector calculus see here.
The course information can be found here.
1) Slides on introduction to SGD and ERM
2) Lecture notes on probability revision
3) Exercise list on stochastic methods for ridge regression (solutions)
4) Slides on SGD and variants
5) Exercise on SGD proof (solutions)
6) Python notebook on SGD (solutions)

### Telecom Paris IA317: Large scale machine learning (Fall 2019)

The course information can be found here.
For prerequisites and revision material see here.
1) Lecture notes on dimension reduction tools and sparse matrices
2) Exercise list on dimension reduction tools and sparse matrices
3) Python Notebook graded homework on dimension reduction tools and sparse matrices. Data sets needed for homework: colon-cancer, anthracyclineTaxaneChemotherapy and sector.scale.

### MDI210: Optimization and Numerical Analysis (Fall 2019)

Here are some good lecture notes on Linear Programming by Marco Chiarandini. Here are my notes and slides (WARNING: these are a work in progress!).
1) Lecture notes on numerical linear algebra
2) Lecture notes on linear and nonlinear optimization
3) Slides on Linear Programming

### Master2 Optimization for Data Science (Fall 2019)

For prerequisites see here. For revision of vector calculus see here.
Lecture notes on gradient descent proofs.

0) Exercises on convexity and smoothness (solutions)
1) Exercises on complexity and convergence rates (solutions)
2) Lecture I: intro to ML, convexity, smoothness and gradient descent
3) Exercises ridge regression and gradient descent (solutions)
4) Lecture II: proximal gradient methods
5) Exercises on proximal operator (solutions)
6) Lecture III: Stochastic gradient descent
7) Exercises on stochastic methods for ridge regression (solutions)
8) Exercise on SGD proof (solutions)
9) Lecture IV: Stochastic variance reduced gradient methods
10) Exercise on variance reduction, proof of convergence of SVRG
11) Lecture V: Sampling and momentum
12) Exercise on sampling and momentum
13) Python notebook on momentum

### African Masters of Machine Intelligence (AMMI) (Winter 2019)

1) Lecture I: Introduction to ML and optimization
2) Exercises on convexity, smoothness and gradient descent
3) Lecture II: proximal gradient methods
4) Exercises on proximal operator
5) Lecture III: Stochastic gradient descent
6) Exercises on stochastic methods
7) Lecture IV: Stochastic variance reduced gradient methods
8) Notes on stochastic variance reduced methods

# Contact

• gowerrobert@gmail.com
• Télécom Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France. Office: 5.C45
• Email for an appointment.