# Papers

RSN: Randomized Subspace Newton, 2019.

Optimal mini-batch and step sizes for SAGA, ICML 2019.

SGD: general analysis and improved rates, ICML 2019 (20 min oral presentation).

Characterising particulate random media from near-surface backscattering: A machine learning approach to predict particle size and concentration, EPL (Europhysics Letters), 2018.

Improving SAGA via a probabilistic interpolation with gradient descent, 2018.

Stochastic quasi-gradient methods: variance reduction via Jacobian sketching, 2018.

Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization, NIPS, 2018.

Greedy stochastic algorithms for entropy-regularized optimal transport problems, AISTATS, 2018.

Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods, AISTATS (oral presentation), 2018.

Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms, SIAM Journal on Matrix Analysis and Applications, 2017.

Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse, 2016.

Sketch and Project: Randomized Iterative Methods for Linear Systems and Inverting Matrices, PhD Dissertation, School of Mathematics, The University of Edinburgh, 2016.

Stochastic Block BFGS: Squeezing More Curvature out of Data, ICML, 2016.

Stochastic dual ascent for solving linear systems, 2015.

Randomized iterative methods for linear systems, SIAM Journal on Matrix Analysis and Applications, 2015.

High order reverse automatic differentiation with emphasis on the third order, Mathematical Programming, 2014.

Computing the sparsity pattern of Hessians using automatic differentiation, ACM Transactions on Mathematical Software, 2014.

A new framework for Hessian automatic differentiation, Optimization Methods and Software, 2012.

# Recent & Upcoming Talks

Expected smoothness is the key to understanding the mini-batch complexity of stochastic gradient methods, ICCOPT 2019, Aug 5, 2019.

# Teaching

### African Masters of Machine Intelligence (AMMI) (Winter 2019)

1) Lecture I: Introduction to ML and optimization
2) Exercises on convexity, smoothness and gradient descent
3) Lecture II: Proximal gradient methods
4) Exercises on the proximal operator
5) Lecture III: Stochastic gradient descent
6) Exercises on stochastic methods
7) Lecture IV: Stochastic variance reduced gradient methods (see the sketch after this list)
8) Notes on stochastic variance reduced methods
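
As a rough companion to Lecture IV, here is a minimal sketch of SVRG on a least-squares objective. The function name, problem, step size and defaults are illustrative choices of mine, not taken from the course material.

```python
import numpy as np

def svrg_least_squares(A, b, stepsize=0.01, n_epochs=10, seed=0):
    """Minimal SVRG sketch for min_x (1/2n) ||Ax - b||^2.

    Each epoch computes the full gradient at a reference point; the
    inner loop then steps along the variance-reduced direction
    g_i(x) - g_i(x_ref) + full_grad, whose variance vanishes as the
    iterates approach the solution.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n       # full gradient at the reference point
        for _ in range(n):                          # one pass over the data per epoch
            i = rng.integers(n)
            g_i = A[i] * (A[i] @ x - b[i])          # stochastic gradient at x
            g_i_ref = A[i] * (A[i] @ x_ref - b[i])  # same sample at the reference point
            x -= stepsize * (g_i - g_i_ref + full_grad)
    return x
```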

### MDI210 Optimization and Numerical Analysis (Summer 2018)

Here are some good lecture notes on Linear Programming by Marco Chiarandini. Here are my own lecture notes (WARNING: these notes are a work in progress!):
1) Numerical linear algebra
2) Nonlinear optimization
3) My lecture slides on Linear Programming

### Master2 Optimization for Data Science (2018/2019)

Lecture slides:
1) Introduction to ML
2) Convexity and smoothness
3) Proximity operator, ISTA and FISTA (see the sketch below)

Lecture notes on gradient descent proofs.
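
To make the proximity-operator slides concrete, here is a minimal ISTA sketch for the lasso. The soft-thresholding formula is the standard proximity operator of the scaled l1 norm; the function names and defaults are my own illustrative choices, not from the slides.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimal ISTA sketch for the lasso: min_x (1/2) ||Ax - b||^2 + lam * ||x||_1.

    Each iteration takes a gradient step on the smooth part with step
    size 1/L, then applies the proximity operator of the l1 term.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```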

Exercises:
1) Convexity and smoothness (with answers)
2) GD and SGD on linear least squares (with answers; see the sketch below)
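
For the second exercise, here is a minimal sketch comparing GD and SGD on linear least squares. The problem sizes, step sizes and toy data are my own assumptions, not taken from the exercise sheet.

```python
import numpy as np

def gd(A, b, n_iter=100):
    """Full gradient descent on (1/2n) ||Ax - b||^2 with step size 1/L."""
    n, d = A.shape
    L = np.linalg.norm(A, 2) ** 2 / n      # smoothness constant of the objective
    x = np.zeros(d)
    for _ in range(n_iter):
        x -= (A.T @ (A @ x - b) / n) / L
    return x

def sgd(A, b, stepsize=0.01, n_iter=2000, seed=0):
    """SGD with a fixed step size: samples one row per iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        x -= stepsize * A[i] * (A[i] @ x - b[i])
    return x

# Toy comparison on synthetic, consistent data (all values here are made up).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
b = A @ rng.standard_normal(5)
print(np.linalg.norm(A @ gd(A, b) - b), np.linalg.norm(A @ sgd(A, b) - b))
```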
Contact: gowerrobert@gmail.com