
About

ahmed.khaled@princeton.edu

Welcome to my tiny corner of the internet! I'm Ahmed, and I work on optimization and machine learning. I'm a first-year Ph.D. student in the ECE department at Princeton University, advised by Prof. Chi Jin. I am interested in federated learning, convex optimization, and reinforcement learning.

Before joining Princeton, I was fortunate to intern in the group of Prof. Peter Richtárik at KAUST in the summers of 2019 and 2020, where I worked on distributed and stochastic optimization. Prior to that, I did some applied research on accelerating the training of neural networks with Prof. Amir Atiya.

Preprints

Faster Federated Optimization Under Second-Order Similarity
Preprint (2022), with Chi Jin. (bibtex).
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Preprint (2022), with Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Konstantin Burlachenko, and Peter Richtárik. (bibtex).
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
Preprint (2020), with Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, and Peter Richtárik. (bibtex).
Better Theory for SGD in the Nonconvex World
Preprint (2020), with Peter Richtárik. (bibtex).
Distributed Fixed Point Methods with Compressed Iterates
Preprint (2019), with Sélim Chraibi, Dmitry Kovalev, Peter Richtárik, Adil Salim, and Martin Takáč. (bibtex).

Publications

Proximal and Federated Random Reshuffling
The 39th International Conference on Machine Learning (ICML 2022), with Konstantin Mishchenko and Peter Richtárik. (bibtex).
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
The 25th International Conference on Artificial Intelligence and Statistics (AISTATS 2022), with Elnur Gasanov, Samuel Horváth, and Peter Richtárik. (bibtex).
Random Reshuffling: Simple Analysis with Vast Improvements
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Tighter Theory for Local SGD on Identical and Heterogeneous Data
The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), with Konstantin Mishchenko and Peter Richtárik. (bibtex). Extends the workshop papers (a, b).
Applying Fast Matrix Multiplication to Neural Networks
The 35th ACM/SIGAPP Symposium on Applied Computing (ACM SAC 2020), with Amir F. Atiya and Ahmed H. Abdel-Gawad. (bibtex).

Workshop papers

Better Communication Complexity for Local SGD
Oral presentation at the NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Konstantin Mishchenko and Peter Richtárik. (bibtex).
First Analysis of Local GD on Heterogeneous Data
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Gradient Descent with Compressed Iterates
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Peter Richtárik. (bibtex).

Talks

On the Convergence of Local SGD on Identical and Heterogeneous Data
Federated Learning One World Seminar (2020). Video and Slides.