Accepted author manuscript, 8.24 MB, PDF document
Available under license: CC BY (Creative Commons Attribution 4.0 International)
Research output: Contribution to Journal/Magazine › Conference article › peer-review
TY - JOUR
T1 - Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
AU - Dodd, Daniel
AU - Sharrock, Louis
AU - Nemeth, Christopher
N1 - In: Proceedings of the 41st International Conference on Machine Learning (ICML), Vienna, Austria.
PY - 2024/5/1
Y1 - 2024/5/1
N2 - In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
AB - In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
M3 - Conference article
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
SN - 1938-7228
ER -