We study the generalization error of randomized learning algorithms—focusing on stochastic gradient descent (SGD)—using a novel combination of PAC-Bayes and algorithmic stability. Importantly, our generalization bounds hold for all posterior distributions on an algorithm’s random hyperparameters, including distributions that depend on the training data. This inspires an adaptive sampling algorithm for SGD that optimizes the posterior at runtime. We analyze this algorithm in the context of our generalization bounds and evaluate it on a benchmark dataset. Our experiments demonstrate that adaptive sampling can reduce empirical risk faster than uniform sampling while also improving out-of-sample accuracy.
Note: All references to Kuzborskij & Lampert (2017) are to v2 of the manuscript, which was posted to arXiv in March 2017. Importantly, Theorem 3 therein (a stability bound for convex losses), which is used to prove Proposition 3 in this paper, has a different form than in the final version.