Stochastic Submodular Maximization via Polynomial Estimators

Published in PAKDD 2023

In this paper, we study stochastic submodular maximization problems with general matroid constraints, which naturally arise in online learning, team formation, facility location, influence maximization, active learning, and sensing objectives. In other words, we focus on maximizing submodular functions that are defined as expectations over a class of submodular functions with an unknown distribution. We show that for monotone functions of this form, the stochastic continuous greedy algorithm attains an approximation ratio (in expectation) arbitrarily close to (1 − 1/e) ≈ 63% using a polynomial estimator of the gradient. We argue that using this polynomial estimator instead of the prior-art sampling-based estimators eliminates a source of randomness and, experimentally, reduces execution time.
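For intuition, here is a minimal, self-contained sketch (not the paper's implementation) of the underlying idea: when the objective is a multilinear polynomial over set indicators, the gradient of its multilinear extension is available in closed form, whereas the standard approach estimates it by sampling random sets. The toy second-degree objective, its coefficients, and the sample budget below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                   # ground-set size (illustrative)
a = rng.random(n)                       # linear coefficients (assumed)
B = np.triu(rng.random((n, n)), k=1)    # pairwise coefficients (assumed)

def f(x):
    """Toy second-degree multilinear polynomial over an indicator vector x."""
    return a @ x + x @ B @ x

def grad_polynomial(y):
    """Closed-form gradient of the multilinear extension G(y) = E_{S~y}[f(S)].
    Because f is multilinear, G(y) = a·y + y^T B y, so no sampling is needed."""
    return a + (B + B.T) @ y

def grad_sampling(y, num_samples=1000):
    """Baseline: estimate dG/dy_i = E[f(S with i) - f(S without i)] over S ~ y."""
    grad = np.zeros(n)
    for _ in range(num_samples):
        s = (rng.random(n) < y).astype(float)
        for i in range(n):
            s_with, s_without = s.copy(), s.copy()
            s_with[i], s_without[i] = 1.0, 0.0
            grad[i] += f(s_with) - f(s_without)
    return grad / num_samples

y = rng.random(n)                       # a fractional point in [0, 1]^n
print("polynomial (exact):", np.round(grad_polynomial(y), 3))
print("sampling  (approx):", np.round(grad_sampling(y), 3))
```

The sampling baseline only approaches the closed-form gradient as the sample budget grows, which is the source of randomness and extra computation that the polynomial estimator avoids.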

Recommended citation: Özcan, Gözde, and Stratis Ioannidis. (2023). "Stochastic Submodular Maximization via Polynomial Estimators." Pacific-Asia Conference on Knowledge Discovery and Data Mining.
paper | slides