Posts by Collection

publications

Submodular Maximization via Taylor Series Approximation

Published in SDM 2021.

We study submodular maximization problems with matroid constraints, in particular, problems where the objective can be expressed via compositions of analytic and multilinear functions. We show that for functions of this form, the so-called continuous greedy algorithm attains a ratio arbitrarily close to (1 − 1/e) ≈ 0.63 using a deterministic estimate obtained via a Taylor series approximation. This drastically reduces execution time over prior art that relies on sampling.
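
The idea can be made concrete with a small sketch (illustrative only, not the paper's implementation). For one member of this function class, f(S) = log(1 + Σ_{i∈S} w_i), the multilinear extension is F(y) = E[log(1 + X)], where X is a weighted sum of independent Bernoulli(y_i) variables; F(y) can be estimated deterministically by truncating the Taylor series of log(1 + x) around E[X] and computing the required moments of X exactly. The function names and the truncation degree L below are assumptions for illustration:

    import numpy as np
    from math import comb, factorial

    def raw_moments(w, y, L):
        """Exact raw moments E[X^k], k = 0..L, of X = sum_i w_i * Bernoulli(y_i)
        with independent coordinates, via a DP over the ground set."""
        M = np.zeros(L + 1)
        M[0] = 1.0  # moments of the empty sum X = 0
        for wi, yi in zip(w, y):
            new = np.zeros(L + 1)
            for k in range(L + 1):
                # E[(X + wi*B)^k] with B ~ Bernoulli(yi), expanded binomially
                inc = sum(comb(k, t) * wi**t * M[k - t] for t in range(k + 1))
                new[k] = (1 - yi) * M[k] + yi * inc
            M = new
        return M

    def taylor_multilinear_log(w, y, L=4):
        """Deterministic Taylor estimate of F(y) = E[log(1 + X)], expanding
        log(1 + x) around a = E[X] and truncating at degree L."""
        a = float(np.dot(w, y))
        M = raw_moments(w, y, L)
        # central moments E[(X - a)^k] from the raw moments
        C = [sum(comb(k, t) * M[t] * (-a)**(k - t) for t in range(k + 1))
             for k in range(L + 1)]
        est = np.log1p(a)  # k = 0 term of the expansion
        for k in range(1, L + 1):
            deriv = (-1)**(k - 1) * factorial(k - 1) / (1 + a)**k  # k-th derivative of log(1+x) at a
            est += deriv / factorial(k) * C[k]
        return est

Every term of the estimate is computed in closed form, so, unlike a sampling estimator, repeated calls return the same value.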

Recommended citation: Özcan, Gözde, Armin Moharrer, and Stratis Ioannidis. (2021). "Submodular Maximization via Taylor Series Approximation." Proceedings of the 2021 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics.
paper | code | slides | talk

Stochastic Submodular Maximization via Polynomial Estimators

Published in PAKDD 2023.

In this paper, we study stochastic submodular maximization problems with general matroid constraints, which naturally arise in online learning, team formation, facility location, influence maximization, active learning, and sensing. In other words, we focus on maximizing submodular functions that are defined as expectations over a class of submodular functions with an unknown distribution. We show that for monotone functions of this form, the stochastic continuous greedy algorithm attains an approximation ratio (in expectation) arbitrarily close to (1 − 1/e) ≈ 0.63 using a polynomial estimation of the gradient. We argue that using this polynomial estimator instead of the prior art that uses sampling eliminates a source of randomness and experimentally reduces execution time.
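
Continuing the sketch above (again with illustrative names, not the paper's code), a polynomial estimator plugs into the continuous greedy loop as follows: by multilinearity, each partial derivative of F is a difference of two extension values, and the linear maximization over a uniform-matroid polytope of rank k reduces to picking the k largest gradient coordinates. The stochastic setting of the paper would additionally average such estimates over sampled functions:

    def grad_estimate(w, y, L=4):
        """Polynomial estimate of the gradient of F: by multilinearity,
        dF/dy_i = F(y with y_i = 1) - F(y with y_i = 0)."""
        g = np.zeros_like(y)
        for i in range(len(y)):
            hi, lo = y.copy(), y.copy()
            hi[i], lo[i] = 1.0, 0.0
            g[i] = taylor_multilinear_log(w, hi, L) - taylor_multilinear_log(w, lo, L)
        return g

    def continuous_greedy(w, k, T=50, L=4):
        """Continuous greedy over a uniform matroid of rank k: repeatedly move
        toward the polytope vertex best aligned with the estimated gradient."""
        n = len(w)
        y = np.zeros(n)
        for _ in range(T):
            g = grad_estimate(w, y, L)
            v = np.zeros(n)
            v[np.argsort(-g)[:k]] = 1.0  # linear maximization: top-k coordinates
            y += v / T
        return np.clip(y, 0.0, 1.0)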

Recommended citation: Özcan, Gözde, and Stratis Ioannidis. (2023). "Stochastic Submodular Maximization via Polynomial Estimators." Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD).
paper | code | slides

Online Submodular Maximization via Online Convex Optimization

Published in AAAI 2024.

We study monotone submodular maximization under general matroid constraints in the online setting. We prove that online optimization of a large class of submodular functions, namely, weighted threshold potential functions, reduces to online convex optimization (OCO). This is precisely because functions in this class admit a concave relaxation; as a result, OCO policies, coupled with an appropriate rounding scheme, can be used to achieve sublinear regret in the combinatorial setting. We show that our reduction extends to many different versions of the online learning problem, including the dynamic regret, bandit, and optimistic-learning settings.
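
A minimal sketch of the reduction, under simplifying assumptions (uniform matroid of rank k, Euclidean projection, and a crude top-k stand-in for the paper's rounding scheme; all names below are illustrative): a weighted threshold potential f_t(S) = min(b_t, Σ_{i∈S} w_{t,i}) has the concave relaxation min(b_t, w_t · y), so each round amounts to a projected supergradient step of OCO:

    import numpy as np

    def project_uniform_matroid(x, k, iters=50):
        """Euclidean projection onto {y in [0,1]^n : sum(y) <= k},
        via bisection on a uniform shift tau."""
        y = np.clip(x, 0.0, 1.0)
        if y.sum() <= k:
            return y
        lo, hi = 0.0, float(x.max())
        for _ in range(iters):
            tau = 0.5 * (lo + hi)
            if np.clip(x - tau, 0.0, 1.0).sum() > k:
                lo = tau
            else:
                hi = tau
        return np.clip(x - hi, 0.0, 1.0)

    def online_threshold_potentials(stream, n, k, eta=0.1):
        """Each round: commit to a feasible set, observe (b_t, w_t), then take a
        projected supergradient step on the concave relaxation min(b_t, w_t . y)."""
        y = np.full(n, k / n)
        picks = []
        for b_t, w_t in stream:
            picks.append(np.argsort(-y)[:k])  # stand-in rounding; the paper couples OCO with a proper scheme
            g = w_t if float(w_t @ y) < b_t else np.zeros(n)  # supergradient of the relaxation
            y = project_uniform_matroid(y + eta * g, k)
        return picks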

Recommended citation: Si Salem, Tareq, Gözde Özcan, Iasonas Nikolaou, Evimaria Terzi, and Stratis Ioannidis. (2024). "Online Submodular Maximization via Online Convex Optimization." Proceedings of the AAAI Conference on Artificial Intelligence.
paper | code | poster

teaching

EEE 211 Teaching Assistant

Analog Electronics, Bilkent University, Department of Electrical and Electronics Engineering, 2016

  • Supervised hands-on Analog Electronics laboratory sessions for 40+ sophomore-level students.

EECE 5645 Teaching Assistant

Parallel Processing & Data Analytics, Northeastern University, Department of Electrical and Computer Engineering, 2020

  • Held one-on-one office hours, with a focus on PySpark programming, for 50+ Master's-level students.