Topic Description
In classical statistics, the focus has primarily been on how inferential accuracy improves with an increasing number of data points, with little attention paid to computational complexity. Indeed, when constrained to achieve a specific level of accuracy within a limited time budget, classical statistical theory does not provide guidance on how to devise an effective inferential strategy. Ideally, one would like to develop a statistically optimal method for a given task that is also computationally efficient. In various contexts, however, it appears that statistical optimality and computational tractability are at odds with each other. A key question is whether these tradeoffs are inherent. An information-computation (IC) tradeoff for a statistical task describes a scenario in which no computationally efficient algorithm can achieve the task's information-theoretic limits.
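A canonical example, included here to make the definition concrete, is the planted clique problem: one must detect a clique of size k planted in an Erdős–Rényi random graph G(n, 1/2). Detection is information-theoretically possible whenever k ≥ (2 + ε) log₂ n, since the largest clique in G(n, 1/2) has size roughly 2 log₂ n, yet all known polynomial-time algorithms require k = Ω(√n). The conjectured hardness in the intermediate regime is a prototypical IC tradeoff.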
When faced with a statistical task, how can we determine whether an IC tradeoff exists? Unfortunately, traditional methods from worst-case complexity theory, such as NP-hardness, seem inadequate for this purpose: statistical tasks are average-case problems whose inputs are drawn from a distribution, whereas NP-hardness concerns worst-case instances. Recent work at the interface of theoretical computer science and statistics has made notable progress on this broad question. Roughly speaking, two interconnected approaches have emerged. One approach, in the spirit of classical complexity theory, involves developing efficient reductions from problems believed to be hard on average (e.g., planted clique or Learning With Errors (LWE)). The other approach focuses on establishing unconditional lower bounds within broad (yet restricted) computational models, such as Statistical Query (SQ) algorithms, low-degree polynomials, and Sum-of-Squares (SoS) algorithms. These complementary methodologies have provided strong evidence of IC tradeoffs across a variety of statistical tasks. Despite these recent advances, we are still a long way from a unified theory that identifies the underlying causes of these tradeoffs.
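To give a flavor of the second approach: in the low-degree polynomial framework, one contrasts a planted distribution P with a null distribution Q by studying the likelihood ratio L = dP/dQ. If the projection of L onto polynomials of degree at most D ≈ log n has bounded L²(Q)-norm as the problem size grows, this is interpreted, following a now-standard conjecture, as evidence that no polynomial-time algorithm can distinguish P from Q.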
Objectives
The workshop will provide an introduction to IC tradeoffs in statistical tasks, covering recent techniques, results, and emerging research directions. While the workshop is designed to be accessible to a broad theoretical computer science and statistics audience, it will also emphasize significant recent advances and open questions. Key questions to be addressed during the workshop include: What IC tradeoffs are currently known for central average-case problems (e.g., planted clique, non-Gaussian component analysis (NGCA), and sparse PCA)? How can we extend the study of IC tradeoffs to problems where our understanding remains limited? What are the connections between the various restricted computational models (e.g., SQ and SoS algorithms), and how do lower bounds in these models relate to reduction-based hardness? Under what circumstances can lower bounds in restricted models be circumvented by algorithms outside those models? Ultimately, we aim for this workshop to lay the groundwork for more researchers to engage with the field, potentially leading to further progress and connections with other areas of study.
List of Participants (Tentative)
Ilias Diakonikolas (UW Madison)
Chao Gao (U Chicago)
Jingyi Gao (UW Madison)
Brice Huang (MIT)
Giannis Iakovidis (UW Madison)
He Jia (Northwestern)
Daniel Kane (UCSD)
Fred Koehler (U Chicago)
Tim Kunisky (Johns Hopkins)
Jerry Li (UW)
Sihan Liu (UCSD)
Yuetian Luo (U Chicago)
Mingchen Ma (UW Madison)
Ansh Nagda (Berkeley)
Shuo Pang (Bristol)
Ankit Pensia (Simons Institute)
Thanasis Pittas (UW Madison)
Aaron Potechin (U Chicago)
Lisheng Ren (UW Madison)
Mark Sellke (Harvard)
David Steurer (ETH)
Neekon Vafa (MIT)
Aravindan Vijayaraghavan (Northwestern)
Alex Wein (UC Davis)
Jeff Xu (CMU)
Ilias Zadik (Yale)
Nikos Zarifis (UW Madison)