Knowledge Vault 5/65 - CVPR 2021
Equilibrium Computation and Deep Learning
Constantinos Daskalakis
< Resume Image >

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4o | Llama 3:

```mermaid
graph LR
  classDef challenges fill:#f9d4d4, font-weight:bold, font-size:14px
  classDef equilibrium fill:#d4f9d4, font-weight:bold, font-size:14px
  classDef convex fill:#d4d4f9, font-weight:bold, font-size:14px
  classDef nonconvex fill:#f9f9d4, font-weight:bold, font-size:14px
  classDef multiagent fill:#f9d4f9, font-weight:bold, font-size:14px
  A[Equilibrium Computation and Deep Learning] --> B[Equilibrium computation, deep learning challenges, opportunities. 1]
  A --> C[ML models excel in difficult games, struggle in multi-agent games. 2]
  C --> D[Gradient descent struggles converging in multi-agent settings. 3]
  D --> E[Talk: gradient descent issues depth in multi-agent settings. 4]
  E --> F[Gradient descent-ascent fails in simple 2-agent zero-sum games. 5]
  A --> G[Focus: multi-agent settings, each minimizing others-dependent objective. 6]
  G --> H[Agents objectives convexity crucial for equilibria, tractability. 7]
  H --> I[Classical convex-concave min-max: not much harder than convex minimization. 8]
  I --> J[Modern non-convex non-concave min-max problems differ. 8]
  I --> K[Gradient descent oscillates in convex-concave games. 9]
  K --> L[Talk: oscillations removable or due to intractability? 9]
  H --> M[Convex games: negative momentum removes oscillations, converges. 10]
  H --> N[General-sum convex games: no-regret learning, negative momentum converge faster. 11]
  J --> O[Non-convex games: local min-max vs local min complexity. 12]
  O --> P[First-order methods find approximate non-convex local min efficiently. 13]
  O --> Q[Local min-max equilibrium complexity in non-convex non-concave unclear. 13]
  Q --> R[Computing non-convex non-concave local min-max exponentially hard. 14]
  J --> S[Min-min games: function decreases along best-response paths. 15]
  S --> T[Min-max games: function can cycle along best-response paths. 15]
  T --> U[Min-max best-response paths give no local min-max location info. 16]
  J --> V[Sperners lemma variant understands local min-max equilibria topology. 17]
  V --> W[Directed path argument proves Sperners lemma variant. 18]
  J --> X[Local min-max computation equivalent to Sperner instances colored squares. 19]
  X --> Y[Equivalence derives globally convergent second-order local min-max method. 20]
  J --> Z[Sperner reduction to local min-max proves PPAD-completeness. 21]
  Z --> AA[Local min-max, Brouwer, convex Nash are PPAD-complete. 22]
  AA --> AB[PPAD-completeness: first-order methods intractable in exponential time. 23]
  C --> AC[Multi-agent deep learning needs more domain expertise than single-agent. 24]
  J --> AD[Talk: non-convex games local min-max intractability. Open questions remain. 25]
  AD --> AE[Identify asymptotically convergent methods, potentially fast in practice. 26]
  AD --> AF[Identify structured games enabling fast local equilibria convergence. 27]
  C --> AG[Two-player zero-sum RL structure exploitable for equilibrium computation. 28]
  C --> AH[Study multi-agent RL targeting correlated, coarse correlated, no-regret equilibria. 29]
  J --> AI[Non-convex games: variational inequalities perspective. Gradient methods may solve some. 30]
  class A,B,C,D,E,F challenges
  class G,H,I,K,L,M,N equilibrium
  class J,O,P,Q,R,S,T,U,V,W,X,Y,Z,AA,AB,AD,AE,AF,AI nonconvex
  class AC,AG,AH multiagent
```

Resume:

1.- Challenges and opportunities at the interface of equilibrium computation and deep learning.

2.- Machine learning models can beat humans in difficult games but struggle in simpler games with multiple agents.

3.- Gradient descent has a hard time converging in multi-agent settings, even in simple convex-concave min-max problems.

4.- Talk investigates how deep the issues with gradient descent are in multi-agent settings.

5.- Gradient descent-ascent fails to converge even in simple 2-agent zero-sum games with scalar variables and known convex-concave objectives.
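
A minimal numerical sketch of this failure, assuming the textbook bilinear objective f(x, y) = x·y (a stand-in example, not necessarily the one from the talk): simultaneous gradient descent-ascent spirals away from the unique equilibrium at the origin.

```python
import numpy as np

# Simultaneous gradient descent-ascent on f(x, y) = x * y,
# a convex-concave zero-sum game whose unique min-max
# equilibrium is (x, y) = (0, 0).
eta = 0.1          # step size
x, y = 1.0, 1.0    # initial strategies

for t in range(100):
    gx, gy = y, x                          # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy      # x descends, y ascends

# Each step multiplies the distance to the equilibrium by
# sqrt(1 + eta^2), so the iterates spiral outward:
print(np.hypot(x, y))  # ~2.33, up from the initial distance ~1.41
```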

6.- Focus is on settings with multiple agents, each minimizing an objective dependent on others' actions. Game theory offers solution concepts.
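
In symbols (standard notation, not taken from the slides): each agent i picks x_i to minimize an objective that also depends on the other agents' actions x_{-i}; two-player zero-sum games, where f_1 = -f_2, reduce to a min-max problem.

```latex
% Agent i's problem, given the other agents' actions x_{-i}:
\min_{x_i} \; f_i(x_i, x_{-i}), \qquad i = 1, \dots, n
% Two-player zero-sum special case (f_1 = -f_2 = f):
\min_{x} \max_{y} \; f(x, y)
```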

7.- Convexity of agents' objectives is important for existence of equilibria and tractability of certain solution concepts.

8.- Classical convex-concave min-max problems are not much harder than convex minimization problems. Modern non-convex non-concave min-max problems are different.
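
The classical guarantee being contrasted here is von Neumann's minimax theorem: for f convex in x and concave in y over compact convex sets, the two optimization orders agree, so a min-max equilibrium exists and duality-based methods apply.

```latex
% von Neumann's minimax theorem (convex-concave f, compact convex X, Y):
\min_{x \in X} \max_{y \in Y} f(x, y) \;=\; \max_{y \in Y} \min_{x \in X} f(x, y)
```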

9.- Gradient descent exhibits oscillations in convex-concave games. Talk investigates if they can be removed or are due to intractability.

10.- In convex games, negative momentum optimization methods remove oscillations and achieve last-iterate convergence to min-max equilibrium.
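
A sketch of one such method on the same bilinear example, assuming optimistic gradient descent-ascent as the representative negative-momentum scheme (the talk's results cover a family of related methods): subtracting part of the previous gradient damps the rotation that makes vanilla GDA diverge.

```python
import numpy as np

# Optimistic gradient descent-ascent on f(x, y) = x * y:
# each player takes a doubled gradient step and retracts the
# previous one, i.e. applies negative momentum on the gradients.
eta = 0.1
x, y = 1.0, 1.0
gx_prev, gy_prev = 0.0, 0.0

for t in range(2000):
    gx, gy = y, x                            # current gradients
    x = x - 2 * eta * gx + eta * gx_prev     # optimistic descent
    y = y + 2 * eta * gy - eta * gy_prev     # optimistic ascent
    gx_prev, gy_prev = gx, gy

print(np.hypot(x, y))  # the last iterate itself is near (0, 0)
```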

11.- In general-sum convex games, no-regret learning with negative momentum achieves faster convergence to correlated equilibria.
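
As background on what such a method looks like (a representative update, not necessarily the exact algorithm in the talk): optimistic multiplicative weights counts the latest loss vector twice and retracts the previous one, the multiplicative analogue of negative momentum.

```latex
% Optimistic multiplicative weights: mixed strategy x_t over
% actions a, observed loss vectors \ell_t, step size \eta:
x_{t+1}(a) \;\propto\; x_t(a)\,\exp\!\big(-\eta\,(2\,\ell_t(a) - \ell_{t-1}(a))\big)
```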

12.- In non-convex games, talk compares complexity of local min-max equilibrium computation to that of local min computation.

13.- First-order methods find approximate local min of non-convex functions efficiently. Complexity of local min-max equilibrium is unclear.
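
The single-agent benchmark here is the standard one: for an L-smooth non-convex f, plain gradient descent finds an ε-approximate stationary point, the usual surrogate for an approximate local min, in a number of gradient steps independent of the dimension.

```latex
% epsilon-stationarity, reached by gradient descent with step 1/L in
% O\big(L\,(f(x_0) - \inf f)\,/\,\epsilon^2\big) iterations:
\|\nabla f(x)\| \le \epsilon
```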

14.- Computing local min-max equilibrium in non-convex non-concave games is exponentially hard for first-order methods, even with small locality.

15.- Function value decreases along best-response paths in min-min games, providing progress. In min-max games, it can cycle.
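
A toy run of this cycling, again assuming f(x, y) = x·y, now over the box [-1, 1]^2: each best response jumps to a corner, and the players chase each other around the four corners without ever approaching the equilibrium at (0, 0).

```python
# Best-response dynamics on f(x, y) = x * y over [-1, 1]^2.
# Min player's best response to y:  x = -sign(y);
# max player's best response to x:  y = sign(x).
x, y = 1.0, 1.0
for t in range(4):
    x = -1.0 if y > 0 else 1.0     # min player best-responds
    print(f"min moves: x={x:+.0f} y={y:+.0f}  f={x * y:+.0f}")
    y = 1.0 if x > 0 else -1.0     # max player best-responds
    print(f"max moves: x={x:+.0f} y={y:+.0f}  f={x * y:+.0f}")
# f alternates between -1 and +1, so no potential-style progress
# argument applies, and the path revisits the same four corners.
```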

16.- Querying function value along best-response paths in min-max games provides no information about location of local min-max equilibrium.

17.- Variant of Sperner's lemma used to understand topological structure of local min-max equilibria in non-convex non-concave games.

18.- Directed path argument proves variant of Sperner's lemma, revealing combinatorial existence argument at its core.

19.- Local min-max equilibrium computation is computationally equivalent to finding well-colored squares in Sperner instances.
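
A toy version of the combinatorial search problem (the coloring rule below is invented for illustration; in the actual reduction the colors are derived from the game's gradient field): color the vertices of a grid with three colors and look for a unit square whose corners see all three.

```python
import itertools

N = 16  # grid resolution; vertices (i, j) with 0 <= i, j <= N

def color(i, j):
    # Hypothetical 3-coloring induced by the direction of a planar
    # vector field around the origin (a stand-in for the coloring a
    # reduction would extract from the game's gradients).
    x, y = i / N - 0.5, j / N - 0.5
    if y > abs(x):
        return 0
    return 1 if x > 0 else 2

# Brute-force search for a "well-colored" square: a unit cell whose
# four corners carry all three colors.
for i, j in itertools.product(range(N), repeat=2):
    corners = {color(i, j), color(i + 1, j), color(i, j + 1), color(i + 1, j + 1)}
    if len(corners) == 3:
        print(f"well-colored square at cell ({i}, {j})")  # found near the origin
        break
```

In this toy instance the three color regions meet near the origin, which is where the search succeeds; the talk's equivalence says that locating such a square is, in general, exactly as hard as locating a local min-max equilibrium.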

20.- Equivalence can be leveraged to derive second-order method with global convergence to local min-max equilibria.

21.- Reduction from Sperner to local min-max equilibrium computation establishes PPAD-completeness of the latter.

22.- Local min-max equilibrium, Brouwer fixed point, and Nash equilibrium in convex games are all PPAD-complete.

23.- PPAD-completeness translates into unconditional black-box intractability: first-order methods need exponentially many queries in the worst case.

24.- Multi-agent deep learning will require more domain expertise than the single-agent case, which can rely on off-the-shelf gradient descent.

25.- Talk describes intractability results for local min-max equilibria in non-convex games. Many open questions remain.

26.- Identify asymptotically convergent methods that are potentially fast in practice for instances of interest.

27.- Identify structured games that sidestep intractability and enable fast convergence to local equilibria.

28.- Two-player zero-sum RL (stochastic games) has structure that could be exploited for equilibrium computation.
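
To make the exploitable structure concrete, a minimal sketch assuming SciPy and a one-shot matrix game (a full stochastic game would wrap this in value iteration over states): the min-max value and strategy of a finite zero-sum game come from a single linear program.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Max-min strategy for the row player of payoff matrix A
    (row player maximizes). Returns (mixed strategy, game value)."""
    m, n = A.shape
    # Variables: p (row mixed strategy, length m) and v (game value).
    # Maximize v  s.t.  p^T A[:, j] >= v for every column j,
    #                   sum(p) = 1, p >= 0.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                 # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - p^T A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: value 0, uniform equilibrium strategy.
p, v = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(p, v)   # -> [0.5 0.5], 0.0
```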

29.- Study multi-agent RL targeting equilibrium concepts like correlated equilibria, coarse correlated equilibria, or no-regret learning.
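
These relaxed targets matter because simple decentralized dynamics reach them. A standard sketch (regret matching, due to Hart and Mas-Colell, used here as a representative no-regret rule, not an algorithm from the talk): when every player runs it, the empirical distribution of joint play approaches the set of coarse correlated equilibria.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rock-paper-scissors payoffs for player 0 (player 1 gets -U).
U = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)

def rm_action(regrets):
    # Regret matching: play in proportion to positive cumulative regret.
    pos = np.maximum(regrets, 0.0)
    p = pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)
    return rng.choice(3, p=p)

R0, R1 = np.zeros(3), np.zeros(3)   # cumulative regrets per player
counts = np.zeros((3, 3))           # empirical joint play

for t in range(50_000):
    a, b = rm_action(R0), rm_action(R1)
    counts[a, b] += 1
    R0 += U[:, b] - U[a, b]         # regret vs. each fixed row action
    R1 += -U[a, :] + U[a, b]        # regret vs. each fixed column action

emp = counts / counts.sum()
print(emp.sum(axis=1), emp.sum(axis=0))  # both marginals -> ~[1/3, 1/3, 1/3],
# the unique equilibrium strategies of rock-paper-scissors
```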

30.- Non-convex games can be studied from variational inequalities perspective. Gradient methods may solve certain non-monotone variational inequalities.
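
For reference, the formulation alluded to here (standard definitions, not the talk's notation): stack the players' gradient fields into an operator F; equilibria are solutions of the variational inequality below. Classical gradient methods handle monotone F, which for min-max corresponds to convex-concave f; the open question is which non-monotone instances remain solvable.

```latex
% Min-max as a variational inequality over the feasible set Z:
F(x, y) = \begin{pmatrix} \nabla_x f(x, y) \\ -\nabla_y f(x, y) \end{pmatrix},
\qquad
\text{find } z^* \in Z \ \text{such that} \ \langle F(z^*),\, z - z^* \rangle \ge 0 \ \ \forall z \in Z
```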

Knowledge Vault built by David Vivancos 2024