Knowledge Vault 2/98 - ICLR 2014-2023
Thiviyan Thanapalasingam · Emile van Krieken · Halley Young · Disha Shrivastava · Kevin Ellis · Jakub Tomczak ICLR 2023 - Workshop Neurosymbolic Generative Models (NeSy-GeMs)

Concept Graph & Resume using Claude 3 Opus | Chat GPT4 | Gemini Adv | Llama 3:

graph LR
  classDef workshop fill:#f9d4d4, font-weight:bold, font-size:14px;
  classDef neurosymbolic fill:#d4f9d4, font-weight:bold, font-size:14px;
  classDef application fill:#d4d4f9, font-weight:bold, font-size:14px;
  classDef challenge fill:#f9f9d4, font-weight:bold, font-size:14px;
  classDef future fill:#f9d4f9, font-weight:bold, font-size:14px;
  A[Workshop NeSy-GeMs ICLR 2023] --> B[NeuroSymbolic Generative Models workshop: combines approaches for reasoning, interpretability. 1]
  A --> C[Gallett: workshop program, hybrid event logistics. 2]
  A --> D[Niepert: incorporating symbolic structures, algorithms into ML. 3]
  D --> E[Domain knowledge in ML for physical systems. 4]
  D --> F[Algorithmic components in differentiable ML pipelines. 5]
  D --> G[Discovering explanatory discrete structures from data. 6]
  A --> H[Online talks: diverse neurosymbolic topics. 7]
  A --> I[In-person talks: symbolic methods, reasoning, abstract representations. 8]
  A --> J[Tarlow: transformers learning reasoning, explicit algorithms. 9]
  J --> K[Algorithmic components in deep learning for reasoning. 10]
  J --> L[Constrained language generation outperforms pure deep learning. 11]
  A --> M[Fan: cognitive science, abstraction, communication models. 12]
  M --> N[Bridging abstractions link sensory to motor for communication. 13]
  M --> O[Neuro-symbolic models for shared symbolic abstractions emergence. 14]
  M --> P[Challenges: symbol grounding, non-stationarity, inductive biases. 15]
  A --> Q[Le: neuro-symbolic as probabilistic programs for generalization. 16]
  Q --> R[Inference compilation amortizes inference in probabilistic programs. 17]
  Q --> S[Wake-sleep methods effective but face challenges. 18]
  Q --> T[DUDE: neuro-symbolic model for out-of-distribution drawing generalization. 19]
  A --> U[Van den Broeck: transformers paradox, reasoning mismatch. 20]
  U --> V[Constrained language generation improves quality, constraint satisfaction. 21]
  U --> W[Semantic loss trains networks for valid predictions. 22]
  A --> X[Panel: focus on inductive biases, modularity, reasoning emergence. 23]
  A --> Y[Applications: program synthesis, generative models, 3D scenes, tools. 24]
  Y --> Z[Language models with symbolic tools: reliability, risks, education. 25]
  A --> AA[Evolution of reasoning: coordination, constraints. AI may surpass. 26]
  A --> AB[Gaps: scaling challenges. Language models closing gap. 27]
  A --> AC[Tivadar: organizing hybrid workshop, thanking contributors. 28]
  AC --> AD[Workshop brings community together after virtual events. 29]
  AC --> AE[Tivadar thanks program committee, authors, speakers, chairs, Slides Live. 30]
  class A,B,C,AC,AD,AE workshop;
  class D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,AA neurosymbolic;
  class Y,Z application;
  class AB challenge;

Resume:

1.-The NeuroSymbolic Generative Models workshop aims to combine neural networks with symbolic approaches for more robust reasoning, interpretability, and out-of-distribution generalization.

2.- Claude Gallett opened the workshop by explaining the program schedule and logistics for the hybrid in-person and online event.

3.-Matthias Niepert presented on incorporating discrete symbolic structures and algorithms into machine learning models for better generalization and interpretability.

4.-Incorporating domain knowledge like symmetries and conservation laws into ML models for physical systems leads to better results than pure deep learning.
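
A minimal sketch of what point 4 can look like in practice, assuming PyTorch: a Hamiltonian neural network derives a physical system's dynamics from a learned energy function, so energy conservation holds by construction rather than being learned from data. The class name, layer sizes, and training setup are illustrative, not code from the talk.

import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """Learns a scalar energy H(q, p); dynamics follow Hamilton's equations."""
    def __init__(self, dim):
        super().__init__()
        self.energy = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, q, p):
        # Differentiate the learned energy to get (dq/dt, dp/dt); trajectories
        # integrated from these derivatives conserve H up to integration error.
        qp = torch.cat([q, p], dim=-1).detach().requires_grad_(True)
        H = self.energy(qp).sum()
        grad = torch.autograd.grad(H, qp, create_graph=True)[0]
        dH_dq, dH_dp = grad.chunk(2, dim=-1)
        return dH_dp, -dH_dq   # dq/dt = dH/dp, dp/dt = -dH/dq

# Usage: fit the predicted derivatives to observed ones with a plain MSE loss,
# e.g. model = HamiltonianNet(dim=1); dq_dt, dp_dt = model(q_batch, p_batch)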

5.-Integrating algorithmic components such as shortest-path solvers into end-to-end differentiable ML pipelines allows the algorithm and the network parameters to be learned jointly.
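
One well-known way to do this is the blackbox-differentiation trick (Vlastelica et al., 2020): run the discrete solver unchanged in the forward pass, and in the backward pass re-solve with perturbed costs to build a finite-difference surrogate gradient. The sketch below is an assumed PyTorch rendering with a toy two-path "solver"; it is not the speaker's implementation.

import torch

class BlackboxSolver(torch.autograd.Function):
    @staticmethod
    def forward(ctx, costs, solver, lam):
        ctx.solver, ctx.lam = solver, lam
        y = solver(costs)                       # discrete solution, e.g. 0/1 edge indicators
        ctx.save_for_backward(costs, y)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        costs, y = ctx.saved_tensors
        # Perturb the costs in the direction of the incoming gradient, re-solve,
        # and return a finite-difference surrogate gradient for the costs.
        y_perturbed = ctx.solver(costs + ctx.lam * grad_output)
        return -(y - y_perturbed) / ctx.lam, None, None

def toy_shortest_path(costs):
    # Stand-in for a real solver: pick the cheaper of two fixed candidate paths.
    paths = torch.tensor([[1., 1., 0., 0.], [0., 0., 1., 1.]])
    return paths[torch.argmin(paths @ costs)]

costs = torch.tensor([0.3, 0.4, 0.2, 0.6], requires_grad=True)
target = torch.tensor([0., 0., 1., 1.])
y = BlackboxSolver.apply(costs, toy_shortest_path, 10.0)
loss = ((y - target) ** 2).sum()
loss.backward()        # costs.grad now nudges the pipeline toward the target path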

6.-Discovering explanatory discrete structures from data, such as in gene regulatory networks and PDEs, is an exciting application of neurosymbolic AI.

7.-Online spotlight talks covered diverse topics like integrating knowledge graphs with language models, open-ended discovery of diverse programs, and object-centric scene generation.

8.-In-person spotlight talks spanned symbolic methods for graph generation, neurosymbolic deductive reasoning, editing abstract object representations, and amortized probabilistic inference.

9.-Danny Tarlow discussed the paradox of transformer language models learning to reason from data and whether reasoning can emerge or requires explicit algorithms.

10.-Integrating algorithmic components into deep learning architectures for program analysis can provide scalable models that perform multi-step reasoning.

11.-Constrained language generation using tractable probabilistic circuits to guide autoregressive language models outperforms pure deep learning approaches.
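
Point 11 leaves the mechanism implicit; the core rule is to reweight the language model's next-token distribution by the probability, under a tractable guide, that the constraint can still be satisfied: p(x_t | x_<t, c) is proportional to p_LM(x_t | x_<t) * p_guide(c | x_<=t). The toy sketch below is my illustration and replaces the probabilistic circuit with a hard 0/1 check for a simple keyword constraint; all names and numbers are made up.

import numpy as np

def constrained_step(lm_probs, prefix, vocab, keyword, max_len):
    """One decoding step enforcing 'keyword must appear within max_len tokens'."""
    scores = np.zeros_like(lm_probs)
    for i, tok in enumerate(vocab):
        future = prefix + [tok]
        satisfied = keyword in future
        room_left = max_len - len(future) > 0   # the keyword can still be emitted later
        # A real system would query a tractable probabilistic circuit here;
        # this sketch uses a 0/1 indicator of satisfiability instead.
        scores[i] = lm_probs[i] * (1.0 if (satisfied or room_left) else 0.0)
    return scores / scores.sum()                # renormalized constrained distribution

vocab = ["the", "moon", "rose", "."]
lm_probs = np.array([0.5, 0.1, 0.3, 0.1])       # toy next-token probabilities
# With only one slot left, all mass is forced onto "moon": [0., 1., 0., 0.]
print(constrained_step(lm_probs, prefix=["the"], vocab=vocab, keyword="moon", max_len=2))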

12.-Judy Fan presented cognitive science research on how people use visual and linguistic abstraction to communicate, and computational models of these abilities.

13.-Bridging abstractions link sensory processing to motor execution to enable flexible multimodal communication behaviors aligned with communicative goals.

14.-Neuro-symbolic models can potentially account for how shared symbolic abstractions emerge from neural architectures and expand over time.

15.-Key challenges include the symbol grounding problem, accounting for non-stationary sets of symbols, and identifying useful inductive biases to build into models.

16.-Tuan Anh Le framed neuro-symbolic generative models as universal probabilistic programs with neural nets and symbolic components to aid generalization from limited data.

17.-Inference compilation allows amortizing inference in universal probabilistic programs by coupling program execution with an autoregressive recognition model.
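
A hedged sketch of the training loop behind this idea, on a toy Gaussian program (names and sizes are illustrative): run the generative program forward to produce (latent, observation) pairs, and train a recognition network on those simulated traces so that, at test time, it supplies cheap proposals q(z | x) for importance sampling over the program.

import torch
import torch.nn as nn

def generative_program():
    z = torch.randn(())                  # latent: z ~ N(0, 1)
    x = z + 0.5 * torch.randn(())        # observation: x ~ N(z, 0.5^2)
    return z, x

recognition = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))  # -> (mean, log_std)
opt = torch.optim.Adam(recognition.parameters(), lr=1e-2)

for step in range(2000):
    z, x = generative_program()                        # simulate a trace from the prior
    mean, log_std = recognition(x.view(1)).unbind(-1)
    # Maximize the proposal's log-density at the true latent (minimize its negative log-likelihood).
    nll = 0.5 * ((z - mean) / log_std.exp()) ** 2 + log_std
    opt.zero_grad()
    nll.backward()
    opt.step()

# At test time, recognition(x) parameterizes a proposal q(z | x) for importance
# sampling or sequential Monte Carlo over the program's execution trace.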

18.-Wake-sleep methods are effective for learning the parameters of probabilistic programs but face challenges with high variance gradients and the tighter bounds problem.

19.-Drawing Out of Distribution (DUDE) is a neuro-symbolic model integrating execution-guided inference and a library of strokes that achieves out-of-distribution generalization on drawings.

20.-Guy Van den Broeck demonstrated a paradox in transformer language models appearing to "learn reasoning" that points to a mismatch between test accuracy and true reasoning ability.

21.-Constrained language generation using tractable probabilistic circuits to guide intractable autoregressive language models guarantees constraint satisfaction and improves quality.

22.-Semantic loss functions derived from logical constraints and computed using tractable circuits can train neural networks to make semantically valid structured predictions.
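
As a small worked example (mine, following the standard semantic-loss formulation rather than any code from the workshop), the snippet below computes the semantic loss for an "exactly one of n variables is true" constraint: minus the log of the probability mass the network's per-variable predictions place on satisfying assignments. For richer constraints this mass is what a tractable circuit computes efficiently.

import torch

def semantic_loss_exactly_one(probs):
    """probs: per-variable Bernoulli probabilities predicted by the network."""
    n = probs.shape[-1]
    mass = 0.0
    for i in range(n):
        others = torch.cat([probs[..., :i], probs[..., i + 1:]], dim=-1)
        mass = mass + probs[..., i] * torch.prod(1.0 - others, dim=-1)
    return -torch.log(mass + 1e-12)

probs = torch.tensor([0.7, 0.2, 0.1])
print(semantic_loss_exactly_one(probs))   # low loss: this prediction nearly satisfies the constraint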

23.-Panel discussion: No need for rigid definitions of "neuro-symbolic AI". Focus should be on useful inductive biases, modularity, and emergence of reasoning.

24.-Possible applications: Program understanding/synthesis, controlling generative models, 3D scene understanding, design tools, incorporating broader invariants into models seamlessly.

25.-Connecting language models to symbolic tools enables opportunities like increased reliability but also risks if connected to decision-making systems. Could enhance education.

26.-Biological evolution of reasoning likely driven by functional pressures for coordination as well as physical constraints. Modern AI may surpass human cognitive limitations.

27.-Largest gaps between neuro-symbolic proofs-of-concept and real-world applications involve scaling challenges. Tool use with language models is closing the gap.

28.-Tivadar emphasized the significant work required to organize a successful hybrid workshop and thanked everyone involved in making it happen.

29.-The workshop aimed to bring the community together for in-person interactions after two years of virtual events during the pandemic.

30.-Tivadar closed by thanking the program committee, authors, invited speakers, ICLR workshop chairs, and Slides Live for their crucial contributions to the workshop.

Knowledge Vault built by David Vivancos 2024