Knowledge Vault 2/89 - ICLR 2014-2023
Yuanqi Du · Adji Dieng · Yoon Kim · Rianne van den Berg · Yoshua Bengio ICLR 2022 - Workshop Deep Generative Models for Highly Structured Data

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4 | Gemini Advanced | Llama 3:

graph LR
    classDef deeplearning fill:#f9d4d4,font-weight:bold,font-size:14px;
    classDef equivariant fill:#d4f9d4,font-weight:bold,font-size:14px;
    classDef graphnets fill:#d4d4f9,font-weight:bold,font-size:14px;
    classDef generation fill:#f9f9d4,font-weight:bold,font-size:14px;
    classDef proteins fill:#f9d4f9,font-weight:bold,font-size:14px;
    classDef bayes fill:#d4f9f9,font-weight:bold,font-size:14px;
    classDef diffusion fill:#f9d4d4,font-weight:bold,font-size:14px;
    classDef counterfactual fill:#d4f9d4,font-weight:bold,font-size:14px;
    classDef dynamics fill:#d4d4f9,font-weight:bold,font-size:14px;
    classDef inverse fill:#f9f9d4,font-weight:bold,font-size:14px;
    classDef flows fill:#f9d4f9,font-weight:bold,font-size:14px;
    A["Workshop Deep Generative Models for Highly Structured Data, ICLR 2022"] --> B["Deep learning for molecules, PDEs using graph neural nets. 1"]
    B --> C["Equivariant graph nets combine GNNs, equivariance for 3D. 2"]
    B --> D["GNNs for PDEs: learned stencils, frequency marching. 3"]
    A --> E["Conditional generation challenge: stable, non-toxic, synthesizable molecules. 4"]
    A --> F["cryoDRGN: deep generative model for 3D protein reconstruction. 5"]
    F --> G["cryoDRGN: coordinate nets, VAE, pose inference for proteins. 6"]
    F --> H["cryoDRGN discovered structures, visualized protein dynamics. 7"]
    F --> I["Future: ab initio reconstruction, data analysis, benchmarking, sequence info. 8"]
    A --> J["DAG-GFlowNets for Bayesian structure learning. 9"]
    J --> K["DAG-GFlowNets approximate posterior of DAGs. 10"]
    J --> L["Detailed balance for GFlowNets with terminating states. 11"]
    J --> M["DAG-GFlowNets outperformed on synthetic, real data. 12"]
    A --> N["Torsional diffusion: diffusion model for molecular conformations. 13"]
    N --> O["Torsional diffusion restricts to torsions, reduces dimensionality. 14"]
    N --> P["Torsional diffusion leverages Fourier slice theorem, hypertorus functions. 15"]
    N --> Q["Torsional diffusion outperformed rule-based, ML methods. 16"]
    A --> R["MACE: model-agnostic counterfactual explanations for predictions. 17"]
    R --> S["MACE generates local chemical space, labels counterfactuals. 18"]
    R --> T["Counterfactuals give intuitive insights for drug-like molecules. 19"]
    R --> U["exmol package: easy-to-use implementation of MACE. 20"]
    A --> V["DDPMs for generating molecular conformations, trajectories. 21"]
    V --> W["DDPMs learn by diffusing to noise, learning to denoise. 22"]
    V --> X["DDPMs capture Boltzmann distribution, sample new energy regions. 23"]
    V --> Y["Path-sampling LSTMs excel at non-Markovian dynamics. 24"]
    Y --> Z["Physics constraints improve path-sampling LSTMs. 25"]
    A --> AA["DDRM: unsupervised inverse problem solver using diffusion. 26"]
    AA --> AB["DDRM operates in spectral space for general degradation. 27"]
    AA --> AC["DDRM outperforms in PSNR, perceptual quality. 28"]
    A --> AD["Semi-discrete flows via Voronoi for bounded supports. 29"]
    AD --> AE["Voronoi enables flexible partitioning for dequantization, mixtures. 30"]
    class A,B,C,D deeplearning;
    class C,D,E equivariant;
    class B,C,D graphnets;
    class E,F,G,H,I generation;
    class F,G,H,I proteins;
    class J,K,L,M bayes;
    class N,O,P,Q,V,W,X,Y,Z diffusion;
    class R,S,T,U counterfactual;
    class V,W,X,Y,Z dynamics;
    class AA,AB,AC inverse;
    class AD,AE flows;

Resume:

1.-Max Welling discussed deep learning for molecules and PDEs using graph neural networks with equivariance properties.

2.-Equivariant graph neural networks combine graph neural networks with equivariance to model 3D molecular structures.
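The combination of message passing with coordinate updates can be sketched as one E(n)-equivariant layer in the spirit of EGNN. The small `phi_*` functions below stand in for learned MLPs and are illustrative choices, not the actual architecture:

```python
import numpy as np

def egnn_layer(h, x, phi_e, phi_x, phi_h):
    """One equivariant layer. h: (n, d) invariant features, x: (n, k) coordinates."""
    n = len(x)
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = float(np.sum((x[i] - x[j]) ** 2))  # rotation-invariant distance
                m[i, j] = phi_e(h[i], h[j], d2)
    # Coordinate update: scalar-weighted sum of difference vectors,
    # which transforms equivariantly under rotations.
    x_new = x + np.array([
        sum((x[i] - x[j]) * phi_x(m[i, j]) for j in range(n) if j != i)
        for i in range(n)
    ])
    # Feature update from aggregated messages (invariant quantities only).
    h_new = np.array([phi_h(h[i], m[i].sum()) for i in range(n)])
    return h_new, x_new

# Toy stand-ins for the learned networks:
phi_e = lambda hi, hj, d2: float(hi @ hj) / (1.0 + d2)
phi_x = lambda mij: 0.1 * mij
phi_h = lambda hi, msum: hi + msum

h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.5]])

# Rotating the input coordinates rotates the output coordinates
# and leaves the features unchanged: the equivariance property.
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
h1, x1 = egnn_layer(h, x, phi_e, phi_x, phi_h)
h2, x2 = egnn_layer(h, x @ R.T, phi_e, phi_x, phi_h)
```

Because messages depend only on invariant distances while coordinate updates are built from difference vectors, the layer commutes with rotations by construction.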

3.-Graph neural networks for PDEs enable solving many types of PDEs with learned stencils and frequency marching.
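The idea of a learned stencil can be illustrated on a 1-D grid: instead of hand-deriving finite-difference weights, a network fits them from data. This toy sketch uses fixed, illustrative weights rather than learned ones:

```python
def apply_stencil(u, w):
    """Apply a 3-point stencil w to the interior points of grid values u."""
    return [w[0] * u[i - 1] + w[1] * u[i] + w[2] * u[i + 1]
            for i in range(1, len(u) - 1)]

# With the classic Laplacian weights [1, -2, 1] and u(x) = x^2 on a unit
# grid, the stencil recovers the constant second derivative (times h^2).
u = [x ** 2 for x in range(5)]          # 0, 1, 4, 9, 16
lap = apply_stencil(u, [1.0, -2.0, 1.0])
```

A learned PDE solver replaces the fixed weight vector with the output of a network conditioned on local solution values, which is what lets one model handle many PDE families.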

4.-Conditional generation remains a challenge: generated molecules must be chemically stable, non-toxic, and synthesizable.

5.-Ellen Zhong presented cryoDRGN, a deep generative model for reconstructing 3D protein structures from 2D cryo-EM images.

6.-CryoDRGN uses coordinate-based neural networks, a VAE architecture, and exact pose inference to model heterogeneous protein structures.

7.-CryoDRGN was used to discover new protein structures and visualize continuous protein dynamics from cryo-EM data.

8.-Future work includes ab initio reconstruction, exploratory data analysis, benchmarking, and incorporating protein sequence/structure information.

9.-Tristan Deleu introduced DAG-GFlowNets for Bayesian structure learning of Bayesian networks.

10.-DAG-GFlowNets approximate the posterior distribution over DAGs using generative flow networks (GFlowNets).

11.-A new detailed balance condition was introduced for GFlowNets in which every state is a valid terminating state.
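In log space the modified condition can be written as a residual that training drives to zero. The sketch below assumes the form R(s')·P_B(s|s')·P_F(s_f|s) = R(s)·P_F(s'|s)·P_F(s_f|s'), where s_f is the terminal action; the exact parameterization in the paper differs, and the numbers here are constructed purely for illustration:

```python
import math

def db_residual(log_r_s, log_r_sp, log_pf_sp_s, log_pb_s_sp,
                log_pf_stop_s, log_pf_stop_sp):
    """Squared log-ratio of the modified detailed-balance condition
    for a transition s -> s' when every state can terminate."""
    lhs = log_r_sp + log_pb_s_sp + log_pf_stop_s
    rhs = log_r_s + log_pf_sp_s + log_pf_stop_sp
    return (lhs - rhs) ** 2

# Construct a transition that satisfies the condition exactly:
log_r_s, log_pf_sp_s, log_pf_stop_sp = math.log(2.0), math.log(0.5), math.log(0.25)
rhs = log_r_s + log_pf_sp_s + log_pf_stop_sp
log_pb_s_sp, log_pf_stop_s = math.log(1.0), math.log(0.5)
log_r_sp = rhs - log_pb_s_sp - log_pf_stop_s
zero_loss = db_residual(log_r_s, log_r_sp, log_pf_sp_s, log_pb_s_sp,
                        log_pf_stop_s, log_pf_stop_sp)
```

Summed over sampled transitions, this residual plays the role of the training loss: a flow that satisfies it everywhere samples DAGs in proportion to their reward.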

12.-DAG-GFlowNets outperformed other Bayesian structure learning methods on both synthetic and real data.

13.-Bowen Jing and Gabriel Corso presented torsional diffusion, a diffusion model for molecular conformation generation.

14.-Torsional diffusion restricts diffusion to torsion angles, greatly reducing dimensionality compared to diffusing atomic coordinates.
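A conformer with m rotatable bonds lives on an m-dimensional hypertorus, so diffusion acts on m angles instead of 3n Cartesian coordinates, and noise on each circle follows a wrapped Gaussian. A minimal sketch of the forward (noising) side, with illustrative parameters:

```python
import math
import random

def perturb_torsions(torsions, sigma):
    """One forward-diffusion step: add Gaussian noise, wrap to [-pi, pi)."""
    return [((t + random.gauss(0.0, sigma) + math.pi) % (2 * math.pi)) - math.pi
            for t in torsions]

def wrapped_normal_pdf(theta, sigma, n_terms=10):
    """Wrapped Gaussian density on the circle: a Gaussian summed over
    all 2*pi shifts (truncated to 2*n_terms + 1 terms)."""
    return sum(
        math.exp(-(theta + 2 * math.pi * k) ** 2 / (2 * sigma ** 2))
        for k in range(-n_terms, n_terms + 1)
    ) / (sigma * math.sqrt(2 * math.pi))
```

The reverse model then only has to learn a score over these few angles, which is where the dimensionality reduction over coordinate-space diffusion comes from.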

15.-Torsional diffusion leverages the Fourier slice theorem and special functions on the hypertorus for equivariant generation.

16.-Torsional diffusion significantly outperformed existing rule-based and machine learning methods for conformation generation.

17.-Gimhani Eriyagama introduced MACE, a model-agnostic counterfactual explanation method for explaining predictions of arbitrary black-box models.

18.-MACE generates a local chemical space around an input molecule and labels counterfactuals using the black-box model.
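The labelling step reduces to querying the black box on neighborhood candidates and keeping those whose prediction flips. The toy numeric "molecules" and model below are illustrative; MACE itself perturbs molecular structures and scores them with the trained model:

```python
def label_counterfactuals(x, neighborhood, black_box):
    """Return neighbors whose black-box label differs from the input's."""
    base = black_box(x)
    return [n for n in neighborhood if black_box(n) != base]

# Toy black-box classifier and a "local space" around the input x = 3.0:
black_box = lambda v: v > 5.0
counterfactuals = label_counterfactuals(3.0, [2.5, 4.0, 6.0, 7.5], black_box)
```

Because only input-output queries are needed, the procedure works for any predictive model, which is what makes the method model-agnostic.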

19.-Counterfactual explanations provide intuitive, actionable insights into model predictions for drug-like molecules.

20.-The open-source exmol package provides an easy-to-use implementation of the MACE algorithm.

21.-Pratyush Tiwary presented denoising diffusion probabilistic models (DDPMs) for generating sensible molecular conformations and trajectories.

22.-DDPMs learn distributions over molecules by diffusing to noise and then learning to denoise samples.
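The forward (noising) half of this process has a closed form: q(x_t | x_0) = N(sqrt(abar_t)·x_0, (1 − abar_t)·I) under a beta schedule. A minimal sketch with a standard linear schedule (the constants are illustrative defaults, not those of any particular paper):

```python
import math
import random

def alpha_bar(t, big_t=1000, beta_min=1e-4, beta_max=0.02):
    """Cumulative signal level after t noising steps of a linear schedule."""
    prod = 1.0
    for s in range(t):
        beta = beta_min + (beta_max - beta_min) * s / (big_t - 1)
        prod *= 1.0 - beta
    return prod

def diffuse(x0, t):
    """Sample x_t ~ q(x_t | x_0) and return it with the noise that was added."""
    ab = alpha_bar(t)
    eps = random.gauss(0.0, 1.0)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps, eps
```

Training fits a network to predict `eps` from `(x_t, t)`; sampling then runs the chain in reverse, denoising step by step from pure noise.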

23.-DDPMs capture the Boltzmann distribution and generate samples from new regions of the energy landscape.

24.-For modeling non-Markovian dynamics from time series data, path-sampling LSTMs provide state-of-the-art results.

25.-Adding physics-based constraints to path-sampling LSTMs improves generation quality by reducing data noise.

26.-Bahjat Kawar presented DDRM, an unsupervised method for solving inverse problems using pre-trained diffusion models.

27.-DDRM operates in spectral space, enabling denoising and inpainting for general degradation matrices.
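The "spectral space" is the SVD basis of the degradation operator: with H = U·diag(s)·Vᵀ, the measurements decouple per singular value, and zero singular values mark missing data (e.g. masked pixels in inpainting). A minimal sketch of that change of basis; variable names are ours, not the paper's:

```python
import numpy as np

def to_spectral(H, y, tol=1e-8):
    """Map measurements y = Hx + z into the SVD basis of H.
    Returns pseudo-inverse spectral coordinates, singular values, and V^T."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)  # zero singular values = missing data
    y_bar = s_inv * (U.T @ y)                # coordinates of H^+ y in the V basis
    return y_bar, s, Vt

# Noiseless, invertible sanity check: mapping back recovers x exactly.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y_bar, s, Vt = to_spectral(H, H @ x)
```

In these coordinates each component is an independent 1-D denoising problem, which is what lets a single pre-trained diffusion model handle many degradation operators.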

28.-DDRM outperforms previous unsupervised inverse problem solvers in both PSNR and perceptual quality.

29.-Ricky T.Q. Chen introduced semi-discrete normalizing flows via differentiable Voronoi tessellation for modeling bounded supports.

30.-Voronoi tessellation enables flexible partitioning of a continuous space for dequantization and disjoint mixture modeling.
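The partition itself is just nearest-anchor assignment: each anchor owns the region of space closest to it, so a discrete symbol can be dequantized by sampling a continuous point inside its cell. A pure-Python sketch with fixed anchors (in the method the anchors are learned):

```python
def nearest_anchor(point, anchors):
    """Index of the Voronoi cell (nearest anchor) containing the point."""
    def dist2(a):
        return sum((p - q) ** 2 for p, q in zip(point, a))
    return min(range(len(anchors)), key=lambda k: dist2(anchors[k]))

# Three illustrative anchors and the cells of three query points:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
cells = [nearest_anchor(p, anchors) for p in [(1.0, 1.0), (9.0, 0.5), (2.0, 8.0)]]
```

Because each cell is a convex polytope with a bounded interior (given suitable anchors), a flow restricted to a cell naturally models a distribution with bounded support.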

Knowledge Vault built by David Vivancos 2024