Knowledge Vault 2/90 - ICLR 2014-2023
Natasha Dudek · Karianne Bergen · Stewart Jamieson · Valentin Tertius Bickel · Will Chapman · Johanna Hansen · ICLR 2022 - Workshop AI for Earth and Space Science

Concept Graph & Resume using Claude 3 Opus | Chat GPT4 | Gemini Adv | Llama 3:

```mermaid
graph LR
  classDef workshop fill:#f9d4d4, font-weight:bold, font-size:14px;
  classDef ai fill:#d4f9d4, font-weight:bold, font-size:14px;
  classDef learning fill:#d4d4f9, font-weight:bold, font-size:14px;
  classDef models fill:#f9f9d4, font-weight:bold, font-size:14px;
  classDef applications fill:#f9d4f9, font-weight:bold, font-size:14px;
  A[Workshop AI for Earth and Space Science ICLR 2022] --> B[AI for Earth and Space Science Workshop held. 1]
  A --> C[Explainable, interpretable trustworthy AI for earth sciences. 2]
  A --> D[FourCastNet: global data-driven high resolution weather model. 3]
  A --> E[Graph Gaussian processes for street-level air pollution. 4]
  A --> F[Trainable wavelet neural network for non-stationary signals. 5]
  A --> G[Invertible neural network for ocean wave equations. 6]
  A --> H[Crop yield forecasts using transferred representations. 7]
  A --> I[Bayesian neural network ensemble improved precipitation predictions. 8]
  A --> J[Interpretable LSTM predicted net ecosystem CO2 exchange. 9]
  A --> K[Onboard science capabilities for exploring distant worlds. 10]
  A --> L[Multiscale graph neural networks for incompressible fluids. 11]
  A --> M[Hybrid graph network simulator for subsurface flow. 12]
  A --> N[Swirlnet wave spectra forecast model improved. 13]
  A --> O[Invertible neural networks for earth system models. 14]
  A --> P[ACGP model combined heterogeneous output Gaussian processes. 15]
  A --> Q[Model feature vectors and Fourier Neural Operators. 16]
  A --> R[Multi-image multi-spectral super-resolution dataset and benchmarks. 17]
  A --> S[Reinforcement learning state estimator for filtering systems. 18]
  A --> T[Unsupervised downscaling of climate models. 19]
  A --> U[Sea ice concentration charting improved. 20]
  A --> V[Wildlife identification, counting, description using deep learning. 21]
  V --> W[Transfer learning, active learning, bounding boxes improved performance. 22]
  V --> X[LILA repository for conservation machine learning datasets. 23]
  V --> Y[Raccoon social learning from puzzle boxes studied. 24]
  A --> Z[Interpretability techniques evaluated on climate downscaling models. 25]
  Z --> AA[Domain knowledge, iterative refinement for explainable AI. 26]
  Z --> AB[Meta-learning, uncertainty quantification promising for interpretable AI. 27]
  Z --> AC[Interpretability for disregarding untrustworthy models, gaining insights. 28]
  Z --> AD[Visualization, discovering concepts, symbolic regression emerging directions. 29]
  A --> AE[Model interpretability importance highlighted for AI potential. 30]
  class A,B workshop;
  class C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U ai;
  class W,X learning;
  class V,Y,Z,AA,AB,AC,AD,AE applications;
```

Resume:

1.-The AI for Earth and Space Science Workshop was held, covering AI applications in atmosphere, solid earth, space, hydrosphere, and ecology.

2.-Professor Amy McGovern gave a keynote on explainable, interpretable and trustworthy AI for earth sciences.

3.-FourCastNet is a global data-driven high-resolution weather model using Adaptive Fourier Neural Operators that outperforms numerical weather prediction models.
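
As a rough illustration of the core building block behind such models, here is a minimal 2-D Fourier spectral convolution layer in PyTorch. FourCastNet itself stacks Adaptive Fourier Neural Operator blocks inside a much larger architecture; the channel counts, retained modes, and grid size below are illustrative only.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Minimal 2-D Fourier layer: FFT -> keep low modes -> learned mixing -> inverse FFT."""
    def __init__(self, channels, modes1, modes2):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (channels * channels)
        # Complex weights for the retained low-frequency modes.
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes1, modes2, dtype=torch.cfloat)
        )

    def forward(self, x):                      # x: (batch, channels, H, W)
        x_ft = torch.fft.rfft2(x)              # (batch, channels, H, W//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        m1, m2 = self.modes1, self.modes2
        # Mix channels only on the retained low-frequency modes.
        # (A full FNO also keeps the matching negative-frequency modes; omitted for brevity.)
        out_ft[:, :, :m1, :m2] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m1, :m2], self.weight
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])

# Toy usage: one "weather state" with 4 channels on a 64x128 lat-lon grid.
layer = SpectralConv2d(channels=4, modes1=12, modes2=12)
field = torch.randn(1, 4, 64, 128)
print(layer(field).shape)                      # torch.Size([1, 4, 64, 128])
```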

4.-Graph Gaussian processes were used for street-level air pollution modeling to identify communities at risk of high NO2 levels.
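
One common way to realize a Gaussian process on a street network is to build the covariance from the graph Laplacian, for example a heat-kernel (diffusion) covariance. The sketch below does GP regression this way on a toy grid graph; the graph, hyperparameters, and NO2 values are placeholders, not necessarily the authors' construction.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

# Toy street network: a small grid graph standing in for road intersections.
G = nx.grid_2d_graph(5, 5)
nodes = list(G.nodes)
L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)

# Heat-kernel (diffusion) covariance on the graph: K = sigma^2 * exp(-t * L).
t, sigma2, noise = 1.0, 1.0, 0.1
K = sigma2 * expm(-t * L)

# Pretend NO2 measurements exist at a few monitored intersections.
rng = np.random.default_rng(0)
obs_idx = rng.choice(len(nodes), size=6, replace=False)
y = rng.normal(30.0, 5.0, size=len(obs_idx))            # illustrative NO2 values

# Standard GP posterior mean at every node, conditioned on the observations.
K_oo = K[np.ix_(obs_idx, obs_idx)] + noise * np.eye(len(obs_idx))
K_ao = K[:, obs_idx]
post_mean = K_ao @ np.linalg.solve(K_oo, y)
print(post_mean.shape)                                   # (25,) street-level estimates
```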

5.-A trainable wavelet neural network was developed for non-stationary signals, with performance improved by incorporating prior knowledge of signal characteristics.
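
A simple way to make wavelet filters trainable is to parameterize each convolution kernel as a windowed oscillation with a learnable scale and centre frequency. The layer below is a minimal sketch of that idea (not necessarily the paper's parameterization), applied to a toy chirp signal.

```python
import torch
import torch.nn as nn

class TrainableWaveletLayer(nn.Module):
    """1-D convolution whose filters are Morlet-like wavelets with learnable scale/frequency."""
    def __init__(self, n_filters=8, kernel_size=65):
        super().__init__()
        self.kernel_size = kernel_size
        self.scale = nn.Parameter(torch.linspace(2.0, 20.0, n_filters))
        self.freq = nn.Parameter(torch.linspace(0.05, 0.4, n_filters))

    def forward(self, x):                        # x: (batch, 1, time)
        t = torch.arange(self.kernel_size, device=x.device) - self.kernel_size // 2
        t = t.float().unsqueeze(0)               # (1, kernel)
        envelope = torch.exp(-(t ** 2) / (2 * self.scale.unsqueeze(1) ** 2))
        carrier = torch.cos(2 * torch.pi * self.freq.unsqueeze(1) * t)
        kernels = (envelope * carrier).unsqueeze(1)          # (filters, 1, kernel)
        return nn.functional.conv1d(x, kernels, padding=self.kernel_size // 2)

# A non-stationary toy signal: a chirp whose frequency drifts over time.
time = torch.linspace(0, 1, 1024)
signal = torch.sin(2 * torch.pi * (5 + 40 * time) * time).reshape(1, 1, -1)
layer = TrainableWaveletLayer()
print(layer(signal).shape)                       # torch.Size([1, 8, 1024])
```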

6.-An invertible neural network was proposed for ocean wave equations to efficiently estimate solutions and quantify parameter uncertainties.
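
Invertible networks of this kind are usually built from coupling blocks whose forward and inverse maps are both cheap to evaluate. Below is a minimal RealNVP-style affine coupling layer in PyTorch, shown only to illustrate the mechanism; the wave-equation setup, dimensions, and hidden sizes are placeholders.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: half the variables are transformed conditioned on the other half."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                        # keep scales well behaved
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)                   # log |det Jacobian|, needed for exact likelihoods
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=1)

# Forward and inverse agree exactly, which is what makes posterior sampling cheap.
block = AffineCoupling(dim=6)
z = torch.randn(8, 6)
out, _ = block(z)
print(torch.allclose(block.inverse(out), z, atol=1e-5))   # True
```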

7.-Weakly supervised crop yield forecasts were generated at higher spatial resolution than the available label data by using transferred representations.

8.-A Bayesian neural network ensemble improved precipitation predictions by leveraging spatiotemporally varying scales of individual climate models.
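
The basic ensemble mechanics can be sketched in a few lines: train several independently initialized networks, read the prediction from the ensemble mean and the uncertainty from the ensemble spread. The toy data below stands in for climate-model features and precipitation targets and is not the paper's setup.

```python
import torch
import torch.nn as nn

def make_member():
    return nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))

# Toy regression task standing in for "climate-model features -> precipitation".
torch.manual_seed(0)
X = torch.randn(256, 5)
y = X[:, :1] * 2.0 + 0.3 * torch.randn(256, 1)

ensemble = [make_member() for _ in range(5)]
for member in ensemble:                          # each member gets its own random init and fit
    opt = torch.optim.Adam(member.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.mse_loss(member(X), y).backward()
        opt.step()

# Ensemble mean is the prediction; ensemble spread is a (crude) uncertainty estimate.
with torch.no_grad():
    preds = torch.stack([m(X[:4]) for m in ensemble])      # (5, 4, 1)
    print(preds.mean(dim=0).squeeze(), preds.std(dim=0).squeeze())
```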

9.-An interpretable LSTM network predicted net ecosystem CO2 exchange and quantified variable importance to guide terrestrial ecosystem model development.
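
A lightweight way to extract variable importance from such a model is permutation importance: shuffle one driver at a time and record how much the prediction error grows. The sketch below wires this to a small LSTM on synthetic drivers; the number of drivers, sequence length, and architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NEELSTM(nn.Module):
    """LSTM mapping a sequence of drivers (e.g. radiation, temperature) to net ecosystem exchange."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # predict NEE at the last time step

torch.manual_seed(0)
X = torch.randn(128, 30, 4)                      # 4 illustrative drivers over 30 days
y = X[:, :, 0].mean(dim=1, keepdim=True)         # toy target dominated by driver 0

model = NEELSTM(n_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    nn.functional.mse_loss(model(X), y).backward()
    opt.step()

# Permutation importance: shuffle one driver at a time and measure the error increase.
with torch.no_grad():
    base = nn.functional.mse_loss(model(X), y).item()
    for f in range(4):
        Xp = X.clone()
        Xp[:, :, f] = Xp[torch.randperm(len(X)), :, f]
        delta = nn.functional.mse_loss(model(Xp), y).item() - base
        print(f"driver {f}: importance {delta:.4f}")       # driver 0 should dominate
```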

10.-Lucas Mandrake discussed onboard science capabilities to break bandwidth barriers and earn mission scientists' trust in exploring distant worlds.

11.-Mario Lino presented multiscale graph neural networks to efficiently capture non-local dynamics in simulating incompressible fluids.
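
The core operation such simulators repeat, at several mesh resolutions, is a learned message-passing step over the mesh graph. The layer below shows one such step in plain PyTorch, with the multiscale pooling omitted and all sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One generic message-passing step on a mesh graph (nodes = fluid cells)."""
    def __init__(self, node_dim, edge_dim, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(), nn.Linear(hidden, edge_dim))
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + edge_dim, hidden), nn.ReLU(), nn.Linear(hidden, node_dim))

    def forward(self, nodes, edges, senders, receivers):
        # Messages depend on both endpoint states and the current edge feature.
        msg = self.edge_mlp(torch.cat([nodes[senders], nodes[receivers], edges], dim=-1))
        # Sum incoming messages per node, then update node states with a residual.
        agg = torch.zeros(nodes.shape[0], msg.shape[-1], device=nodes.device)
        agg.index_add_(0, receivers, msg)
        return nodes + self.node_mlp(torch.cat([nodes, agg], dim=-1)), edges + msg

# Tiny illustrative mesh: 4 cells connected in a line.
nodes = torch.randn(4, 8)                        # e.g. velocity/pressure embeddings
senders = torch.tensor([0, 1, 2])
receivers = torch.tensor([1, 2, 3])
edges = torch.randn(3, 4)                        # e.g. relative cell positions
layer = MessagePassingLayer(node_dim=8, edge_dim=4)
new_nodes, new_edges = layer(nodes, edges, senders, receivers)
print(new_nodes.shape, new_edges.shape)          # torch.Size([4, 8]) torch.Size([3, 4])
```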

12.-Tailin Wu introduced a hybrid graph network simulator for subsurface flow simulations with 2-18x speedup over classical solvers.

13.-Swirlnet, a deep learning wave spectra forecast model, was improved by transfer learning from hindcasts and evaluated on real forecasts.

14.-Invertible neural networks enabled accurate and efficient estimation of both parameter distributions and model simulations for calibrating earth system models.

15.-The ACGP model combined heterogeneous output Gaussian process regression with learned DAG structure to improve prediction and interpretability.

16.-Antonios Mamouyalakis used model feature vectors and Fourier Neural Operators to improve Stokes inversion for solar atmosphere inference.

17.-Morvan Ge created a multi-image multi-spectral super-resolution dataset and benchmarks to evaluate models on realistic storm imagery data.

18.-Saviz Mowlavi proposed a reinforcement learning state estimator using nonlinear policies and augmented MDPs for filtering high-dimensional systems.

19.-Unsupervised downscaling of climate models was performed using deep image priors for super-resolution of sea surface heights.
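
A deep image prior needs no training data: an untrained CNN is optimized so that its downsampled output matches the coarse observation, and the network architecture itself acts as the regularizer. The sketch below applies this to a random stand-in for a sea-surface-height field; resolution, architecture, and iteration count are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
lr_ssh = torch.randn(1, 1, 16, 16)               # stand-in for a coarse sea-surface-height field
scale = 4                                         # target a 64x64 grid

net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 8, 16 * scale, 16 * scale)     # fixed random input code

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    hr = net(z)                                   # candidate high-resolution field
    loss = F.mse_loss(F.avg_pool2d(hr, scale), lr_ssh)   # consistency with the coarse field
    loss.backward()
    opt.step()

print(net(z).shape)                               # torch.Size([1, 1, 64, 64])
```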

20.-Sea ice concentration charting was improved by comparing regression and classification loss formulations and applying class balancing.
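
The two formulations can be contrasted directly on the same pixels, as in the sketch below: a regression loss on the concentration fraction versus a class-weighted cross-entropy over discrete concentration classes. The 11-class scheme and the inverse-frequency weights are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(8, 11, 32, 32)               # 11 concentration classes (illustrative)
frac_pred = torch.sigmoid(torch.randn(8, 1, 32, 32))
target_class = torch.randint(0, 11, (8, 32, 32))
target_frac = target_class.float().unsqueeze(1) / 10.0

# (a) regression loss on the concentration fraction
reg_loss = nn.functional.mse_loss(frac_pred, target_frac)

# (b) weighted cross-entropy: rare classes get larger weights to counter class imbalance
counts = torch.bincount(target_class.flatten(), minlength=11).float()
weights = counts.sum() / (11 * counts.clamp(min=1))
cls_loss = nn.functional.cross_entropy(logits, target_class, weight=weights)

print(reg_loss.item(), cls_loss.item())
```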

21.-Wildlife in camera trap images was automatically identified, counted and described using deep learning to aid ecological understanding and conservation.

22.-Transfer learning, active learning, and bounding boxes improved performance on small camera trap datasets to monitor wildlife.
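
In practice this often means freezing an ImageNet-pretrained backbone and training only a small species-classification head on the few labelled camera-trap images available. The torchvision-based sketch below shows that pattern with random tensors standing in for real crops and labels.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet features (downloads weights on first use), freeze the backbone,
# and train only a new classification head for the local species of interest.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                       # keep pretrained features fixed

n_species = 12                                    # illustrative number of local species
backbone.fc = nn.Linear(backbone.fc.in_features, n_species)   # new trainable head

opt = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
images = torch.randn(4, 3, 224, 224)              # stand-in for camera-trap crops
labels = torch.randint(0, n_species, (4,))

opt.zero_grad()
loss = nn.functional.cross_entropy(backbone(images), labels)
loss.backward()
opt.step()
print(loss.item())
```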

23.-The LILA repository was created to host and distribute conservation machine learning datasets for pre-training models.

24.-Raccoon social learning from puzzle boxes is being studied, but tracking individuals in video remains very challenging.

25.-Interpretability techniques were evaluated on deep statistical climate downscaling models, finding issues not captured by traditional validation metrics.
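
The simplest of these techniques is gradient-based saliency: attribute one high-resolution output pixel back to the coarse input field and inspect where the sensitivity concentrates. The toy downscaling CNN below is only a stand-in for the models evaluated in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
downscaler = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(16, 1, 3, padding=1),
)

coarse = torch.randn(1, 1, 16, 16, requires_grad=True)
fine = downscaler(coarse)                         # (1, 1, 64, 64)

# Attribute one high-resolution output pixel back to the coarse input field.
fine[0, 0, 32, 32].backward()
saliency = coarse.grad.abs().squeeze()
print(saliency.shape, saliency.argmax())          # (16, 16) and the most influential coarse cell
```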

26.-Leilani Gilpin discussed the importance of domain knowledge and iterative refinement with experts for explainable, safety-critical AI systems.

27.-Andrew Ross suggested meta-learning and uncertainty quantification as promising areas for interpretable earth science ML beyond prediction.

28.-Antonios Mamouyalakis highlighted using interpretability to disregard untrustworthy models and gain earth system insights beyond just prediction.

29.-Visualization, discovering new concepts, and symbolic regression were discussed as exciting emerging directions in interpretable AI.

30.-The workshop highlighted the importance and future of model interpretability for realizing the potential of AI in earth and space sciences.

Knowledge Vault built by David Vivancos 2024