Concept Graph & Resume using Claude 3 Opus | Chat GPT4 | Gemini Adv | Llama 3:
Resume:
1.-The AI for Earth and Space Science Workshop was held, covering AI applications in atmosphere, solid earth, space, hydrosphere, and ecology.
2.-Professor Amy McGovern gave a keynote on explainable, interpretable and trustworthy AI for earth sciences.
3.-FourCastNet is a global data-driven high-resolution weather model using Fourier Neural Operators that outperforms numerical weather prediction models.
4.-Graph Gaussian processes were used for street-level air pollution modeling to identify communities at risk of high NO2 levels.
5.-A trainable wavelet neural network was developed for non-stationary signals with improved performance from prior knowledge of signal characteristics.
6.-An invertible neural network was proposed for ocean wave equations to efficiently estimate solutions and quantify parameter uncertainties.
7.-Weakly supervised crop yield forecasts were generated at higher resolutions than label data availability using transferred representations.
8.-A Bayesian neural network ensemble improved precipitation predictions by leveraging spatiotemporally varying scales of individual climate models.
9.-An interpretable LSTM network predicted net ecosystem CO2 exchange and quantified variable importance to guide terrestrial ecosystem model development.
10.-Lukas Mandrake discussed onboard science capabilities to break bandwidth barriers and earn mission scientists' trust in exploring distant worlds.
11.-Mario Lino presented multiscale graph neural networks to efficiently capture non-local dynamics in simulating incompressible fluids.
12.-Tailin Wu introduced a hybrid graph network simulator for subsurface flow simulations with 2-18x speedup over classical solvers.
13.-Swirlnet, a deep learning wave spectra forecast model, was improved using transfer learning from hindcasts and evaluated on real forecasts.
14.-Invertible neural networks enabled accurate and efficient estimation of both parameter distributions and model simulations for calibrating earth system models.
15.-The ACGP model combined heterogeneous output Gaussian process regression with learned DAG structure to improve prediction and interpretability.
16.-Antonios Mamalakis used model feature vectors and Fourier Neural Operators to improve Stokes inversion for solar atmosphere inference.
17.-Morvan Ge created a multi-image multi-spectral super-resolution dataset and benchmarks to evaluate models on realistic storm imagery data.
18.-Saviz Mowlavi proposed a reinforcement learning state estimator using nonlinear policies and augmented MDPs for filtering high-dimensional systems.
19.-Unsupervised downscaling of climate models was performed using deep image priors for super-resolution of sea surface heights.
20.-Sea ice concentration charting was improved by framing the loss function as regression or classification and by class balancing.
21.-Wildlife in camera trap images was automatically identified, counted and described using deep learning to aid ecological understanding and conservation.
22.-Transfer learning, active learning, and bounding boxes improved performance on small camera trap datasets to monitor wildlife.
23.-The LILA repository was created to host and distribute conservation machine learning datasets for pre-training models.
24.-Raccoon social learning from puzzle boxes is being studied but tracking individuals in video remains very challenging.
25.-Interpretability techniques were evaluated on deep statistical climate downscaling models, finding issues not captured by traditional validation metrics.
26.-Leilani Gilpin discussed the importance of domain knowledge and iterative refinement with experts for explainable, safety-critical AI systems.
27.-Andrew Ross suggested meta-learning and uncertainty quantification as promising areas for interpretable earth science ML beyond prediction.
28.-Antonios Mamalakis highlighted using interpretability to disregard untrustworthy models and gain earth system insights beyond just prediction.
29.-Visualization, discovering new concepts, and symbolic regression were discussed as exciting emerging directions in interpretable AI.
30.-The workshop highlighted the importance and future of model interpretability for realizing the potential of AI in earth and space sciences.
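The Fourier Neural Operators behind the global weather model in item 3 are built around a spectral convolution: transform the field to Fourier space, keep only the lowest-frequency modes, scale those modes by learned complex weights, and transform back. A minimal 1-D NumPy sketch of that single operation, with random weights standing in for trained ones (the function name, grid size, and mode count are illustrative, not the actual model's):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(u, weights, n_modes):
    """One FNO-style spectral layer: FFT, keep the lowest n_modes
    frequencies, multiply them by learned complex weights, inverse FFT."""
    u_hat = np.fft.rfft(u)                 # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # truncate + weight
    return np.fft.irfft(out_hat, n=u.size)  # back to physical space

# Toy periodic signal with low- and high-frequency content
n, n_modes = 64, 8
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(4 * x) + 0.1 * np.sin(20 * x)

# Random complex weights stand in for trained parameters
w = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
v = spectral_conv_1d(u, w, n_modes)
print(v.shape)
```

Because the layer truncates to the first 8 modes, the high-frequency component at mode 20 is filtered out entirely; in a full FNO this spectral layer is combined with a pointwise linear path and a nonlinearity, and stacked several times.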
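The graph Gaussian processes in item 4 replace a purely spatial kernel with one defined on the street network itself, for example a diffusion kernel built from the graph Laplacian. A small sketch under that assumption (the 5-node path graph, the NO2 values, and the kernel choice are made up for illustration, not the talk's actual model):

```python
import numpy as np

# Toy street network: 5 monitoring sites in a line (path graph)
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

# Diffusion kernel on the graph: K = expm(-beta * L)
beta = 0.5
evals, evecs = np.linalg.eigh(L)
K = evecs @ np.diag(np.exp(-beta * evals)) @ evecs.T

# GP regression: NO2 observed at the two end nodes, predict the rest
obs = np.array([0, 4])
y = np.array([40.0, 10.0])              # hypothetical NO2 levels
y0 = y.mean()                           # data mean as the GP prior mean
K_oo = K[np.ix_(obs, obs)] + 1e-4 * np.eye(2)
mean = y0 + K[:, obs] @ np.linalg.solve(K_oo, y - y0)
print(np.round(mean, 1))
```

The posterior mean interpolates smoothly along the network between the two observed sites, which is the mechanism that lets such models flag unmonitored streets likely to have high NO2.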
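Item 20 mentions class balancing for sea ice charting, where open water typically dwarfs the ice classes. A hedged sketch of that one ingredient, inverse-frequency class weights in a cross-entropy loss (the class counts and predicted probabilities are invented for illustration):

```python
import numpy as np

# Toy sea-ice pixel labels: open water (0) dominates ice classes (1, 2)
labels = np.array([0] * 90 + [1] * 8 + [2] * 2)
counts = np.bincount(labels, minlength=3)

# Inverse-frequency class weights: rare ice classes get larger weight
weights = counts.sum() / (len(counts) * counts)

# Cross-entropy of a lazy classifier that always favors open water
probs = np.tile([0.8, 0.15, 0.05], (labels.size, 1))
per_pixel = -np.log(probs[np.arange(labels.size), labels])
plain_loss = per_pixel.mean()
balanced_loss = (weights[labels] * per_pixel).mean()
print(round(plain_loss, 3), round(balanced_loss, 3))
```

The balanced loss is several times larger than the plain one here, because errors on the rare ice classes are upweighted; a model trained with it can no longer score well by predicting open water everywhere.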
Knowledge Vault built by David Vivancos 2024