Knowledge Vault 2/99 - ICLR 2014-2023
Hongyang Li · Mengye Ren · Li Chen · Chonghao Sima · Kashyap Chitta · Holger Caesar · Ping Luo · Wei Zhang · ICLR 2023 Workshop: Scene Representations for Autonomous Driving

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4 | Gemini Advanced | Llama 3:

```mermaid
graph LR
classDef perception fill:#f9d4d4, font-weight:bold, font-size:14px;
classDef prediction fill:#d4f9d4, font-weight:bold, font-size:14px;
classDef mapping fill:#d4d4f9, font-weight:bold, font-size:14px;
classDef robustness fill:#f9f9d4, font-weight:bold, font-size:14px;
classDef models fill:#f9d4f9, font-weight:bold, font-size:14px;
classDef datasets fill:#d4f9f9, font-weight:bold, font-size:14px;
classDef foundation fill:#f9d4d4, font-weight:bold, font-size:14px;
A[Workshop Scene Representations for Autonomous Driving ICLR 2023] --> B[Workshop: scene representation for autonomous driving 1]
B --> C[Waymo: V2V/V2I enhances perceptual range 2]
B --> D[Vision-centric driving: end-to-end trajectory prediction 3]
B --> E[Robust visual perception adapts to domains/weather 4]
B --> F[3D-aware models synthesize objects, humans, scenes 5]
B --> G[aiMotive dataset: robust, long-range radar 6]
B --> H[VIP3D: monocular 3D detection with multi-view attention 7]
B --> I[Active learning reduces 3D annotation costs 8]
B --> J[CO3: self-supervised 3D learning with V2I data 9]
B --> K[Robust3D-OD benchmarks 3D models' robustness 10]
B --> L[CRN: real-time camera-radar 3D detection 11]
B --> M[Depth perception in vision transformers 12]
B --> N[Neural MPC for multi-lane roundabouts 13]
B --> O[Adversarial training for robust depth estimation 14]
B --> P[MapTR: end-to-end HD map construction 15]
B --> Q[Geometric policy pre-training improves sample efficiency 16]
B --> R[Wayve: RL, world models for driving intelligence 17]
B --> S[SafeBench: safety-critical scenarios, benchmarking AD 18]
S --> T[Diffusion models generate safety-critical scenarios 19]
S --> U[GPT-3 generates driving scenario descriptions 20]
B --> V[Certifying robustness of perception mathematically tractable 21]
V --> W[Certified robustness guarantees for safety-critical systems 22]
A --> X[Foundation models: modular, interpretable, safe 23]
X --> Y[Modular pipelines allow safety constraints, certification 24]
X --> Z[Multi-paradigm learning crucial for robust AD 25]
X --> AA[Uncertainty estimation, embodied AI for robust perception 26]
X --> AB[Academia: lifelong learning, machine reasoning research 27]
X --> AC[Model compression for edge deployment 28]
X --> AD[Leverage open-source models to kickstart research 29]
X --> AE[Excitement and awareness of model limitations 30]
class B,C,E,F,H,K,L,M,O,V,W perception;
class D,N,Q,R,S prediction;
class G,J,P mapping;
class I,T,U,Y,Z,AA,AB robustness;
class AC,AD,AE models;
class X foundation;
```

Resume:

1.-The workshop focused on scene representation learning for autonomous driving, covering topics like perception, prediction, mapping, safety, and robustness.

2.-Han Qiu from Waymo discussed cooperative perception using V2V and V2I communication to enhance the perceptual range of autonomous vehicles.

3.-Hang Zhao from Tsinghua University presented research on vision-centric autonomous driving, including end-to-end trajectory prediction and neural map priors.

4.-Dengxin Dai talked about building robust visual perception models that can adapt to new domains and weather conditions.

5.-Yiyi Liao proposed using generative 3D-aware models trained on 2D images to synthesize novel objects, humans, and urban scenes.

6.-Tamás Matuszka introduced the aiMotive dataset, a multimodal dataset for robust autonomous driving with long-range radar perception.

7.-Chenfeng Xu presented a method called VIP3D that improves monocular 3D object detection by leveraging multi-view images and attention mechanisms.

8.-Zhou Xiao explored active learning approaches to reduce 3D object detection annotation costs while maintaining high model performance.
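
A minimal sketch of the uncertainty-sampling idea behind such active-learning pipelines; the scoring rule and budget here are illustrative assumptions, not the speaker's exact method:

```python
import numpy as np

def select_for_annotation(det_confidences, budget):
    """Pick the unlabeled frames whose detections the model is least sure about.

    det_confidences: list of per-frame arrays of detection confidence scores.
    budget: number of frames we can afford to send to human annotators.
    """
    # Score each frame by the mean uncertainty (1 - confidence) of its
    # detections; frames with no detections count as maximally uncertain.
    frame_scores = [
        1.0 - float(np.mean(c)) if len(c) > 0 else 1.0
        for c in det_confidences
    ]
    # Send the most uncertain frames to annotators first.
    return np.argsort(frame_scores)[::-1][:budget]

# Toy usage: four unlabeled frames, budget of two.
confs = [np.array([0.9, 0.95]), np.array([0.4, 0.5]), np.array([]), np.array([0.8])]
print(select_for_annotation(confs, budget=2))  # -> [2 1]
```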

9.-Runjian Chen proposed a self-supervised learning method called CO3 that leverages vehicle-to-infrastructure data for 3D representation learning.
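
The core of such cooperative contrastive objectives is an InfoNCE-style loss that pulls together features of the same 3D region as seen by the vehicle and the infrastructure sensor. A minimal sketch, assuming paired (N, D) region embeddings and a standard temperature; CO3's actual formulation adds more machinery:

```python
import torch
import torch.nn.functional as F

def infonce_loss(vehicle_feats, infra_feats, temperature=0.07):
    """Contrastive loss between vehicle-side and infrastructure-side features.

    Row i of each (N, D) tensor embeds the same 3D region from two viewpoints.
    """
    v = F.normalize(vehicle_feats, dim=1)
    i = F.normalize(infra_feats, dim=1)
    logits = v @ i.t() / temperature      # (N, N) cosine-similarity matrix
    targets = torch.arange(v.size(0))     # diagonal entries are the positives
    # Symmetric InfoNCE: each view must identify its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = infonce_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```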

10.-Lingdong Kong benchmarked the robustness of 3D perception models to common corruptions and sensor failures on the Robust3D-OD benchmark.

11.-Youngseok Kim presented CRN, a camera-radar fusion network that achieves LiDAR-level 3D detection performance in real time.
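
CRN itself relies on attention-based camera-radar fusion in bird's-eye view; the toy sketch below shows only the simplest version of that idea, concatenating the two BEV feature maps and mixing them with a 1x1 convolution. Channel sizes and grid resolution are assumptions:

```python
import torch
import torch.nn as nn

class CameraRadarFusion(nn.Module):
    """Toy BEV-level fusion: concatenate camera and radar features, then mix."""

    def __init__(self, cam_ch=64, radar_ch=16, out_ch=64):
        super().__init__()
        self.mix = nn.Conv2d(cam_ch + radar_ch, out_ch, kernel_size=1)

    def forward(self, cam_bev, radar_bev):
        # cam_bev: (B, cam_ch, H, W); radar_bev: (B, radar_ch, H, W)
        return self.mix(torch.cat([cam_bev, radar_bev], dim=1))

fused = CameraRadarFusion()(torch.randn(1, 64, 128, 128), torch.randn(1, 16, 128, 128))
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```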

12.-Peter Mortimer investigated how vision transformers perceive depth from a single image and created an interactive blog post.

13.-Yao Mu proposed a neural model predictive control framework for autonomous driving decision making in multi-lane roundabouts.
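
One common way to realize neural MPC is random shooting: sample candidate action sequences, roll them through a learned dynamics network, and execute the first action of the cheapest rollout. The sketch below assumes generic `dynamics` and `cost` callables and illustrative shapes; it is not the speaker's exact formulation:

```python
import torch

def neural_mpc_step(dynamics, cost, state, horizon=10, n_samples=256, act_dim=2):
    """One planning step of sampling-based MPC with a learned dynamics model."""
    # Candidate action sequences: (n_samples, horizon, act_dim).
    actions = torch.randn(n_samples, horizon, act_dim)
    states = state.expand(n_samples, -1).clone()
    total_cost = torch.zeros(n_samples)
    for t in range(horizon):
        states = dynamics(states, actions[:, t])  # roll the learned model forward
        total_cost += cost(states)
    best = total_cost.argmin()
    return actions[best, 0]                       # execute only the first action

# Toy usage: linear "dynamics" and a distance-to-origin cost.
dyn = lambda s, a: s + 0.1 * a
cost = lambda s: (s ** 2).sum(dim=1)
print(neural_mpc_step(dyn, cost, torch.tensor([[1.0, -2.0]])))
```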

14.-Zhiyuan Cheng proposed an adversarial training approach to make self-supervised monocular depth estimation models robust to physical attacks.
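
The standard recipe is to craft worst-case perturbations with projected gradient descent (PGD) inside the training loop and minimize the depth loss on those perturbed inputs. A minimal digital-domain sketch; the work targets physical-world attacks, and the budgets below are illustrative:

```python
import torch

def pgd_attack(model, images, loss_fn, eps=8 / 255, alpha=2 / 255, steps=5):
    """Craft L_inf-bounded adversarial images for adversarial training."""
    adv = images.clone()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = loss_fn(model(adv))
        grad, = torch.autograd.grad(loss, adv)
        adv = adv + alpha * grad.sign()                  # ascend the loss
        adv = images + (adv - images).clamp(-eps, eps)   # project to the eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()

# Inside training: compute the depth loss on adversarial inputs instead.
# adv_imgs = pgd_attack(depth_net, imgs, photometric_loss)
# photometric_loss(depth_net(adv_imgs)).backward()
```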

15.-Bencheng Liao presented MapTR, an end-to-end transformer architecture for online vectorized HD map construction.

16.-Penghao Wu proposed a self-supervised geometric policy pre-training method for visual autonomous driving models to improve sample efficiency.

17.-Jamie Shotton from Wayve discussed simulation, reinforcement learning, world models, and language for building scalable driving intelligence.

18.-SafeBench is a unified platform for generating safety-critical driving scenarios and benchmarking autonomous driving systems.

19.-Diffusion models can be leveraged to generate realistic and diverse safety-critical scenarios for autonomous vehicle testing.
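
As a sketch of the mechanism, standard DDPM ancestral sampling can generate agent trajectories from noise; conditioning on a "safety-critical" context (e.g., a near-collision label) would enter the noise-prediction network as an extra input. Everything below (shapes, schedule, the `eps_model` interface) is an assumption:

```python
import torch

@torch.no_grad()
def sample_trajectories(eps_model, n, horizon, T=1000):
    """Ancestral DDPM sampling of 2-D trajectories of shape (n, horizon, 2).

    eps_model(x_t, t) predicts the noise added at diffusion step t.
    """
    betas = torch.linspace(1e-4, 0.02, T)            # standard linear schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(n, horizon, 2)                   # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, torch.full((n,), t))
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x
```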

20.-Large language models like GPT-3 can be used to automatically generate natural language descriptions of complex driving scenarios.
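
In practice this amounts to serializing the structured scenario into a prompt and sending it to a completion API. The scenario keys and the `llm_complete()` call below are hypothetical placeholders for whatever schema and LLM endpoint one uses:

```python
def scenario_prompt(scenario):
    """Turn structured scenario parameters into a natural-language prompt."""
    return (
        "Describe the following driving scenario in one paragraph, "
        "highlighting why it is safety-critical:\n"
        f"- ego speed: {scenario['ego_speed_kph']} km/h\n"
        f"- actor: {scenario['actor']}\n"
        f"- event: {scenario['event']}\n"
    )

prompt = scenario_prompt({
    "ego_speed_kph": 50,
    "actor": "pedestrian emerging from between parked cars",
    "event": "late cut-in forcing emergency braking",
})
# description = llm_complete(prompt)  # hypothetical call to GPT-3 or similar
print(prompt)
```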

21.-Certifying the robustness of autonomous driving perception against semantic transformations is important and mathematically tractable for point clouds.
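
One tractable route to certification is randomized smoothing: vote over many randomized copies of the input and convert the vote margin into a provable radius (Cohen et al., 2019). The sketch below uses Gaussian point jitter as a stand-in for the semantic transformations the talk covered; `classifier.num_classes`, `sigma`, and `n` are assumptions:

```python
import torch

def smoothed_predict(classifier, points, sigma=0.1, n=100):
    """Monte-Carlo estimate of a smoothed classifier on an (N, 3) point cloud."""
    votes = torch.zeros(classifier.num_classes, dtype=torch.long)
    for _ in range(n):
        noisy = points + sigma * torch.randn_like(points)  # randomize the input
        votes[classifier(noisy).argmax()] += 1             # tally the prediction
    top = votes.argmax()
    # A real certificate converts the vote margin into a provable robustness
    # radius via the Gaussian CDF; here we just return the empirical agreement.
    return top, votes[top].item() / n
```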

22.-Bo Li emphasized the need to go beyond empirical robustness and provide certified robustness guarantees for safety-critical systems.

23.-Foundation models in autonomous driving may be modular rather than end-to-end, with interpretability and safety considerations.

24.-Modular autonomous driving pipelines allow incorporation of safety constraints, certifications and knowledge-based reasoning more easily than end-to-end models.

25.-Combining supervised learning, reinforcement learning, imitation learning and self-supervised learning is crucial for building robust autonomous driving systems.

26.-Uncertainty estimation and leveraging embodied AI priors are important for building robust perception systems that know when they don't know.
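
A standard, lightweight recipe for "knowing when you don't know" is Monte-Carlo dropout: keep dropout active at test time and read the spread of repeated predictions as epistemic uncertainty. A generic sketch, not any specific speaker's method:

```python
import torch

def mc_dropout_predict(model, x, n=20):
    """Mean prediction plus per-output uncertainty from n stochastic passes."""
    model.train()  # keeps dropout active; beware of BatchNorm layers in this mode
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty
```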

27.-Academia is focused on fundamental research questions around efficient lifelong learning from limited data and machine reasoning.

28.-Model compression techniques are crucial for deploying large foundation models on resource-constrained edge devices, especially in developing countries.
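
The simplest such technique is post-training quantization. The sketch below applies PyTorch's dynamic int8 quantization to a toy head; the architecture is a placeholder, not an actual foundation model:

```python
import torch
import torch.nn as nn

# Store Linear weights in int8, shrinking the model for edge deployment.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```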

29.-Researchers should leverage and fine-tune open-source foundation models to kickstart new research instead of training models from scratch.
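
The usual pattern: download open weights, freeze the backbone, and train only a small task head. A minimal torchvision sketch; the 10-way head stands in for whatever driving task is being studied:

```python
import torch.nn as nn
from torchvision import models

# Start from an open-source pretrained backbone instead of training from scratch.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False               # freeze the pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new head stays trainable
```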

30.-The autonomous driving research community is excited about leveraging foundation models while being aware of their current limitations.

Knowledge Vault built by David Vivancos 2024