Knowledge Vault 2/72 - ICLR 2014-2023
Yejin Choi ICLR 2021 - Invited Talk - Commonsense AI: Myth and Truth
[Resume Image]

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4 | Gemini Advanced | Llama 3:

```mermaid
graph LR
classDef AI fill:#f9d4d4, font-weight:bold, font-size:14px;
classDef reasoning fill:#d4f9d4, font-weight:bold, font-size:14px;
classDef language fill:#d4d4f9, font-weight:bold, font-size:14px;
classDef knowledge fill:#f9f9d4, font-weight:bold, font-size:14px;
classDef challenges fill:#f9d4f9, font-weight:bold, font-size:14px;
classDef future fill:#d4f9f9, font-weight:bold, font-size:14px;
A[Yejin Choi ICLR 2021] --> B[Common sense AI: moonshot, rethink assumptions. 1]
A --> C[Abductive, counterfactual reasoning vital for common sense. 2]
C --> D[Backpropagation enables abductive, counterfactual language model reasoning. 3]
A --> E[Neural logic decoding: logical constraints, outperforms larger models. 4]
A --> F[Knowledge-reasoning continuum, language embodies knowledge. 5]
F --> G[Language: neural-symbolic integration symbol, represents knowledge. 6]
G --> H[COMET, ATOMIC 2020: knowledge graphs, neural models, generalization. 7]
H --> I[Social Chemistry 101, Scruples: social, moral norms knowledge. 8]
A --> J[Neural logic decoding handles cheeseburger stabbing, shows progress. 9]
A --> K[Multimodal learning needed, language bootstraps indirect knowledge. 10]
A --> L[Meta-reasoning, self-awareness lacking in current models. 11]
A --> M[Open-ended generation more informative than leaderboards, multiple choice. 12]
A --> N[Modeling diverse, contradictory viewpoints crucial for AI. 13]
A --> O[Common sense provides transferable background knowledge, human-like AI. 14]
A --> P[Diverse benchmarks resistant to bias, gaming needed as models rapidly advance. 15]
A --> Q[Deep learning central, neural-symbolic integration with language key. Modular reasoners may help. 16]
A --> R[Open challenges: interactive, introspective learning, concept learning beyond superficial input-output mappings. 17]
class A,B,O AI;
class C,D,E,Q reasoning;
class F,G,H,I,K,N knowledge;
class J,L,M challenges;
class P,R future;
```

Resume:

1.-Common sense AI is a moonshot research goal that requires rethinking many assumptions. Past failures don't mean it's impossible.

2.-Abductive and counterfactual reasoning are important for broad common sense, in addition to deduction and induction.

3.-Backpropagation can be used as an inference-time algorithm for abductive and counterfactual reasoning with language models.
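
A minimal sketch of this gradient-based decoding idea, assuming GPT-2 via HuggingFace transformers. It illustrates the general trick rather than the exact algorithm from the talk: the unknown hypothesis between an observed past and future is held as soft, differentiable token logits, and inference-time backpropagation nudges them so the observed future becomes likely.

```python
# Hedged sketch of backpropagation as an inference-time reasoning step,
# assuming GPT-2 via HuggingFace transformers. NOT the exact algorithm
# from the talk: real systems also optimize fluency and alternate
# forward/backward passes.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
for p in model.parameters():
    p.requires_grad_(False)                        # only the hypothesis is optimized
emb = model.transformer.wte.weight                 # (vocab_size, dim) embeddings

past = tok("Ray drove his car to the beach.", return_tensors="pt").input_ids
future = tok(" He called a tow truck.", return_tensors="pt").input_ids

n_hyp = 8                                          # length of the hypothesis span
logits = torch.zeros(1, n_hyp, emb.size(0), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(50):
    soft = torch.softmax(logits, dim=-1) @ emb     # soft hypothesis embeddings
    seq = torch.cat([emb[past], soft, emb[future]], dim=1)
    out = model(inputs_embeds=seq).logits
    pred = out[:, past.size(1) + n_hyp - 1 : -1, :]   # slots predicting the future
    loss = torch.nn.functional.cross_entropy(         # make the future likely
        pred.reshape(-1, pred.size(-1)), future.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

print(tok.decode(logits.argmax(-1)[0]))            # discretized hypothesis span
```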

4.-Neural logic decoding allows incorporating logical constraints into language generation. Smaller models with neural logic outperform larger models without it.
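
The core mechanic can be illustrated with a toy constraint-aware beam search whose candidate scores mix a language-model term with progress toward required words. This is a deliberately simplified stand-in for neural logic decoding, not the published algorithm, and the dummy "LM" below is an assumption made so the example is self-contained.

```python
# Toy constraint-aware beam search: scores reward satisfying lexical
# constraints and penalize leaving them unmet.
from itertools import product

VOCAB = ["the", "man", "stabbed", "over", "a", "cheeseburger", "argument", "."]
MUST_INCLUDE = {"stabbed", "cheeseburger"}     # positive lexical constraints

def lm_logprob(seq):
    return -0.5 * len(seq)                     # dummy stand-in for a real LM score

def score(seq):
    satisfied = MUST_INCLUDE & set(seq)
    missing = MUST_INCLUDE - satisfied
    # Reward satisfied constraints; penalize those still unmet.
    return lm_logprob(seq) + 2.0 * len(satisfied) - 1.0 * len(missing)

def beam_search(beam_width=3, max_len=6):
    beams = [[]]
    for _ in range(max_len):
        candidates = [b + [w] for b, w in product(beams, VOCAB)]
        beams = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beams[0]

print(" ".join(beam_search()))   # output contains both constraint words
```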

5.-Knowledge and reasoning seem to lie on a continuum rather than being cleanly separable, and a great deal of knowledge is encoded in language.

6.-Language itself should be treated as a symbol for neural-symbolic integration, not just formal logic: open text can represent knowledge.

7.-COMET and ATOMIC 2020 are common sense knowledge graphs that combine symbolic knowledge with neural language models to enable strong generalization.
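
A hedged sketch of how a COMET-style model is used: a seq2seq language model fine-tuned on (head event, relation, tail) triples generates plausible tails for events it never saw. The checkpoint name below is a hypothetical placeholder, and the "head relation [GEN]" prompt layout is an assumed format based on the ATOMIC-style triple structure.

```python
# Sketch of querying a COMET-style model. "your-comet-checkpoint" is a
# hypothetical placeholder, not a real model name.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("your-comet-checkpoint")      # hypothetical
model = AutoModelForSeq2SeqLM.from_pretrained("your-comet-checkpoint")

def query(head_event, relation, num_beams=5):
    prompt = f"{head_event} {relation} [GEN]"   # assumed prompt format
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, num_beams=num_beams, max_new_tokens=24)
    return tok.decode(out[0], skip_special_tokens=True)

# "xIntent" is a real ATOMIC relation: the agent's likely intent.
print(query("X pays Y a compliment", "xIntent"))   # e.g. "to be nice"
```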

8.-Social Chemistry 101 and Scruples expand the scope of common sense knowledge to social and moral norms.
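
As a rough illustration of what such norm knowledge looks like, one record might pair a situation with a rule-of-thumb and judgment attributes. The field names here are paraphrased assumptions for illustration, not the datasets' exact schema.

```python
# Rough illustration of a Social-Chemistry-style record; field names are
# paraphrased assumptions, not the dataset's exact schema.
rot_record = {
    "situation": "asking my roommate to clean up after their party",
    "rule_of_thumb": "You should clean up messes you make in shared spaces.",
    "judgment": "good",             # social judgment of following the norm
    "anticipated_agreement": 0.9,   # assumed: fraction expected to agree
}
```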

9.-GPT-3 struggles with the "cheeseburger stabbing" example, but neural logic decoding handles it well, showing progress on challenging common sense reasoning.

10.-Multimodal learning is necessary for human-level AI, but language may be especially important for bootstrapping knowledge that's hard to experience directly.

11.-Meta-reasoning about what the model knows and doesn't know is an important future direction. Current models lack this self-awareness.
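
One crude proxy for this kind of self-awareness, shown purely for illustration, is the entropy of a model's next-token distribution: high entropy weakly signals uncertainty, though the talk's point is that genuine meta-reasoning requires far more than this. The prompts below are illustrative assumptions.

```python
# Crude uncertainty proxy: entropy of the next-token distribution.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def next_token_entropy(prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        probs = torch.softmax(model(ids).logits[0, -1], dim=-1)
    return -(probs * probs.log()).sum().item()

print(next_token_entropy("The capital of France is"))    # typically lower
print(next_token_entropy("The best answer in 2087 is"))  # typically higher
```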

12.-Leaderboards and multiple-choice questions are increasingly less useful for measuring AI progress. Open-ended generation is more informative.

13.-Modeling diverse, even contradictory viewpoints is important for AI to understand the complexities of human norms and beliefs.

14.-Adding common sense makes AI more human-like by providing background knowledge that can transfer across tasks. Current models lack this.

15.-Algorithmic ways of constructing more diverse benchmarks resistant to bias and gaming are an important research direction as models rapidly advance.
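
A simplified sketch in the spirit of adversarial filtering approaches (e.g., AFLite): repeatedly fit cheap classifiers on random splits and discard the examples they solve too easily, leaving a benchmark that is harder to game via surface cues. Round counts and thresholds below are arbitrary assumptions.

```python
# Simplified adversarial-filtering sketch (in the spirit of AFLite):
# drop examples that cheap classifiers predict correctly too often.
import numpy as np
from sklearn.linear_model import LogisticRegression

def adversarial_filter(X, y, rounds=5, n_splits=10, easy_threshold=0.75, seed=0):
    rng = np.random.default_rng(seed)
    keep = np.arange(len(y))                       # indices still in the benchmark
    for _ in range(rounds):
        if len(keep) < 20:                         # too few examples to refit
            break
        correct = np.zeros(len(keep))
        counts = np.zeros(len(keep))
        for _ in range(n_splits):                  # ensemble of random splits
            idx = rng.permutation(len(keep))
            tr, te = idx[: len(idx) // 2], idx[len(idx) // 2 :]
            clf = LogisticRegression(max_iter=200).fit(X[keep[tr]], y[keep[tr]])
            correct[te] += clf.predict(X[keep[te]]) == y[keep[te]]
            counts[te] += 1
        easiness = correct / np.maximum(counts, 1)
        keep = keep[easiness < easy_threshold]     # discard the easiest examples
    return keep                                    # indices of retained examples
```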

16.-The speaker believes deep learning is central to AI progress but that neural-symbolic integration, especially with language, is key. Modular specialized reasoners may be needed.

17.-Key open challenges include interactive and introspective learning, and concept learning beyond superficial input-output mappings. Fundamentally rethinking learning may be required.

Knowledge Vault built by David Vivancos 2024