Concept Graph & Summary using Claude 3 Opus | ChatGPT-4 | Gemini Advanced | Llama 3:
Summary:
1.-Common sense AI is a moonshot research goal that requires rethinking many assumptions. Past failures don't mean it's impossible.
2.-Abductive and counterfactual reasoning are important for broad common sense, in addition to deduction and induction.
3.-Backpropagation can be used as an inference-time algorithm for abductive and counterfactual reasoning with language models.
4.-NeuroLogic decoding incorporates logical constraints into language generation; smaller models with NeuroLogic decoding outperform larger models without it.
5.-There seems to be a continuum between knowledge and reasoning, and much of that knowledge is encoded in language.
6.-Language should be considered as a symbol for neural-symbolic integration, not just logic. Open text can represent knowledge.
7.-COMET and ATOMIC 2020 are common sense knowledge graphs that combine symbolic knowledge with neural language models to enable strong generalization.
8.-Social Chemistry 101 and Scruples expand the scope of common sense knowledge to social and moral norms.
9.-GPT-3 struggles with the "cheeseburger stabbing" example, but NeuroLogic decoding handles it well, showing progress on challenging common sense reasoning.
10.-Multimodal learning is necessary for human-level AI, but language may be especially important for bootstrapping knowledge that's hard to experience directly.
11.-Meta-reasoning about what the model knows and doesn't know is an important future direction. Current models lack this self-awareness.
12.-Leaderboards and multiple choice questions are increasingly less useful for measuring AI progress. Open-ended generation is more informative.
13.-Modeling diverse, even contradictory viewpoints is important for AI to understand the complexities of human norms and beliefs.
14.-Adding common sense makes AI more human-like by providing background knowledge that can transfer across tasks. Current models lack this.
15.-Algorithmic ways of constructing more diverse benchmarks resistant to bias and gaming are an important research direction as models rapidly advance.
16.-The speaker believes deep learning is central to AI progress but that neural-symbolic integration, especially with language, is key. Modular specialized reasoners may be needed.
17.-Key open challenges include interactive and introspective learning, and concept learning beyond superficial input-output mappings. Fundamentally rethinking learning may be required.
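Point 3's idea of backpropagation as an inference-time algorithm can be illustrated with a toy abduction problem: instead of updating model weights, gradient descent updates a soft distribution over candidate hypotheses until the observed outcome becomes likely. This is only a minimal sketch of the principle, not the decoding method from the talk; the `WORLD` matrix and the function names are invented for illustration.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def abduce(outcome_probs, observed, steps=200, lr=0.5):
    """Gradient-descend hypothesis logits so the observed outcome becomes likely.

    outcome_probs[i][k] = P(outcome k | hypothesis i), a toy stand-in for a model.
    Returns the index of the hypothesis that best explains `observed`.
    """
    n = len(outcome_probs)
    z = [0.0] * n                      # hypothesis logits, updated at inference time
    for _ in range(steps):
        h = softmax(z)                 # soft mixture over hypotheses
        p = sum(h[i] * outcome_probs[i][observed] for i in range(n))
        # gradient of -log p with respect to z, via the softmax Jacobian
        grad = [-(1.0 / p) * h[j] * (outcome_probs[j][observed] - p) for j in range(n)]
        z = [z[j] - lr * grad[j] for j in range(n)]
    return max(range(n), key=lambda j: z[j])

# Toy world model: hypothesis 1 makes outcome 1 most probable, so observing
# outcome 1 should lead the optimizer to abduce hypothesis 1.
WORLD = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.2, 0.2, 0.6]]
```

The key design point is that the "learning" signal flows into the latent hypothesis rather than the parameters, which is what lets a frozen model be reused for abductive and counterfactual queries.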
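The constrained decoding of point 4 can be sketched as a beam search that tracks whether each partial sequence has satisfied a lexical constraint, keeping satisfied and unsatisfied candidates in separate pools so that constraint-meeting hypotheses are not pruned early. The bigram table, vocabulary, and function names below are toy stand-ins, not the published NeuroLogic implementation.

```python
import math

MISS = math.log(1e-3)  # log-probability assigned to unseen bigrams

def constrained_beam(lm, vocab, length, required=None, beam=3):
    """Beam search that forces `required` to appear in the output (if given).

    Satisfied and unsatisfied partial sequences are ranked in separate pools,
    so a lower-scoring prefix that meets the constraint survives pruning.
    """
    beams = [((), 0.0)]
    for _ in range(length):
        cand = []
        for toks, sc in beams:
            prev = toks[-1] if toks else "<s>"
            for w in vocab:
                cand.append((toks + (w,), sc + lm.get((prev, w), MISS)))
        sat = sorted((c for c in cand if required in c[0]), key=lambda c: -c[1])[:beam]
        unsat = sorted((c for c in cand if required not in c[0]), key=lambda c: -c[1])[:beam]
        beams = sat + unsat
    finals = [c for c in beams if required is None or required in c[0]]
    return max(finals, key=lambda c: c[1])[0]

# Toy bigram log-probabilities; the most fluent sequence omits "knife",
# so only the constraint forces it into the output.
LM = {("<s>", "he"): -0.1, ("he", "ate"): -0.1, ("ate", "a"): -0.1,
      ("a", "burger"): -0.1, ("a", "knife"): -1.0,
      ("he", "used"): -1.5, ("used", "a"): -0.1}
VOCAB = ["he", "ate", "used", "a", "burger", "knife"]
```

For example, `constrained_beam(LM, VOCAB, 4, required="knife")` yields a sequence containing "knife", while the unconstrained call prefers the higher-probability "burger" continuation — the same effect the talk describes when a constraint steers generation that the raw model would never produce.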
Knowledge Vault built by David Vivancos 2024