Knowledge Vault 2/48 - ICLR 2014-2023
Christopher Manning ICLR 2018 - Invited Talk - A Neural Network Model That Can Reason
<Resume Image>

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4 | Gemini Advanced | Llama 3:

```mermaid
graph LR
  classDef main fill:#f9d4d4, font-weight:bold, font-size:14px;
  classDef clevr fill:#d4f9d4, font-weight:bold, font-size:14px;
  classDef reasoning fill:#d4d4f9, font-weight:bold, font-size:14px;
  classDef mac fill:#f9f9d4, font-weight:bold, font-size:14px;
  classDef future fill:#f9d4f9, font-weight:bold, font-size:14px;

  A[Christopher Manning ICLR 2018] --> B[Design neural nets for reasoning 1]
  A --> C[Reasoning needs careful deliberate thinking 2]
  C --> D[Reasoning manipulates knowledge, relies on composition 3]
  A --> E[Flexible priors enable effective learning 4]
  E --> F[Trees good bias, attention alternative 5]
  A --> G[Compose reasoning, differentiable, scalable 6]
  A --> H[CLEVR tests reasoning about objects 7]
  H --> I[CLEVR has programs specifying steps 8]
  H --> J[Past used strong supervision, layers 9]
  A --> K[MAC nets introduced for reasoning 10]
  K --> L[MAC cell adapts for operations 11]
  K --> M[Control, memory states, attention key 12]
  M --> N[Attention simulates reasoning, maintains differentiability 13]
  K --> O[Question encoded LSTM, image ResNet 14]
  K --> P[Control attends question for query 15]
  K --> Q[Read relates knowledge to memory 16]
  K --> R[Write updates memory varied ways 17]
  K --> S[End-to-end differentiable reasoning models 18]
  S --> T[MAC excels on CLEVR accuracy 19]
  S --> U[MAC learns faster, less data 20]
  S --> V[MAC best on human questions 21]
  S --> W[Attention interprets reasoning steps 22]
  K --> X[MAC differs from module nets 23]
  K --> Y[Benefits from control, memory, attention 24]
  A --> Z[Design for reasoning key challenge 25]
  Z --> AA["Limitation: steps ignore retrieved info 26"]
  AA --> AB[Memory influencing control helps reasoning 27]
  Z --> AC[Attention aids some interpretability 28]
  AC --> AD[Textual explanations can improve it 29]
  A --> AE[Aims spur reasoning research 30]

  class A,B main;
  class C,D,E,F,Z,AA,AB,AC,AD reasoning;
  class G,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y mac;
  class H,I,J clevr;
  class AE future;
```

Resume:

1.-Chris Manning discusses designing neural networks for higher-level cognition and reasoning tasks beyond just intuitive stimulus-response.

2.-Current deep learning successes are on tasks that are instinctive for humans; reasoning instead requires careful, deliberate thinking.

3.-Reasoning is defined as algebraically manipulating previously acquired knowledge to answer new questions; composition rules are central to this.

4.-Manning argues for using appropriate but flexible structural priors as inductive biases to enable effective learning.

5.-Tree-structured models provide good inductive bias but are hard to optimize. Attention offers an alternative.

6.-The goal is encouraging compositional multi-step reasoning in neural networks while maintaining differentiability and scalability.

7.-The CLEVR dataset tests visual question answering that requires reasoning about object attributes, spatial relations, and counting.

8.-CLEVR examples include functional programs specifying reasoning steps to answer the question.
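
For illustration, a CLEVR question and the kind of functional program that accompanies it might look roughly as follows; this is a hand-written Python rendering with simplified function names, not the dataset's exact annotation schema:

```python
# Illustrative only: simplified rendering of a CLEVR question plus the
# functional program that specifies its reasoning steps.
example = {
    "question": "What color is the cube to the right of the small sphere?",
    "program": [
        {"function": "filter_size",  "inputs": ["small"]},
        {"function": "filter_shape", "inputs": ["sphere"]},
        {"function": "relate",       "inputs": ["right"]},
        {"function": "filter_shape", "inputs": ["cube"]},
        {"function": "query_color",  "inputs": []},
    ],
}

for step, op in enumerate(example["program"], start=1):
    print(f"step {step}: {op['function']}({', '.join(op['inputs'])})")
```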

9.-Previous CLEVR approaches used strong supervision of functional programs or specialized layers in ConvNets.

10.-Memory, Attention and Composition (MAC) networks are introduced for multi-step reasoning.

11.-MAC networks use a versatile recurrent MAC cell to adapt behavior for different reasoning operations.

12.-MAC cells have separate control and memory states. Control extracts instructions, read retrieves information, write updates memory.

13.-Attention is used extensively in MAC networks to maintain differentiability while simulating complex reasoning.

14.-The question is encoded with an LSTM; the image is encoded with ResNet features that serve as the knowledge base.
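
A minimal sketch of such an input unit in PyTorch; module names and dimensions are my own simplification, and the small convolutional projection below merely stands in for the pretrained ResNet feature extractor used in the talk:

```python
import torch
import torch.nn as nn

class InputUnit(nn.Module):
    """Encode the question with a biLSTM and the image into a spatial knowledge base."""
    def __init__(self, vocab_size, d=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        # Per-word outputs become the contextual words; the concatenated final
        # states of the two directions become the question vector.
        self.lstm = nn.LSTM(d, d // 2, bidirectional=True, batch_first=True)
        # Stand-in projection: in the talk the image is first passed through a
        # pretrained ResNet, and its feature map is projected to dimension d.
        self.img_proj = nn.Sequential(
            nn.Conv2d(1024, d, kernel_size=3, padding=1), nn.ELU(),
            nn.Conv2d(d, d, kernel_size=3, padding=1), nn.ELU(),
        )

    def forward(self, question_tokens, resnet_features):
        # question_tokens: (B, S) int64; resnet_features: (B, 1024, H, W)
        words = self.embed(question_tokens)
        context_words, (h, _) = self.lstm(words)        # (B, S, d)
        question = torch.cat([h[0], h[1]], dim=-1)      # (B, d)
        kb = self.img_proj(resnet_features)             # (B, d, H, W)
        kb = kb.flatten(2).transpose(1, 2)              # (B, H*W, d) knowledge base
        return question, context_words, kb
```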

15.-The control unit attends to question words to compute a time-specific query representation.
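
A sketch of a control unit along these lines; the layer names are mine, and for brevity the question projection is shared across steps rather than step-specific as described in the talk:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ControlUnit(nn.Module):
    """Attend over question words to produce this step's control state."""
    def __init__(self, d):
        super().__init__()
        self.step_proj = nn.Linear(d, d)   # transform of the question vector
        self.merge = nn.Linear(2 * d, d)   # combine previous control with it
        self.attn = nn.Linear(d, 1)        # score each contextual word

    def forward(self, prev_control, question, context_words):
        # prev_control: (B, d); question: (B, d); context_words: (B, S, d)
        q_i = self.step_proj(question)
        cq = self.merge(torch.cat([prev_control, q_i], dim=-1))           # (B, d)
        scores = self.attn(cq.unsqueeze(1) * context_words).squeeze(-1)   # (B, S)
        attn = F.softmax(scores, dim=-1)                                  # which words matter now
        control = (attn.unsqueeze(-1) * context_words).sum(dim=1)         # (B, d)
        return control, attn
```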

16.-The read unit relates items from the knowledge base to the previous memory and control to retrieve information.
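
A comparable sketch of the read unit; the exact interaction terms in the talk differ slightly, so treat this only as the general shape of the computation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReadUnit(nn.Module):
    """Retrieve information from the knowledge base, guided by memory and control."""
    def __init__(self, d):
        super().__init__()
        self.mem_proj = nn.Linear(d, d)
        self.kb_proj = nn.Linear(d, d)
        self.combine = nn.Linear(2 * d, d)
        self.attn = nn.Linear(d, 1)

    def forward(self, prev_memory, control, kb):
        # prev_memory, control: (B, d); kb: (B, N, d) with N image regions
        # 1) relate every knowledge-base item to the previous memory
        inter = self.mem_proj(prev_memory).unsqueeze(1) * self.kb_proj(kb)   # (B, N, d)
        # 2) keep the raw item too, so genuinely new information can surface
        inter = self.combine(torch.cat([inter, kb], dim=-1))                 # (B, N, d)
        # 3) let the control state decide which interactions matter now
        scores = self.attn(control.unsqueeze(1) * inter).squeeze(-1)         # (B, N)
        attn = F.softmax(scores, dim=-1)
        retrieved = (attn.unsqueeze(-1) * kb).sum(dim=1)                     # (B, d)
        return retrieved, attn
```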

17.-The write unit can use a simple linear layer or more complex highway or self-attention mechanisms.
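
The simplest write-unit variant is a single linear layer over the retrieved information and the previous memory; a sketch of that variant (the self-attention and highway/gating variants mentioned in the talk are only noted in the comment):

```python
import torch
import torch.nn as nn

class WriteUnit(nn.Module):
    """Fold the newly retrieved information into the memory state."""
    def __init__(self, d):
        super().__init__()
        self.merge = nn.Linear(2 * d, d)
        # Richer variants add self-attention over earlier memories and/or a
        # gate that can keep the previous memory unchanged; omitted here.

    def forward(self, prev_memory, retrieved):
        # prev_memory, retrieved: (B, d)
        return self.merge(torch.cat([retrieved, prev_memory], dim=-1))
```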

18.-MAC networks are fully differentiable end-to-end models for multi-step reasoning.
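
Putting the pieces together, a network of this kind chains a fixed number of identical MAC steps and classifies from the final memory and the question. The sketch below assumes the hypothetical InputUnit, ControlUnit, ReadUnit and WriteUnit classes from the previous sketches and is not the reference implementation:

```python
import torch
import torch.nn as nn

class MACNetwork(nn.Module):
    """Recurrently apply one MAC cell for p steps; every operation is differentiable."""
    def __init__(self, vocab_size, num_answers, d=512, p=12):
        super().__init__()
        self.input = InputUnit(vocab_size, d)
        self.control = ControlUnit(d)
        self.read = ReadUnit(d)
        self.write = WriteUnit(d)
        self.c0 = nn.Parameter(torch.zeros(1, d))   # learned initial control
        self.m0 = nn.Parameter(torch.zeros(1, d))   # learned initial memory
        self.p = p                                  # number of reasoning steps
        self.classifier = nn.Sequential(
            nn.Linear(2 * d, d), nn.ELU(), nn.Linear(d, num_answers))

    def forward(self, question_tokens, resnet_features):
        question, context_words, kb = self.input(question_tokens, resnet_features)
        batch = question.size(0)
        c, m = self.c0.expand(batch, -1), self.m0.expand(batch, -1)
        for _ in range(self.p):                     # the same cell, reused p times
            c, _ = self.control(c, question, context_words)
            r, _ = self.read(m, c, kb)
            m = self.write(m, r)
        return self.classifier(torch.cat([m, question], dim=-1))
```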

19.-On CLEVR, MAC networks achieve 98.9% accuracy, more than halving previous state-of-the-art error rates.

20.-MAC networks learn much faster than alternatives, performing well even with 1/10 of the training data.

21.-On the CLEVR-Humans dataset testing transfer to human-authored questions, MAC networks outperform other approaches.

22.-Attention distributions in MAC networks help interpret the reasoning steps being performed on the question and image.
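
As a toy illustration of this kind of inspection, the per-step attention maps returned by a control unit like the one sketched above could be logged and read off as follows (random placeholders stand in for real attention maps):

```python
import torch

question_words = ["what", "color", "is", "the", "cube", "right", "of", "the", "sphere"]
# In practice these would be the (B, S) attention maps collected from each
# reasoning step; random softmax vectors are used here only to run the loop.
step_attentions = [torch.softmax(torch.randn(1, len(question_words)), dim=-1)
                   for _ in range(3)]

for step, attn in enumerate(step_attentions, start=1):
    top = attn[0].argmax().item()
    print(f"step {step}: focuses on '{question_words[top]}' "
          f"(weight {attn[0, top].item():.2f})")
```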

23.-MAC networks differ from neural module networks by using one universal cell rather than specialized modules.

24.-MAC networks benefit from separating control and memory and using attention rather than conditional normalization as in FiLM.
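
For contrast, FiLM-style conditional normalization lets the question predict a per-channel scale and shift for the image features instead of attending over them; a minimal sketch of that idea (simplified, not the FiLM reference code):

```python
import torch
import torch.nn as nn

class FiLMModulation(nn.Module):
    """Question-conditioned scale-and-shift of image features (FiLM-style)."""
    def __init__(self, d_question, channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(d_question, 2 * channels)

    def forward(self, image_feats, question):
        # image_feats: (B, C, H, W); question: (B, d_question)
        gamma, beta = self.to_gamma_beta(question).chunk(2, dim=-1)
        return gamma[..., None, None] * image_feats + beta[..., None, None]
```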

25.-The talk argues for designing neural architectures with inductive biases for reasoning as a challenge for the community.

26.-A potential limitation is that MAC networks decide their reasoning steps regardless of the information retrieved so far.

27.-Allowing memory to influence future control could help for knowledge-driven reasoning.

28.-The attention distributions aid interpretability of the reasoning steps but still have limitations.

29.-Generating textual explanations alongside the reasoning is a possible direction for improving explainability.

30.-The talk aims to spur more research into neural building blocks for higher-level inference and reasoning.

Knowledge Vault built by David Vivancos 2024