Concept Graph & Resume using Claude 3.5 Sonnet | ChatGPT-4o | Llama 3:
Resume:
1.- AI programming: Early AI focused on writing programs for specific tasks, but these programs lacked generality and robustness outside the scenarios their authors anticipated.
2.- Methodological shift: AI research moved from writing programs for ill-defined problems to designing algorithms for well-defined mathematical tasks.
3.- Learners vs. Solvers: Two main approaches in AI - learners (e.g. deep learning) and solvers (e.g. classical planners).
4.- Deep learners: Neural networks with adjustable parameters, trained to minimize error functions on tasks like image recognition (a minimal training sketch follows the list).
5.- Deep reinforcement learning: Neural networks trained from rewards to make decisions in dynamic environments, achieving superhuman performance in games (simplified sketch below).
6.- Solvers: Algorithms that map problem instances to solutions based on explicit models, like classical planners or SAT solvers (search sketch below).
7.- Generality of solvers: Solvers work across various domains without training, but may require significant computation time for each input.
8.- Problem relaxation: Simplifying complex problems to make them tractable, then using solutions to guide solving the original problem.
9.- Monotonic relaxation: A planning technique that makes action effects monotonic (atoms once made true stay true), enabling efficient solution of the simplified problem (sketched below).
10.- Goal recognition: Using planners to infer an agent's goal from observed behavior, applying Bayes' rule and plan-cost comparisons (Bayes-rule sketch below).
11.- Generalized planning: Creating strategies that work across multiple problem instances, not just solving individual cases.
12.- IW1 algorithm: A breadth-first search variant that prunes states that do not make any new feature true, enabling efficient exploration (sketched below).
13.- Online planning for Atari: Using planning algorithms to play Atari games directly from screen pixels, competing with deep learning approaches.
14.- System 1 and System 2: Dual-process theory of cognition, with fast, intuitive System 1 and slow, deliberative System 2.
15.- Parallels to AI: Learners resemble System 1 (fast, intuitive), while solvers resemble System 2 (slow, deliberative).
16.- Integration challenge: Combining learners and solvers to tackle more complex problems, similar to human cognition.
17.- AlphaZero: An example of integrating learning and planning, using Monte Carlo tree search to guide reinforcement learning (selection rule sketched below).
18.- Representation bottlenecks: Challenges in representing and solving seemingly simple problems like Blocks World for arbitrary instances.
19.- State variable learning: The need to infer problem state variables from streams of actions and observations.
20.- Feature learning: Developing methods to learn useful general features for planning and model learning.
21.- Abstract representations: Learning finite abstractions of problems to enable general planning for arbitrary-sized inputs.
22.- AI impact: Despite not achieving human-level intelligence, AI can have significant positive or negative societal effects.
23.- Asilomar AI Principles: Guidelines for beneficial AI development, though challenging to enforce in practice.
24.- Societal alignment: The need to align not just AI, but also technology, politics, and economics with human values.
25.- System 1 targeting: Modern society often targets intuitive thinking (System 1) rather than reasoned analysis (System 2).
26.- Compute power impact: Increasing computational resources alone may not solve all AI challenges, especially for novel problems.
27.- Trust in AI: Difficulty in trusting black-box AI systems for critical applications like self-driving cars.
28.- Model learning challenges: The need for better techniques to learn accurate models and plan with imperfect models.
29.- Goal specification: The challenge of expressing complex goals, especially for problems typically solved by System 1 thinking.
30.- Ethical considerations: The importance of addressing ethical dilemmas in AI, including scenarios we lack data for or wish to avoid.
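Code sketches:
The following sketch illustrates point 4: a tiny two-layer network whose adjustable parameters are tuned by gradient descent to minimize a squared-error function on a toy regression task. The network size, data, and learning rate are illustrative assumptions, not details from the talk.
```python
# Point 4 sketch: a "deep learner" as a small two-layer network trained by
# gradient descent to minimize squared error on a toy task (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                # toy inputs
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)       # toy target function

W1 = rng.normal(scale=0.5, size=(2, 16))     # adjustable parameters
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    pred = h @ W2 + b2                       # network output
    err = pred - y
    loss = (err ** 2).mean()                 # error function to minimize
    # Backpropagation: gradients of the loss w.r.t. each parameter.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ g_h
    gb1 = g_h.sum(0)
    # Gradient-descent parameter update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final loss:", loss)
```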
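A radically simplified sketch of point 5: an agent learns to act in a small corridor environment purely from rewards, using Q-learning. A tabular value function stands in here for the deep network, and the environment, learning rate, and exploration rate are assumptions for illustration.
```python
# Point 5 sketch: reinforcement learning from rewards with a tabular value
# function (a stand-in for a deep network) on a toy corridor environment.
import numpy as np

n_states, n_actions = 5, 2                  # corridor cells; actions 0=left, 1=right
Q = np.zeros((n_states, n_actions))         # tabular stand-in for a value network
alpha, gamma, eps = 0.1, 0.95, 0.3          # learning rate, discount, exploration
rng = np.random.default_rng(1)

for episode in range(300):
    s = 0
    while s != n_states - 1:                # the rightmost cell is the goal
        # Epsilon-greedy action selection.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Temporal-difference update toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))                     # policy for non-terminal cells: move right
```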
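A minimal sketch of point 6: a generic solver maps an explicit model (initial state, goal test, successor function) to a solution by blind breadth-first search, with no training involved. The toy domain at the bottom is an assumption for illustration.
```python
# Point 6 sketch: a solver as search over an explicit model.
from collections import deque

def plan(initial, goal_test, successors):
    """Return a list of actions from initial to a goal state, or None."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# Toy model: move a token from cell 0 to cell 3 on a line of 4 cells.
succ = lambda s: [(a, s + d) for a, d in (("right", 1), ("left", -1)) if 0 <= s + d <= 3]
print(plan(0, lambda s: s == 3, succ))   # ['right', 'right', 'right']
```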
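A sketch of points 8 and 9: under the monotonic (delete-free) relaxation, atoms that become true stay true, so reachability can be computed in polynomial time, and the number of layers needed to reach the goal can guide search on the original problem. The action encoding and the toy key-and-door domain are assumptions for illustration.
```python
# Points 8-9 sketch: reachability under the monotonic (delete) relaxation.
def relaxed_layers(init, goal, actions):
    """Number of layers until all goal atoms are (relaxed) reachable, or None."""
    true_atoms = set(init)
    layers = 0
    while not goal <= true_atoms:
        new = set()
        for pre, add in actions:
            if pre <= true_atoms:          # applicable under the relaxation
                new |= add - true_atoms    # delete effects are ignored
        if not new:
            return None                    # goal unreachable even when relaxed
        true_atoms |= new
        layers += 1
    return layers

# Toy domain: pick up a key, then open a door.
actions = [({"at_key"}, {"have_key"}),
           ({"have_key", "at_door"}, {"door_open"})]
print(relaxed_layers({"at_key", "at_door"}, {"door_open"}, actions))  # 2
```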
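A sketch of point 10: goal recognition by Bayes' rule, where the likelihood of the observed behavior under each candidate goal is tied to plan costs, so goals whose cheapest plans already pass through the observations get higher posterior probability. The cost numbers, priors, and the beta parameter are illustrative assumptions.
```python
# Point 10 sketch: Bayes' rule over candidate goals with a cost-based likelihood.
import math

def goal_posterior(costs, priors, beta=1.0):
    """costs[g] = (plan cost complying with observations, plan cost ignoring them)."""
    scores = {}
    for g, (c_with_obs, c_without_obs) in costs.items():
        delta = c_with_obs - c_without_obs          # extra cost of matching the observations
        scores[g] = priors[g] * math.exp(-beta * delta)
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

costs = {"goal_A": (4, 4),     # observations lie on an optimal plan to A
         "goal_B": (7, 3)}     # matching the observations is a big detour for B
priors = {"goal_A": 0.5, "goal_B": 0.5}
print(goal_posterior(costs, priors))   # goal_A gets most of the probability
```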
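A sketch of point 12: IW1 is breadth-first search that keeps a generated state only if it makes at least one feature true for the first time in the search; everything else is pruned. The feature encoding and the toy walk are assumptions for illustration.
```python
# Point 12 sketch: IW1 as breadth-first search with novelty-1 pruning.
from collections import deque

def iw1(initial, goal_test, successors, features):
    seen_features = set(features(initial))
    frontier = deque([(initial, [])])
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            novel = set(features(nxt)) - seen_features
            if novel:                       # keep only states that make a new feature true
                seen_features |= novel
                frontier.append((nxt, path + [action]))
    return None                             # everything was pruned before reaching a goal

# Example: a 1-D walk where the features are the positions reached.
succ = lambda s: [(d, s + d) for d in (+1, -1) if 0 <= s + d <= 5]
print(iw1(0, lambda s: s == 5, succ, lambda s: [("pos", s)]))   # [1, 1, 1, 1, 1]
```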
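A sketch of the selection rule behind point 17: in AlphaZero-style search, each simulation descends the tree by picking the child that maximizes a score combining the backed-up value Q, the policy network's prior P, and the visit counts. The constant c_puct and the numbers below are illustrative assumptions.
```python
# Point 17 sketch: AlphaZero-style child selection during tree search.
import math

def puct_choice(q, prior, visits, c_puct=1.5):
    """Pick the child maximizing Q(s,a) + c * P(a) * sqrt(N) / (1 + N(a))."""
    total_visits = sum(visits.values())
    def score(a):
        return q[a] + c_puct * prior[a] * math.sqrt(total_visits) / (1 + visits[a])
    return max(q, key=score)

q      = {"a": 0.1, "b": 0.3}      # mean backed-up values per action
prior  = {"a": 0.7, "b": 0.3}      # policy network probabilities
visits = {"a": 10,  "b": 2}        # visit counts so far
print(puct_choice(q, prior, visits))   # balances exploration and exploitation
```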
Knowledge Vault built by David Vivancos 2024