Concept Graph & Summary using Claude 3 Opus | ChatGPT-4o | Llama 3:
Summary:
1.- We need to shift from thinking of AI as optimizing fixed objectives to AI that is uncertain about human preferences and assists in satisfying them.
2.- Collaborative AI efforts like the Netflix Prize show potential, but we should aim higher than movie recommendations to benefit humanity.
3.- There has been progress in self-driving cars, game-playing AI, and computer-vision classification, but current approaches show concerning limitations and fragility.
4.- Probabilistic programming combines probability theory with logic and programming, enabling universal modeling and reasoning over complex real-world models.
5.- In the coming decade, AI will likely swing back to knowledge and reasoning, including mastering natural language understanding.
6.- AI has concerning misuses - facial recognition, fake content generation, surveillance, autonomous weapons. We need to address these.
7.- With advanced AI, many currently expensive/difficult tasks like construction, education, healthcare could become dramatically cheaper and more accessible globally.
8.- The economic value of human-level AI is astronomical - potentially quadrillions of dollars in net present value.
9.- The standard model of AI - optimizing a fixed objective - is fundamentally flawed because we can't specify the right objective.
10.- The better the AI, the worse the outcome if pursuing the wrong objective. History shows the danger of misspecified wishes/goals.
11.- We need a new model - beneficial AI where the AI is uncertain about human preferences and aims to satisfy them.
12.- For AI serving many humans, ideas from social choice theory, mechanism design, public policy can help address conflicting preferences.
13.- We have to understand the connection between human behavior and underlying preferences through cognitive science, psychology, neuroscience.
14.- AI can enhance human autonomy by freeing people from constraints and enabling them to learn and make better life choices.
15.- At a collective level, AI could help solve coordination problems and allow humanity to proactively shape the future rather than just react.
16.- But we don't want so much AI that humans become demotivated and fail to learn how to run civilization themselves.
17.- AI is already helping with global problems and will accelerate benefits, but is also accelerating risks like autonomous weapons, manipulation, surveillance.
18.- When we say "AI for good" we don't actually know how to define an objective for "good" that an AI can optimize.
19.- If we give AIs the wrong objective, even with good intent, they will cause harm in single-mindedly pursuing it. This requires new AI foundations.
20.- AI isn't a panacea - many barriers are human coordination and motivation. Technology alone can't solve all problems.
21.- We likely have time before AGI risk, but given the stakes and rate of progress, we cannot be complacent.
22.- Problems of algorithmic bias, autonomous weapons, fake content, surveillance must be addressed now as they'll get worse quickly with AI progress.
23.- Media discussions of evil robot overlords are a distraction - the real concern is AIs that are good at the wrong objective.
24.- As an individual, educating oneself then participating via writing/advocacy is how to engage with shaping the future of AI.
25.- Satellite imagery analysis has huge potential for environmental monitoring, urban planning, migration, shipping, fishing etc. if costs are pooled.
26.- With language and reasoning, AI tutors could dramatically accelerate education, making it far cheaper and more accessible to spread skills and knowledge.
27.- Once infrastructure is in place, adding AI for new global-scale services could cost just hundreds of thousands to a few million dollars.
28.- The ratio of infrastructure cost to AI deployment cost can be 1000:1 based on the global seismic monitoring example - enabling huge benefits.
29.- AI will enhance human autonomy and help solve coordination problems, but we must proactively define the future we want rather than react.
30.- We are still far from AGI that poses existential risk, but we don't know how long it will take to solve AI control and safety - we must begin now.
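The probabilistic programming idea in point 4 can be sketched in a few lines: a probabilistic "program" is ordinary code that makes random choices, and inference asks what those choices must have been, given observed data. This is a toy illustration of mine (not from the talk), inferring a coin's bias from observed flips by likelihood weighting:

```python
import random

def infer_bias(flips, num_samples=100_000, seed=0):
    """Posterior mean of a coin's heads-probability given observed flips."""
    rng = random.Random(seed)
    total_weight = 0.0
    weighted_bias = 0.0
    for _ in range(num_samples):
        bias = rng.random()  # sample the bias from a uniform prior
        # Weight the sample by the likelihood of the observed flips.
        weight = 1.0
        for flip in flips:
            weight *= bias if flip == "H" else (1.0 - bias)
        total_weight += weight
        weighted_bias += weight * bias
    return weighted_bias / total_weight

# Eight heads, two tails: posterior mean should be near (8+1)/(10+2) = 0.75.
estimate = infer_bias(["H"] * 8 + ["T"] * 2)
print(round(estimate, 2))
```

The same sampling-plus-conditioning pattern scales, in real probabilistic programming languages, to arbitrarily structured models - which is what "universal modeling capability" refers to.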
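The "quadrillions of dollars" claim in point 8 can be made concrete with back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not numbers from the talk: a permanent annual gain valued as a perpetuity, NPV = annual_gain / discount_rate.

```python
# Illustrative sketch of point 8 (assumed figures, not from the talk).
annual_gain = 100e12   # assumed extra global output per year: $100 trillion
discount_rate = 0.05   # assumed 5% annual discount rate

# A permanent annual cash flow, valued as a perpetuity.
npv = annual_gain / discount_rate
print(f"NPV = ${npv / 1e15:.0f} quadrillion")  # $2 quadrillion
```

Even with conservative assumptions, the perpetuity value of raising global output lands in the quadrillions, which is why the economic stakes dwarf the cost of getting the foundations right.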
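The beneficial-AI model of points 9-11 can be sketched as a toy decision rule (my illustration, not the talk's formalism): an agent that is uncertain about human preferences maximizes expected human utility under its belief, and prefers to defer to the human when acting outright risks a strongly dispreferred outcome.

```python
def choose(belief, query_cost=0.1):
    """belief: list of (probability, utility_of_acting) hypotheses.

    Acting yields the expected utility under the belief; asking first
    costs `query_cost`, but the agent then acts only when utility > 0,
    so it never executes an action the human disprefers.
    """
    act_value = sum(p * u for p, u in belief)
    ask_value = sum(p * max(u, 0.0) for p, u in belief) - query_cost
    return "act" if act_value >= ask_value else "ask"

# Confident the human wants the action: just act.
print(choose([(0.9, 1.0), (0.1, -0.5)]))   # prints "act"
# Deeply uncertain, with a large downside: defer to the human.
print(choose([(0.5, 1.0), (0.5, -2.0)]))   # prints "ask"
```

The contrast with the standard model is the key point: a fixed-objective optimizer never asks, whereas uncertainty about preferences makes deference the rational choice exactly when the stakes of being wrong are high.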
Knowledge Vault built by David Vivancos 2024