Knowledge Vault 4/52 - AI For Good 2020
Beneficial AI to Advance the SDGs
Stuart Russell
< Resume Image >
Link to AI4Good Video on YouTube

Concept Graph & Resume using Claude 3 Opus | GPT-4o | Llama 3:

graph LR
  classDef shift fill:#f9d4d4, font-weight:bold, font-size:14px
  classDef potential fill:#d4f9d4, font-weight:bold, font-size:14px
  classDef progress fill:#d4d4f9, font-weight:bold, font-size:14px
  classDef prob fill:#f9f9d4, font-weight:bold, font-size:14px
  classDef misuse fill:#f9d4f9, font-weight:bold, font-size:14px
  classDef future fill:#d4d4f4, font-weight:bold, font-size:14px
  A[Beneficial AI to Advance the SDGs] --> B[Shift AI from fixed objectives. 1]
  A --> C[AI collaboration shows potential beyond movies. 2]
  A --> D[AI progress, but with limitations. 3]
  A --> E[Probabilistic programming enables universal modeling. 4]
  A --> F[AI to refocus on knowledge, reasoning. 5]
  A --> G[Address AI misuse: recognition, surveillance. 6]
  B --> H[New model: AI uncertain about preferences. 11]
  H --> I[Social choice theory helps address conflicts. 12]
  H --> J[Understand behavior-preferences link via sciences. 13]
  H --> K[AI enhances autonomy, better life choices. 14]
  H --> L[AI solves coordination problems, shapes future. 15]
  H --> M[Avoid too much AI, maintain motivation. 16]
  C --> N[AI helps, but accelerates risks. 17]
  N --> O[AI for good lacks definable objective. 18]
  N --> P[Wrong objective leads to AI harm. 19]
  N --> Q[AI isnt a cure-all solution. 20]
  N --> R[Time before AGI, but cant wait. 21]
  N --> S[Address algorithmic bias, weapons, surveillance now. 22]
  D --> T[Concern: AI good at wrong objectives. 23]
  T --> U[Educate, engage through advocacy on AI. 24]
  E --> V[Satellite imagery: potential for global monitoring. 25]
  V --> W[AI tutors can transform education accessibility. 26]
  V --> X[Low cost to add global AI. 27]
  V --> Y[Infrastructure cost outweighs AI deployment cost. 28]
  F --> Z[AI enhances autonomy, solves coordination. 29]
  F --> AA[AGI far, but must solve safety now. 30]
  class A,B shift
  class C,D progress
  class E prob
  class F potential
  class G misuse
  class H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,AA future

Resume:

1.- We need to shift from thinking of AI as optimizing fixed objectives to AI that is uncertain about and assists with human preferences.

2.- AI collaboration efforts like the Netflix Prize show potential, but we should aim higher than movie recommendations to benefit humanity.

3.- Progress in self-driving cars, game-playing AI, computer vision classification - but some concerning limitations and fragility in current approaches.

4.- Probabilistic programming combines probability theory with logic/programming, enabling universal modeling capability and reasoning over complex real-world models.
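
The talk gives no code, but the core idea of probabilistic programming - write the model as an ordinary program, then apply a generic inference procedure - can be sketched in plain Python. The weather model and its probabilities below are invented for illustration; real systems use dedicated languages and far more efficient inference than rejection sampling:

```python
import random

def weather_model():
    """A tiny generative model written as an ordinary program:
    cloudy weather makes rain more likely (invented probabilities)."""
    cloudy = random.random() < 0.5
    rain = random.random() < (0.8 if cloudy else 0.1)
    return cloudy, rain

def infer_p_cloudy_given_rain(n=100_000, seed=0):
    """Generic inference by rejection sampling: run the model many times,
    keep only runs consistent with the observation (rain),
    and read off the posterior P(cloudy | rain) from the survivors."""
    random.seed(seed)
    kept = [c for c, r in (weather_model() for _ in range(n)) if r]
    return sum(kept) / len(kept)

# Close to the exact posterior 0.5*0.8 / (0.5*0.8 + 0.5*0.1) ≈ 0.889
print(infer_p_cloudy_given_rain())
```

The point of "universal modeling" is that the model is arbitrary program code, so the same inference machinery applies to any simulatable process, not just fixed model families.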

5.- In the coming decade, AI will likely swing back to knowledge and reasoning, including mastering natural language understanding.

6.- AI has concerning misuses - facial recognition, fake content generation, surveillance, autonomous weapons. We need to address these.

7.- With advanced AI, many currently expensive/difficult tasks like construction, education, healthcare could become dramatically cheaper and more accessible globally.

8.- The economic value of human-level AI is astronomical - potentially quadrillions of dollars in net present value.

9.- The standard model of AI - optimizing a fixed objective - is fundamentally flawed because we can't specify the right objective.

10.- The better the AI, the worse the outcome if pursuing the wrong objective. History shows the danger of misspecified wishes/goals.

11.- We need a new model - beneficial AI where the AI is uncertain about human preferences and aims to satisfy them.
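
This shift can be sketched numerically. The toy example below is far simpler than Russell's formal assistance-game framework, and all actions, reward hypotheses, and numbers are invented; it only illustrates why uncertainty about human preferences makes an agent behave more cautiously than one with a fixed objective:

```python
# Toy illustration (all values hypothetical): a robot choosing among
# actions while uncertain which reward function the human actually has.
ACTIONS = ["a", "b", "wait"]

# Candidate reward hypotheses the robot entertains: under theta1 the
# human wants action "a"; under theta2, action "b". "wait" is neutral.
REWARD_HYPOTHESES = {
    "theta1": {"a": 1.0, "b": -2.0, "wait": 0.0},
    "theta2": {"a": -2.0, "b": 1.0, "wait": 0.0},
}

def best_action(belief):
    """Pick the action maximizing expected human reward under the
    robot's belief (a probability over reward hypotheses)."""
    def expected(action):
        return sum(p * REWARD_HYPOTHESES[t][action]
                   for t, p in belief.items())
    return max(ACTIONS, key=expected)

# A robot certain of its objective commits to "a" even if theta1 is wrong:
assert best_action({"theta1": 1.0, "theta2": 0.0}) == "a"
# An uncertain robot prefers the neutral "wait" (in effect deferring to
# the human), since both risky actions have negative expected reward:
assert best_action({"theta1": 0.5, "theta2": 0.5}) == "wait"
```

The design point: because mistakes are costly and the objective is uncertain, the expected-value calculation itself favors deferring or gathering more evidence about preferences, without any hand-coded "ask permission" rule.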

12.- For AI serving many humans, ideas from social choice theory, mechanism design, public policy can help address conflicting preferences.

13.- We have to understand the connection between human behavior and underlying preferences through cognitive science, psychology, neuroscience.

14.- AI can enhance human autonomy by freeing people from constraints and enabling them to learn and make better life choices.

15.- At a collective level, AI could help solve coordination problems and allow humanity to proactively shape the future rather than just react.

16.- But we don't want so much AI that humans become demotivated and fail to learn how to run civilization themselves.

17.- AI is already helping with global problems and will accelerate benefits, but is also accelerating risks like autonomous weapons, manipulation, surveillance.

18.- When we say "AI for good" we don't actually know how to define an objective for "good" that an AI can optimize.

19.- If we give AIs the wrong objective, even with good intent, they will cause harm in single-mindedly pursuing it. This requires new AI foundations.

20.- AI isn't a panacea - many barriers are human coordination and motivation. Technology alone can't solve all problems.

21.- We likely have time before AGI risk, but given the stakes and rate of progress, we cannot be complacent.

22.- Problems of algorithmic bias, autonomous weapons, fake content, surveillance must be addressed now as they'll get worse quickly with AI progress.

23.- Media discussions of evil robot overlords are a distraction - the real concern is AIs that are good at the wrong objective.

24.- As an individual, educating oneself then participating via writing/advocacy is how to engage with shaping the future of AI.

25.- Satellite imagery analysis has huge potential for environmental monitoring, urban planning, migration, shipping, fishing etc. if costs are pooled.

26.- With language and reasoning, AI tutors could dramatically accelerate education, making it far cheaper and more accessible to spread skills and knowledge.

27.- Once infrastructure is in place, adding AI for new global-scale services could cost just hundreds of thousands to a few million dollars.

28.- The ratio of infrastructure cost to AI deployment cost can be 1000:1 based on the global seismic monitoring example - enabling huge benefits.

29.- AI will enhance human autonomy and help solve coordination problems, but we must proactively define the future we want rather than react.

30.- We are still far from AGI that poses existential risk, but we don't know how long it will take to solve AI control and safety - we must begin now.

Knowledge Vault built by David Vivancos 2024