Knowledge Vault 4/96 - AI For Good 2024
Interview with Sam Altman (remote) and Nick Thompson (in-person)
Sam Altman
Link to IA4Good Video | View YouTube Video

Concept Graph & Summary using Claude 3 Opus | GPT-4o | Llama 3:

graph LR
  classDef productivity fill:#d4f9d4, font-weight:bold, font-size:14px
  classDef security fill:#f9d4d4, font-weight:bold, font-size:14px
  classDef languages fill:#d4d4f9, font-weight:bold, font-size:14px
  classDef progress fill:#f9f9d4, font-weight:bold, font-size:14px
  classDef model fill:#d4f9f9, font-weight:bold, font-size:14px
  classDef safety fill:#f9d4f9, font-weight:bold, font-size:14px
  classDef governance fill:#d9f4f9, font-weight:bold, font-size:14px
  classDef future fill:#d9d9d9, font-weight:bold, font-size:14px
  classDef regulations fill:#ffd4d4, font-weight:bold, font-size:14px
  A[Interview with Sam Altman remote and Nick Thompson in-person] --> B[AI boosting productivity in key sectors. 1]
  A --> C[AI poses early cybersecurity threats. 2]
  A --> D[GPT-4 covers many languages effectively. 3]
  A --> E[Uncertain AI progress aims for responsible releases. 4]
  A --> F[Next model trained on synthetic data partly. 5]
  A --> G[Goal: understand neurons holistic safety approach. 6]
  G --> H[AI must be human-compatible yet superhuman. 7]
  G --> I[OpenAI wont use real voices without permission. 8]
  G --> J[Integrate AI safety and capabilities for users. 9]
  G --> K[Leaders leaving doesnt mean safety ignored. 10]
  G --> L[Focus on AGI, aiming human-oriented world. 11]
  E --> M[Future: many language models, few dominate. 12]
  E --> N[AI alters internet use without overwhelming. 13]
  E --> O[AI benefits poorest, needs new social contracts. 14]
  E --> P[Society, economy need reconfiguration with AI. 15]
  Q[Interview with Sam Altman remote and Nick Thompson in-person] --> R[Current regulations miss long-term AI impacts. 16]
  Q --> S[Regulations need empirical evolution with AI. 17]
  Q --> T[Altman disagrees with critiques on governance. 18]
  Q --> U[Powerful AI could foster humility, awe. 19]
  Q --> V[AI may aid in governance through aggregation. 20]
  T --> W[Hard to replicate human subjective experience. 21]
  T --> X[Key skill: continual relearning with AI. 22]
  T --> Y[Tremendous AI upsides and serious risks. 23]
  T --> Z[New Safety and Security board for next model. 24]
  T --> AA[Steep AI improvement requires proactive policies. 25]
  P --> AB[Need frameworks balancing short and long-term. 26]
  P --> AC[Dont neglect transformative AI considerations. 27]
  P --> AD[Iterative safe AI, learning, co-evolution approach. 28]
  P --> AE[Science reduces human-centricity over time. 29]
  P --> AF[Balance AI benefits, mitigate societal risks. 30]
  class B productivity
  class C security
  class D languages
  class E progress
  class F model
  class G safety
  class H,I,J,K,L safety
  class M,N,O,P future
  class Q regulations
  class R,S,T,U,V governance
  class W,X,Y,Z,AA governance
  class AB,AC,AD,AE,AF future

Summary:

1.- AI is starting to increase productivity in areas like software development, education, and healthcare.

2.- Cybersecurity threats could be an early negative impact of AI and need attention.

3.- OpenAI's GPT-4 model has very good coverage of a wide variety of languages, serving 97% of people in their primary language.

4.- It's unclear if AI progress will be linear, asymptotic, or exponential, but OpenAI aims to responsibly release the best models they can create.

5.- The next OpenAI model will be trained partly on synthetic data generated by other language models, but it's inefficient to rely on this entirely.
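
As a toy illustration of that idea (not OpenAI's actual pipeline), the sketch below has a hypothetical "teacher" model label randomly generated inputs, and a "student" model fit to those synthetic pairs; the linear teacher, the logistic-regression student, and all names are illustrative assumptions.

  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical "teacher": a fixed linear scorer standing in for a large LM.
  w_teacher = np.array([1.5, -2.0, 0.5])

  def teacher_label(X):
      # Synthetic labels come from the teacher's predictions, not from humans.
      p = 1 / (1 + np.exp(-X @ w_teacher))
      return (p > 0.5).astype(float)

  # Generate a synthetic dataset entirely from the teacher.
  X = rng.normal(size=(1000, 3))
  y = teacher_label(X)

  # Train a "student" logistic regression on the teacher-generated pairs.
  w = np.zeros(3)
  for _ in range(500):
      p = 1 / (1 + np.exp(-X @ w))
      w -= 0.5 * X.T @ (p - y) / len(X)  # cross-entropy gradient step

  print("student weights:", w)  # roughly aligns with the teacher's direction

Note that the student can at best recover what the teacher already encodes, which is one intuition for why relying entirely on synthetic data is inefficient.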

6.- Increased model interpretability, understanding what happens at the neuron level, is a goal, but safety will require a whole-package approach.
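
A minimal sketch of what "understanding at the neuron level" can mean in practice, using a toy random-weight layer (the shapes, data, and probing method are assumptions for illustration, not OpenAI's tooling): probe the layer with many inputs and ask which inputs each neuron responds to most strongly.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy hidden layer with random weights: 8 input features -> 16 "neurons".
  W = rng.normal(size=(8, 16))

  # Probe the layer with a batch of inputs and record every ReLU activation.
  X = rng.normal(size=(200, 8))
  A = np.maximum(0, X @ W)  # activations, shape (200, 16)

  # For each neuron, list the inputs that excite it most; with real models,
  # inspecting top-activating examples is one basic interpretability probe.
  for n in range(A.shape[1]):
      top = np.argsort(A[:, n])[-3:][::-1]
      print(f"neuron {n}: top inputs {top.tolist()}, max activation {A[top[0], n]:.2f}")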

7.- Making AI systems human-compatible in how they operate and communicate is important, but their underlying capabilities are superhuman and alien.

8.- OpenAI does not impersonate real people's voices, like Scarlett Johansson's, in their voice model without permission.

9.- AI safety and capabilities work are intertwined; it's about building integrated systems that safely and capably accomplish what users want.

10.- Key safety leaders like Ilya Sutskever leaving OpenAI doesn't mean safety isn't prioritized; work is integrated across teams.

11.- AGI has been a major focus for OpenAI, but they aim to shape an AI-enabled world that stays maximally human-oriented.

12.- In 3 years there may be hundreds or thousands of large language models, with a small number getting the majority of usage.

13.- The way people use the internet may change with AI, but it won't become incomprehensible or overwhelmed with spam.

14.- AI will likely do more to help the world's poorest than its richest, bringing abundance and prosperity, but may require social contract changes.

15.- Over the long term, the entire structure of society and the economy may need reconfiguration as AI becomes extremely powerful.

16.- Regulations so far have focused on short-term issues like elections, not bigger questions of what happens when AI can reshape the economy.

17.- Regulatory frameworks will need to evolve empirically by putting AI systems out in the world and learning, not just theorizing in advance.

18.- Altman disagrees with critiques from former board members that OpenAI's governance and oversight have been dysfunctional.

19.- Creating AI more powerful than humans could increase humility and awe rather than egotism, as science reveals our small place in the universe.

20.- Altman envisions a possible future where AI helps everyone have a say in governance by understanding and aggregating individual preferences.
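
Preference aggregation itself is an old, concrete problem; as a deliberately simple stand-in for the much richer AI-mediated aggregation Altman gestures at, the sketch below tallies ranked ballots with a classical Borda count (the policy options and ballot data are invented for illustration).

  from collections import defaultdict

  # Each person submits a ranked list of options, best first.
  ballots = [
      ["parks", "transit", "housing"],
      ["transit", "housing", "parks"],
      ["housing", "transit", "parks"],
  ]

  # Borda count: an option earns (n - 1 - rank) points on each ballot.
  scores = defaultdict(int)
  for ballot in ballots:
      n = len(ballot)
      for rank, option in enumerate(ballot):
          scores[option] += n - 1 - rank

  # The aggregate ordering over everyone's stated preferences.
  for option, score in sorted(scores.items(), key=lambda kv: -kv[1]):
      print(option, score)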

21.- Determining if there are things about human brains/minds, like subjective experience, that can't be replicated in AI is a difficult question.

22.- An important human skill in an AI-enabled future will be learning how to continually relearn things as the world rapidly changes.

23.- There are both tremendous upsides and serious risks to mitigate as AI becomes extremely powerful.

24.- OpenAI launched a new board-level Safety and Security Committee to prepare for their next model release.

25.- The trajectory of AI improvement is likely to remain steep, so we need to proactively develop the right structures and policies.

26.- Companies, countries, and the international community need preparedness frameworks that balance both short-term and long-term issues.

27.- We shouldn't neglect the long-term considerations around transformative AI, but also shouldn't assume we are near an asymptote in progress.

28.- OpenAI believes in an iterative approach, putting safe AI systems out in the world, learning empirically, and co-evolving technology and society together.

29.- The history of science has been marked by decreasing human-centricity as we discover the true scale of the universe.

30.- AI development requires simultaneously capturing the incredible benefits for the world while carefully mitigating societal risks on different time scales.

Knowledge Vault built by David Vivancos 2024