Knowledge Vault 7/362 - xHubAI 11/08/2025
🔴 WHAT IS NEXT IN AI? What's next in AI systems and LLMs
< Resume Image >
Link to Interview | Original xHubAI Video

Concept Graph, Resume & Key Ideas using Moonshot Kimi K2 0905:

```mermaid
graph LR
    classDef greet fill:#ffe0b3,font-weight:bold,font-size:14px;
    classDef comm fill:#b3d9ff,font-weight:bold,font-size:14px;
    classDef hype fill:#ffb3b3,font-weight:bold,font-size:14px;
    classDef health fill:#b3ffb3,font-weight:bold,font-size:14px;
    classDef hist fill:#e6b3ff,font-weight:bold,font-size:14px;
    classDef scale fill:#ffffb3,font-weight:bold,font-size:14px;
    classDef align fill:#b3ffff,font-weight:bold,font-size:14px;
    classDef arch fill:#ffb3e0,font-weight:bold,font-size:14px;
    classDef kit fill:#d9ffb3,font-weight:bold,font-size:14px;
    classDef plat fill:#ffb3d9,font-weight:bold,font-size:14px;
    Main[Vault7-362]
    Main --> P1[22k fans greet daily papers 1]
    P1 -.-> G1[Greet]
    Main --> P2[XHubAI five 160 eps Spanish 2]
    P2 -.-> G2[Community]
    Main --> P3[80k views 99% positive 3]
    P3 -.-> G2
    Main --> P4[Irregular slots shield mind 4]
    P4 -.-> G3[Health]
    Main --> P5[GPT5 hype critical antidote 5]
    P5 -.-> G4[Hype]
    Main --> P6[Wang interview unearthed 2022 6]
    P6 -.-> G5[History]
    Main --> P7[Ilya nets at 16 7]
    P7 -.-> G5
    Main --> P8[2000s snubbed nets no theorems 8]
    P8 -.-> G5
    Main --> P9[OpenAI born science+engineering 9]
    G5 --> P9
    Main --> P10[Prediction core GPT scaling 10]
    P10 -.-> G6[Scaling]
    Main --> P11[Codex reborn no-code hint 11]
    P11 -.-> G6
    Main --> P12[CLIP DALL-E embodied path 12]
    P12 -.-> G6
    Main --> P13[Scaling laws compute data 13]
    P13 -.-> G6
    Main --> P14[Data scarcity caps law 14]
    P14 -.-> G6
    Main --> P15[Data alone no morals 15]
    P15 -.-> G7[Align]
    Main --> P16[Krishnamurti instant action 16]
    P16 -.-> G7
    Main --> P17[Freedom unknown beats code 17]
    P17 -.-> G7
    Main --> P18[Network nets beyond LLM 18]
    P18 -.-> G8[Arch]
    Main --> P19[HRM first brain layer 19]
    P19 -.-> G8
    Main --> P20[Summer tracks modular routing 20]
    P20 -.-> G8
    Main --> P21[Codex kit open GitHub 21]
    P21 -.-> G9[Kit]
    Main --> P22[Remix kit test alignment 22]
    P22 -.-> G9
    Main --> P23[Design audits before deploy 23]
    P23 -.-> G7
    Main --> P24[Trolls blocked keep thinkers 24]
    Main --> P25[Multistream YouTube LinkedIn 25]
    P25 -.-> G10[Platform]
    G1[Greet] --> P1
    G2[Community] --> P2
    G2 --> P3
    G3[Health] --> P4
    G4[Hype] --> P5
    G5[History] --> P6
    G5 --> P7
    G5 --> P8
    G5 --> P9
    G6[Scaling] --> P10
    G6 --> P11
    G6 --> P12
    G6 --> P13
    G6 --> P14
    G7[Align] --> P15
    G7 --> P16
    G7 --> P17
    G7 --> P23
    G8[Arch] --> P18
    G8 --> P19
    G8 --> P20
    G9[Kit] --> P21
    G9 --> P22
    G10[Platform] --> P25
    class P1 greet
    class P2,P3 comm
    class P4 health
    class P5 hype
    class P6,P7,P8,P9 hist
    class P10,P11,P12,P13,P14 scale
    class P15,P16,P17,P23 align
    class P18,P19,P20 arch
    class P21,P22 kit
    class P25 plat
```

Resume:

The host opens with gratitude to the live Sunday audience, framing the stream as an “antidepressant” against the hype and confusion surrounding GPT-5. He introduces the Spanish-language AI community XHubAI, now five years old, 160 episodes strong, and still independent. After apologizing for his irregular schedule (he records in bursts to protect his mental health), he announces a packed summer slate: daily paper analyses, starting with the Hierarchical Reasoning Model (HRM) and followed by Archassis, Agencial Evolución de Superinteligencia, and others. The viral success of the recent “Inversión Racional” dialogue (80k views in four days) is celebrated, yet the host warns new viewers that trolling and low-effort comments will be blocked; the space is meant for serious, critical thinking.
The core of the episode is a forgotten 2022 interview in which Alexander Wang asks Ilya Sutskever about the future of AI. Sutskever recounts falling in love with neural nets at sixteen, the long “desert” of disinterest before compute caught up, and why OpenAI was created: to fuse science and engineering into one discipline and to confront safety issues early. He explains that GPT’s power emerged from the simple insight that prediction equals understanding; scale data and compute together, and emergent capabilities appear. Codex is praised for turning natural language into executable code, foreshadowing a future where programmers merely describe intent. Multimodal models such as CLIP and DALL-E are highlighted for binding text and vision, moving AI closer to an embodied, human-like understanding of the world.
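Sutskever's point that data and compute must grow together can be sketched with a Chinchilla-style scaling law. The functional form and the constants below are loosely based on published fits and are purely illustrative assumptions, not figures from the interview:

```python
# Illustrative Chinchilla-style scaling law: predicted loss falls
# fastest when parameters N and training tokens D grow together.
def loss(N: float, D: float,
         E: float = 1.69, A: float = 406.4, B: float = 410.7,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted loss for a model with N parameters trained on D tokens.

    E is the irreducible loss; the two power-law terms shrink as the
    model (N) and the dataset (D) are scaled up, respectively.
    """
    return E + A / N**alpha + B / D**beta

base = loss(1e9, 20e9)          # 1B params, 20B tokens
more_params = loss(10e9, 20e9)  # 10x params, same data
balanced = loss(10e9, 200e9)    # 10x params AND 10x data
# Scaling both axes lowers the predicted loss more than scaling
# parameters alone -- the data term otherwise becomes the bottleneck.
assert balanced < more_params < base
```

The data term `B / D**beta` is also where the episode's data-scarcity caveat bites: in narrow domains where D cannot grow, that term floors the achievable loss no matter how large N gets.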
Closing reflections pivot to philosophy: Krishnamurti’s claim that true intelligence is instantaneous perception-and-action without the delay of thought is juxtaposed with the limits of brute-force scaling. The host argues that while bigger data and larger matrices may yield impressive narrow skills, they will not deliver personality, identity or moral alignment unless we weave multiple specialized nets—memory, creativity, empathy—into a higher-order cognitive architecture. HRM, he insists, is the first crack in that wall: a brain-inspired layer above today’s transformers that could become one module in a federation of models. The summer series will therefore explore not only new algorithms but how to orchestrate them, how to insert “attractors” that steer behaviour, and how to preserve human agency once AIs can act in real time. Viewers are invited to download the forthcoming “Gentic AI Codex 2025” kit, a free GitHub bundle of papers and prompts that will evolve with every episode.
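The “federation of models” the host envisions is speculative, but the orchestration idea can be illustrated with a toy routing sketch. The module names and the keyword policy below are hypothetical placeholders of my own, not anything proposed in the episode; in a real system each module would be a specialized model (HRM could be one of them) rather than a plain function:

```python
# Hypothetical sketch of a "network of networks": a thin router
# dispatches each request to a specialized module (memory, creativity,
# empathy), instead of sending everything through one monolithic LLM.
from typing import Callable, Dict

# Stand-in modules; each would be a dedicated model in practice.
def memory_module(query: str) -> str:
    return f"[memory] recalled context for: {query}"

def creativity_module(query: str) -> str:
    return f"[creativity] brainstormed ideas for: {query}"

def empathy_module(query: str) -> str:
    return f"[empathy] considered feelings in: {query}"

MODULES: Dict[str, Callable[[str], str]] = {
    "remember": memory_module,
    "invent": creativity_module,
    "feel": empathy_module,
}

def route(query: str) -> str:
    """Naive routing policy: first keyword match wins, memory is the default.

    A learned router (or the "attractors" the host mentions) would
    replace this lookup with a trained policy over the modules.
    """
    for keyword, module in MODULES.items():
        if keyword in query.lower():
            return module(query)
    return memory_module(query)

print(route("Invent a new story"))  # dispatched to the creativity module
```

The design point is that alignment hooks live in the router: auditing or constraining one dispatch policy is a smaller surface than auditing a single end-to-end model.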

Key Ideas:

1.- Host greets 22k YouTube followers, vows daily summer paper analyses starting with HRM.

2.- XHubAI community turns five, 160 episodes, remains proudly independent and Spanish-speaking.

3.- Viral “Inversión Racional” clip hit 80k views in four days, 99% positive comments.

4.- Host admits irregular schedule to protect mental health, records in creative bursts.

5.- GPT-5 hype is called “noise”; the stream aims to be a critical antidote.

6.- Forgotten 2022 Wang–Sutskever interview unearthed for fresh commentary.

7.- Ilya recounts discovering neural nets at 16 in Toronto Public Library.

8.- Early 2000s academia rejected neural nets for lack of provable theorems.

9.- OpenAI founded to merge science & engineering and confront safety early.

10.- Prediction-as-understanding core belief led to GPT scaling success.

11.- Codex reborn: natural language → executable code, foreshadowing a no-code future.

12.- Multimodal CLIP/DALL-E bind text and vision toward embodied AI.

13.- Scaling laws: simultaneous compute + data increases yield emergent capabilities.

14.- Data scarcity in narrow domains (e.g., law) may cap pure scaling.

15.- Host argues brute-force data alone cannot create personality or moral alignment.

16.- Krishnamurti excerpt: true intelligence is instantaneous, attachment-free action.

17.- Freedom from the known distinguishes humans from programmed computers.

18.- Host proposes “network of networks” cognitive architecture beyond single LLM.

19.- HRM presented as first brain-inspired layer above transformers.

20.- Summer series will track modular architectures, attractors, routing policies.

21.- Gentic AI Codex 2025 kit promised: open-source papers + prompts on GitHub.

22.- Community invited to remix kit to test alignment hypotheses in silico.

23.- Alignment framed as design problem: audit incentives before deployment.

24.- Host warns trolls will be blocked; space reserved for serious critical thinking.

25.- Stream is simulcast to YouTube, LinkedIn, Rumble, Kick, and Twitch.

Interviews by Plácido Doménech Espí & Guests - Knowledge Vault built by David Vivancos 2025