Knowledge Vault 7/359 - xHubAI 08/08/2025
🔴 OpenAI GPT-5 LIVE! Presentation
< Resume Image >
Link to Interview / Original xHubAI Video

Concept Graph, Resume & Key Ideas using Moonshot Kimi K2 0905:

graph LR classDef core fill:#ffd4d4,font-weight:bold,font-size:14px; classDef size fill:#d4ffd4,font-weight:bold,font-size:14px; classDef bench fill:#d4d4ff,font-weight:bold,font-size:14px; classDef api fill:#ffffd4,font-weight:bold,font-size:14px; classDef price fill:#ffd4ff,font-weight:bold,font-size:14px; classDef open fill:#d4ffff,font-weight:bold,font-size:14px; classDef vibe fill:#fff0d4,font-weight:bold,font-size:14px; classDef safe fill:#f0d4ff,font-weight:bold,font-size:14px; classDef meta fill:#d4fff0,font-weight:bold,font-size:14px; Main[Vault7-359] Main --> C1[Speed
& reason merge. 1] C1 -.-> G1[Core] Main --> C2[Three sizes:
Full Mini Nano. 2] C2 -.-> G2[Size] Main --> C3[SOTA claims
on three benches. 3] C3 -.-> G3[Bench] Main --> C4[100 % scores
spark over-fit doubt. 4] C4 -.-> G3 Main --> C5[Distilled from
O3 curriculum. 5] C5 -.-> G1 Main --> C6[400 k window
4× length. 6] C6 -.-> G1 Main --> C7[API adds
reasoning slider. 7] C7 -.-> G4[API] Main --> C8[Regex grammar
output constraints. 8] C8 -.-> G4 Main --> C9[Tool-call
preambles explain. 9] C9 -.-> G4 Main --> C10[Verbosity:
low med high. 10] C10 -.-> G4 Main --> C11[Free drops
to Mini. 11] C11 -.-> G5[Price] Main --> C12[Pro keeps
unlimited plus. 12] C12 -.-> G5 Main --> C13[Gmail Calendar
memory Pro. 13] C13 -.-> G1 Main --> C14[Voice lasts
hours natural. 14] C14 -.-> G1 Main --> C15[Themes &
sarcasm toggles. 15] C15 -.-> G6[Vibe] Main --> C16[Safe path
after refusal. 16] C16 -.-> G7[Safe] Main --> C17[Hallucination
drop vs O3. 17] C17 -.-> G7 Main --> C18[Pricing $1.25
$10 per M. 18] C18 -.-> G5 Main --> C19[Nano 1/25
cost fast. 19] C19 -.-> G2 Main --> C20[OSS 20 B
rivals GPT-4. 20] C20 -.-> G8[Open] Main --> C21[270 tok/s
on RTX 5090. 21] G8 --> C21 Main --> C22[Apache 2.0
bans retrain. 22] G8 --> C22 Main --> C23[PhD pocket
slogan mocked. 23] C23 -.-> G9[Meta] Main --> C24[Cancer tale
praised panned. 24] C24 -.-> G9 Main --> C25[Spanish panel
calls infomercial. 25] C25 -.-> G9 Main --> C26[100 % scores
useless per experts. 26] C26 -.-> G3 Main --> C27[Google Gemini
seen winner. 27] C27 -.-> G9 Main --> C28[Claude 4.1
gains ground. 28] C28 -.-> G9 Main --> C29[No jaw-drop
multimodal demo. 29] C29 -.-> G9 Main --> C30[No AGI
superintelligence said. 30] C30 -.-> G9 Main --> C31[August roll-out
not months. 31] C31 -.-> G1 Main --> C32[Memory learns
user schedules. 32] C32 -.-> G1 Main --> C33[Canvas auto-SVG
interactive demos. 33] C33 -.-> G4 Main --> C34[Front-end quality
70 % vs O3. 34] C34 -.-> G1 Main --> C35[Agent coding
45 min alone. 35] C35 -.-> G1 Main --> C36[Cursor adopts
GPT-5 default. 36] C36 -.-> G4 Main --> C37[Cursor trial
days free. 37] C37 -.-> G4 Main --> C38[PDF bug
fixed live. 38] C38 -.-> G4 Main --> C39[Voice quizzes
Korean café. 39] C39 -.-> G6 Main --> C40[Nano hinted
pre-launch. 40] C40 -.-> G2 Main --> C41[Mini-Nano
Apple memes. 41] C41 -.-> G9 Main --> C42[Streamed on
YT LinkedIn Rumble. 42] C42 -.-> G9 Main --> C43[Reject Meta
for rebeldía. 43] C43 -.-> G9 Main --> C44[170 Spanish
AI episodes. 44] C44 -.-> G9 Main --> C45[Discord shares
books models. 45] C45 -.-> G9 Main --> C46[Want Ilya
Demis Elon. 46] C46 -.-> G9 Main --> C47[US AI lead
at risk. 47] C47 -.-> G9 Main --> C48[China mocks
GPT-petardo-5. 48] C48 -.-> G9 Main --> C49[Elon tweets
Grok 4 wins. 49] C49 -.-> G9 Main --> C50[Brand fatigue
like Bard. 50] C50 -.-> G9 Main --> C51[Weak Spanish
speakers slammed. 51] C51 -.-> G9 Main --> C52[Want visionary
not emotional. 52] C52 -.-> G9 Main --> C53[Synthetic loop
self-improve hint. 53] C53 -.-> G1 Main --> C54[Safe reduce
blunt refusals. 54] C54 -.-> G7 Main --> C55[HealthBench by
250 doctors. 55] C55 -.-> G3 Main --> C56[Health scores
top ever. 56] G3 --> C56 Main --> C57[Amgen tests
drug design. 57] C57 -.-> G3 Main --> C58[BBVA cuts
weeks to hours. 58] C58 -.-> G3 Main --> C59[Org-wide
rate limits. 59] C59 -.-> G5 Main --> C60[EDU tier
next week. 60] C60 -.-> G5 Main --> C61[Custom GPTs
get voice. 61] C61 -.-> G4 Main --> C62[Auto packing
lists from Gmail. 62] C62 -.-> G1 Main --> C63[Fixes own
lint errors. 63] C63 -.-> G1 Main --> C64[Chooses React
Tailwind auto. 64] C64 -.-> G1 Main --> C65[Purple theme
running joke. 65] C65 -.-> G6 Main --> C66[Want tokenizer
open-source. 66] C66 -.-> G9 Main --> C67[August date
damage control. 67] C67 -.-> G9 Main --> C68[4.5 failure
still fresh. 68] C68 -.-> G9 Main --> C69[Expect leap
in GPT-6. 69] C69 -.-> G9 Main --> C70[Undercuts Claude
monthly fee. 70] C70 -.-> G5 Main --> C71[Nano targets
edge IoT. 71] C71 -.-> G2 Main --> C72[Mini fits
mid GPU. 72] C72 -.-> G2 Main --> C73[Full wants
H200 speed. 73] C73 -.-> G2 Main --> C74[Mac RAM praised
bandwidth capped. 74] C74 -.-> G8 Main --> C75[>100 % dubbed
over-fit dash. 75] C75 -.-> G3 Main --> C76[No Sora
video demo. 76] C76 -.-> G9 Main --> C77[No computer-use
shown. 77] C77 -.-> G9 Main --> C78[Google silent
Gemini soon. 78] C78 -.-> G9 Main --> C79[Reg urgency
pulled Spanish bill. 79] C79 -.-> G9 Main --> C80[Brand equity
depleted post-event. 80] C80 -.-> G9 Main --> C81[Model good
launch disaster. 81] C81 -.-> G9 G1[Core] --> C1 G1 --> C5 G1 --> C6 G1 --> C13 G1 --> C31 G1 --> C32 G1 --> C34 G1 --> C35 G1 --> C53 G1 --> C62 G1 --> C63 G1 --> C64 G2[Size] --> C2 G2 --> C19 G2 --> C40 G2 --> C71 G2 --> C72 G2 --> C73 G3[Bench] --> C3 G3 --> C4 G3 --> C26 G3 --> C55 G3 --> C56 G3 --> C57 G3 --> C58 G3 --> C75 G4[API] --> C7 G4 --> C8 G4 --> C9 G4 --> C10 G4 --> C33 G4 --> C36 G4 --> C37 G4 --> C38 G4 --> C61 G5[Price] --> C11 G5 --> C12 G5 --> C18 G5 --> C59 G5 --> C60 G5 --> C70 G6[Vibe] --> C15 G6 --> C39 G6 --> C65 G7[Safe] --> C16 G7 --> C17 G7 --> C54 G8[Open] --> C20 G8 --> C21 G8 --> C22 G8 --> C74 G9[Meta] --> C23 G9 --> C24 G9 --> C25 G9 --> C27 G9 --> C28 G9 --> C29 G9 --> C30 G9 --> C41 G9 --> C42 G9 --> C43 G9 --> C44 G9 --> C45 G9 --> C46 G9 --> C47 G9 --> C48 G9 --> C49 G9 --> C50 G9 --> C51 G9 --> C52 G9 --> C66 G9 --> C67 G9 --> C68 G9 --> C69 G9 --> C76 G9 --> C77 G9 --> C78 G9 --> C79 G9 --> C80 G9 --> C81 class C1,C5,C6,C13,C31,C32,C34,C35,C53,C62,C63,C64 core class C2,C19,C40,C71,C72,C73 size class C3,C4,C26,C55,C56,C57,C58,C75 bench class C7,C8,C9,C10,C33,C36,C37,C38,C61 api class C11,C12,C18,C59,C60,C70 price class C15,C39,C65 vibe class C16,C17,C54 safe class C20,C21,C22,C74 open class C23,C24,C25,C27,C28,C29,C30,C41,C42,C43,C44,C45,C46,C47,C48,C49,C50,C51,C52,C66,C67,C68,C69,C76,C77,C78,C79,C80,C81 meta

Resume:

OpenAI’s launch event for GPT-5 was framed as a milestone on the road to AGI, yet the two-hour broadcast felt more like a carefully scripted infomercial than the scientific watershed the community expected. Executives repeated that the model now “thinks like a PhD,” but the demos rarely strayed from polished party-planning, SVG animations and French-vocabulary games. While benchmark slides claimed state-of-the-art scores on SWE-Bench, MMMU and HealthBench, critics noted that several metrics topped out at 100 %, a statistical red flag for over-fitting. The three-model lineup (GPT-5, GPT-5 Mini and GPT-5 Nano) was introduced with tiered pricing ($1.25 / $10 per million tokens for the full model, down to Nano at a 25× discount), but no radical architectural reveal accompanied the numbers. Instead, the stream lingered on emotional testimonials, including a cancer patient who described using ChatGPT to parse biopsy jargon, a segment many viewers praised as humane while others condemned it as exploitative sentimentality.
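The tiered pricing quoted above reduces to simple per-request arithmetic. A minimal sketch, assuming the quoted rates ($1.25 in / $10 out per million tokens for full GPT-5) and assuming the "1/25 cost" claim for Nano applies uniformly to both input and output; the model identifiers are illustrative, not official:

```python
# Per-million-token rates as quoted at the launch event.
# The Nano rates below are an assumption: the flat 25x discount
# applied to both directions.
PRICES = {
    "gpt-5":      {"in": 1.25,      "out": 10.00},
    "gpt-5-nano": {"in": 1.25 / 25, "out": 10.00 / 25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

# Example: a 10k-token prompt with a 2k-token answer on full GPT-5
# costs 0.0125 + 0.02 = $0.0325; the same request on Nano costs 1/25 of that.
cost_full = request_cost("gpt-5", 10_000, 2_000)
cost_nano = request_cost("gpt-5-nano", 10_000, 2_000)
```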
Under the hood, OpenAI confirmed that GPT-5 unifies the previously split paths of fast “GPT” and slow “reasoning” models: a single network now decides how long to deliberate before answering. Distillation from O3-generated synthetic data was highlighted as the secret sauce, allowing a 490-billion-parameter teacher to compress into the 70-billion-parameter Mini without heavy loss. API users gain a new “minimal” reasoning-effort flag, 400 k-token context windows, custom regex-constrained outputs and tool-call preambles, all shipping today. Enterprise and EDU tiers arrive next week, while free-tier users cycle into GPT-5 Mini after a yet-unspecified quota. Safety was addressed with “safe completions”: the model may refuse risky steps but must explain why and suggest compliant alternatives, a policy designed to curb both hallucinations and unhelpful blanket refusals. Voice mode, memory, Gmail/Calendar integration and color themes were also upgraded, reinforcing OpenAI’s shift toward consumer stickiness rather than pure research spectacle.
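The new API knobs described above (the "minimal" reasoning-effort flag, the verbosity parameter, regex-constrained outputs) can be pictured as fields in a request body. The sketch below only builds and checks a payload locally and never calls any API; the field shapes (`reasoning.effort`, `text.verbosity`) are assumptions based on the talk, and `validate_output` is a hypothetical client-side helper mirroring what a regex constraint would enforce server-side:

```python
import re

def build_request(prompt: str, effort: str = "minimal", verbosity: str = "low") -> dict:
    """Assemble a GPT-5-style request body. Field names follow the
    features described at the launch; treat them as illustrative."""
    assert effort in {"minimal", "low", "medium", "high"}
    assert verbosity in {"low", "medium", "high"}
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},
        "text": {"verbosity": verbosity},
    }

def validate_output(text: str, pattern: str) -> bool:
    """Hypothetical local check: the whole reply must match the regex,
    as a constrained-output grammar would guarantee."""
    return re.fullmatch(pattern, text) is not None

req = build_request("Return an ISO date.", effort="minimal", verbosity="low")
ok = validate_output("2025-08-08", r"\d{4}-\d{2}-\d{2}")
```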
Reaction in the Spanish-speaking AI community was scathing. Panelists who had stayed online past 2 a.m. to cover the event called it “una hora y pico de anuncios sin parar” (“an hour and change of non-stop ads”) that buried genuine technical gains under marketing clichés. They praised the open-source GPT-OSS 20-billion model for rivaling GPT-4 on consumer GPUs at 270 tokens/s, and warned that GPT-5’s modest delta could cede narrative ground to Google’s Gemini 2.5 Pro or Anthropic’s Claude 4. The consensus: OpenAI squandered cultural capital by replacing rock-star researchers with inexperienced presenters, over-promising AGI and under-delivering wow-factor demos. If the benchmarks hold, GPT-5 will still be the best coding and health-consultation model available, but the launch spectacle may have damaged the brand more than it helped, handing rivals a rare opportunity to reclaim the spotlight.
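The 270 tokens/s figure quoted for GPT-OSS 20B on an RTX 5090 translates directly into wall-clock estimates. A trivial sketch, ignoring prompt processing and batching effects:

```python
def generation_seconds(num_tokens: int, tokens_per_second: float = 270.0) -> float:
    """Wall-clock time to stream num_tokens at a steady decode rate
    (270 tok/s is the RTX 5090 figure quoted by the panel)."""
    return num_tokens / tokens_per_second

# A 2,000-token answer at the quoted rate takes roughly 7.4 seconds.
t = generation_seconds(2_000)
```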

Key Ideas:

1.- GPT-5 merges speedy and reasoning models into one.

2.- Three sizes: full, Mini (70 B), Nano (cheapest).

3.- Claims SOTA on SWE-Bench, MMMU, HealthBench.

4.- Some evals hit 100 %, raising over-fit suspicions.

5.- Distilled from O3 synthetic curriculum, not new arch.

6.- 400 k context window, 4× prior length.

7.- API adds “minimal” reasoning-effort slider.

8.- Custom regex/grammar output constraints shipped.

9.- Tool-call preambles explain upcoming actions.

10.- Verbosity parameter: low, medium, high.

11.- Free users downgrade to GPT-5 Mini after quota.

12.- Pro tier keeps unlimited GPT-5 plus extended thinking.

13.- Gmail & Calendar access for memory, starting Pro.

14.- Voice mode now hours-long, more natural prosody.

15.- Color themes and sarcastic personality toggles.

16.- Safe completions: explain refusal, offer safe path.

17.- Hallucination rate said to drop vs O3/O4.

18.- Pricing: $1.25 in / $10 out per million tokens.

19.- Nano costs 1/25 of full model, faster.

20.- GPT-OSS 20 B open-weights rivals GPT-4 locally.

21.- OSS runs 270 tokens/s on RTX 5090.

22.- OSS license Apache 2.0 but bans retraining.

23.- Community mocks “PhD-in-your-pocket” slogan.

24.- Cancer-patient story praised and criticized.

25.- Spanish panel calls event “endless infomercial.”

26.- Benchmark 100 % scores labeled useless by experts.

27.- Google Gemini 2.5 Pro seen as big winner.

28.- Claude 4.1 Opus also perceived as gaining ground.

29.- Event lacked multimodal jaw-dropping demo.

30.- No AGI or superintelligence announced.

31.- Roll-out promised through August, not months.

32.- Memory upgrade learns user schedules.

33.- Canvas tool auto-creates interactive SVG demos.

34.- Front-end code quality preferred 70 % of the time vs O3.

35.- Agentic coding chains run 45 min unattended.

36.- Cursor IDE adopts GPT-5 as default today.

37.- Cursor users get free trial days.

38.- SDK PDF-upload bug fixed live on stage.

39.- Voice study mode quizzes Korean café phrases.

40.- GPT-5 Nano hinted at on OpenAI site pre-launch.

41.- Community memes “GPT-5 Mini-Nano” Apple-style.

42.- Event streamed on YouTube, LinkedIn, Rumble, Twitch.

43.- Hosts reject Meta platforms out of rebeldía (rebelliousness).

44.- 170 prior Spanish AI community episodes cited.

45.- Discord group shares free books, models, tools.

46.- Panelists want Ilya, Demis, Elon on stage.

47.- U.S. geopolitical AI leadership said to be at risk.

48.- Chinese humor shows mock it as “GPT-petardo-5” (“petardo”: dud).

49.- Elon tweets Grok 4 Heavy beats GPT-5.

50.- OpenAI risks brand fatigue like Google Bard.

51.- Presentation speakers criticized for weak Spanish.

52.- Cultural call for visionary, not emotional, demos.

53.- Synthetic data loop foreshadows self-improvement.

54.- Safe completions reduce blunt refusals.

55.- HealthBench built with 250 physicians.

56.- GPT-5 scores higher than any prior model on health.

57.- Amgen tests GPT-5 for drug-design reasoning.

58.- BBVA cuts financial-analysis time 3 weeks→hours.

59.- Enterprise rate limits support whole org usage.

60.- EDU tier launches next week.

61.- Custom GPTs now support voice.

62.- Memory can auto-build packing lists from Gmail.

63.- GPT-5 fixes its own lint errors during build.

64.- Model chooses React/Tailwind without prompting.

65.- Purple color theme becomes running joke.

66.- Panel wants tokenizer open-sourced next.

67.- August rollout date seen as damage control.

68.- GPT-4.5 failure still fresh in minds.

69.- Community expects architecture leap in GPT-6.

70.- Pricing undercuts Claude Pro monthly fee.

71.- GPT-5 Nano targets edge IoT deployments.

72.- Mini suffices for mid-range GPU rigs.

73.- Full model needs high-memory H200 for speed.

74.- Mac unified memory praised but bandwidth-limited.

75.- Over-100 % scores dubbed “over-fit dashboard.”

76.- Event lacked live Sora or video-generation demo.

77.- No computer-use capabilities shown.

78.- Panel predicts a silent Gemini drop from Google soon.

79.- U.S. regulation urgency withdrawn from Spanish bill.

80.- OpenAI brand equity seen as depleted post-event.

81.- Community consensus: model good, launch disaster.

Interviews by Plácido Doménech Espí & Guests - Knowledge Vault built by David Vivancos 2025