Model is the Product | Common Corpus, Mid-Training, Open Science
Bringing an exciting pod with Pierre-Carl Langlais (aka Alexander Doria). We discussed pre-training recipes, Common Corpus, mid-training, open science, and more.
Conversations with interesting people in AI. For technical deep dives and more content, check out Ground Zero.
Bringing an exciting pod with Kalomaze (20-year-old ML researcher at Prime Intellect). We discussed training, fine-tuning, RL, scaling LLMs, and some interesting TPOT lore.
Daniel Han talks about early career trajectory, founding Unsloth, GTM strategy, LLM pre-training, fine-tuning, scaling RL, YC support, and much more.
An in-person episode featuring Soham Parekh, detailing his journey, experiences, reflections, and how he balanced multiple Silicon Valley roles simultaneously.
Will Brown discusses career trajectory, GenAI Handbook, RL, self-improving agents, AI timeline, reward hacking, post-AGI landscape, and experimental research mindset.
Tokenbender shares insights on post-training, RL, reasoning, post-AGI landscape, experiments, and challenges in Indic AI.
GroundZero AI Talks EP01 with Raj Dabre covers machine learning research, trends, career in Japan, and the road ahead.
Last updated: Sep 16, 2025