The SSN Breach: What Now?
18 August 2024 - 22 mins
In this episode, we cover the recent data breach of nearly 3B records, including a significant number of Social Security numbers. Joining us to discuss are security experts Joel de la Garza and Naftali Harris. Incredibly enough, Naftali and his team got their hands on the breached dataset and validated the nature of the claims. Listen in as we explore the who, what, when, where, why… but also how a breach of this magnitude happens and what we can do about it.
Resources:
Read 16 Steps to Securing Your Data (and Life)
Find Naftali on Twitter: https://x.com/naftaliharris
Check out Sentilink: https://www.sentilink.com/
Stay Updated:
Let us know what you think: https://r...
How Mintlify Is Rebuilding Documentation for Coding Agents
Mintlify is a documentation platform built by cofounders Han Wang and Hahnbee Lee to help teams create and maintain developer docs. In this episode, Andreessen Horowitz general partners Jennifer Li and Yoko Li speak with Han and Hahnbee about how coding agents are changing what “good docs” mean, shifting documentation from a human-only resource into infrastructure that powers AI tools, support agents, and internal knowledge workflows. They share Mintlify’s early journey, including eight pivots, the two-day prototype that landed their first customer, and the “do things that don’t scale” sales motion that helped them win early traction. The conversation also covers why docs go out of date, what “self-healing” documentation requires to actually work, and how serving fast-moving customers has shaped both their product priorities and their pace.
44 mins
23 January
Inferact: Building the Infrastructure That Runs Modern AI
Inferact is a new AI infrastructure company founded by the creators and core maintainers of vLLM. Its mission is to build a universal, open-source inference layer that makes large AI models faster, cheaper, and more reliable to run across any hardware, model architecture, or deployment environment. Together, they broke down how modern AI models are actually run in production, why “inference” has quietly become one of the hardest problems in AI infrastructure, and how the open-source project vLLM emerged to solve it. The conversation also looked at why the vLLM team started Inferact and their vision for a universal inference layer that can run any model, on any chip, efficiently.
43 mins
22 January
Martin Casado on the Demand Forces Behind AI
In this feed drop from The Six Five Pod, a16z General Partner Martin Casado discusses how AI is changing infrastructure, software, and enterprise purchasing. He explains why current constraints are driven less by technical limits and more by regulation, particularly around power, data centers, and compute expansion. The episode also covers how AI is affecting software development, lowering the barrier to coding without eliminating the need for experienced engineers, and how agent-driven tools may shift infrastructure decision-making away from humans.
Watch more from Six Five Media: https://www.youtube.com/@SixFiveMedia
27 mins
21 January
From Code Search to AI Agents: Inside Sourcegraph's Transformation with CTO Beyang Liu
Sourcegraph's CTO just revealed why 90% of his code now comes from agents, and why the Chinese models powering America's AI future should terrify Washington. While Silicon Valley obsesses over AGI apocalypse scenarios, Beyang Liu's team discovered something darker: every competitive open-source coding model they tested traces back to Chinese labs, and US companies have gone silent since releasing Llama 3. The regulatory fear that killed American open-source development isn't hypothetical anymore; it has already handed the infrastructure layer of the AI revolution to Beijing, one fine-tuned model at a time.
46 mins
20 January
The AI Opportunity That Goes Beyond Models
The a16z AI Apps team outlines how they are thinking about the AI application cycle and why they believe it represents the largest and fastest product shift in software to date. The conversation places AI in the context of prior platform waves, from PCs to cloud to mobile, and examines where adoption is already translating into real enterprise usage and revenue. They walk through three core investment themes: existing software categories becoming AI-native, new categories where software directly replaces labor, and applications built around proprietary data and closed-loop workflows. Using portfolio examples, the discussion shows how these models play out in practice and why defensibility, workflow ownership, and data moats matter more than novelty as AI applications scale.
1 hour 10 mins
19 January
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era
The Stanford PhD who built DSPy thought he was just creating better prompts, until he realized he'd accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is solving a more urgent problem: the gap between what you want AI to do and your ability to tell it, the absence of a real programming language for intent. He argues the entire field has been approaching this backwards, treating natural-language prompts as the interface when we actually need something between imperative code and pure English. The implications could determine whether AI systems remain unpredictable black boxes or become the reliable infrastructure layer everyone's betting on.
57 mins
16 January