April 9, 2025 | Hidden Layers
Anthropic Interpretability, GPT-4 Image Gen, Latent Reasoning, Synthetic Data & more | EP.39
In this episode of Hidden Layers, Ron Green talks with Dr. ZZ Si, Michael Wharton, and Reed Coke about recent AI developments. They cover Anthropic’s interpretability research on Claude 3.5, OpenAI’s GPT-4 image generation and its underlying architecture, and a new approach to latent reasoning from the Max Planck Institute. They also discuss synthetic data in light of NVIDIA’s acquisition of Gretel AI and reflect on the delayed rollout of Apple Intelligence. The conversation explores what these advances reveal about how AI models reason, behave, and can (or can’t) be controlled.