
Hidden Layers: Decoded (Meta's Llama 3.2, OpenAI, Microsoft, Nobel Prize Reactions, GraphRAG & more)

In this episode of Hidden Layers, Ron Green is joined by KUNGFU.AI's Michael Wharton and Dr. Steve Kramer to discuss the latest news in AI. They cover OpenAI’s leadership turnover, the rise of smaller, more efficient AI models, and the growing importance of AI governance. Plus, they explore Meta's Llama 3.2, a new multimodal model, and share insights from recent AI conferences. The conversation concludes with a discussion of AI experts winning Nobel Prizes for their groundbreaking work in physics and chemistry.

Resources:

-The EPDS speaker was Dr. William Gilpin, here's the paper in question: https://arxiv.org/abs/2110.05266

-Llama 3.2 release: https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

-Paper: "Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI" https://arxiv.org/pdf/2409.14160

-"Intelligence at the Edge of Chaos" https://arxiv.org/abs/2410.0253

-"The Shift from Models to Compound AI Systems"  https://bair.berkeley.edu/blog/2024/02/18/compound-ai-systems/m  

-"Enhancing AI Retrieval Systems with Structured Data" https://gradientflow.substack.com/i/149376530/enhancing-ai-retrieval-systems-with-structured-data

-"The EU AI Act: A Pathway to AI Governance with Fiddler" https://www.fiddler.ai/blog/the-eu-ai-act-a-pathway-to-ai-governance-with-fiddler

-Kramer, S., and M. Marder. "Evolution of river networks." Physical Review Letters 68, no. 2 (1992): 205.

Ron Green (00:05)
Welcome to Hidden Layers, where we explore the people and technology behind artificial intelligence. I'm your host, Ron Green. We're back with another Decoded episode, where we cover the most important recent developments in artificial intelligence.

Ron Green (00:17)
Today, we're discussing news from Meta, OpenAI, Microsoft, and some surprising Nobel Prize awards. To help me cover everything, I'm joined by two of my amazing KUNGFU.AI colleagues, our chief scientist, Dr. Steve Kramer, and our VP of engineering, Michael Wharton. Michael, Steve, so glad to have you both here today.

Michael Wharton
Cool, yeah, great to be back.

Ron Green (00:44)
I'll kick off this Decoded episode with some more OpenAI drama. There's been a lot of stuff that has happened, really, in the last year. They've dissolved the safety team, the alignment team. All the leadership from that part of the organization has moved on; they’re at Anthropic and other places.

Ron Green (01:05)
With the departures this summer, all but two of the original co-founders have left the company. They're now down to just Sam Altman and Greg Brockman, and Brockman is himself on leave until the end of the year; we'll see if he's coming back. They're also converting to a for-profit entity and are poised to give Sam Altman a lot of equity. I want to tee up a case about their future, because in addition to the drama, a bunch of other things have happened recently that I think diminish their long-term likelihood of success.

Ron Green (01:44)
They're burning through cash. OpenAI is expected to lose between four and five billion this year. But that's not as big a deal as it may seem, because they've managed to raise another six billion at a valuation well over a hundred billion—possibly even 150 billion. They've also burned up a lot of goodwill in the AI community. They're viewed with increasing skepticism by most AI professionals.

Ron Green (02:29)
They burned up a bunch of hype around GPT-5 with the release of GPT-4o. So, one of the questions I have for you guys is: with all these major departures, including Ilya Sutskever, one of the co-founders and one of the most important people in artificial intelligence right now, would these people really be leaving if GPT-5 were on the horizon for this year or next? What do you think about OpenAI's future? Are you bullish or bearish?

Michael Wharton (03:17)
I'm bearish.

Dr. Steve Kramer
Yeah, same.

Dr. Steve Kramer (03:19)
All the leadership turnover seems like a major sign that things aren't going well. You don't have co-founders leaving unless something's seriously wrong. There were also reports of whistleblower concerns around product launches, with models being rushed out without optimal safety measures. Those are big concerns, on top of the cash burn and the massive data and power requirements for building these larger models.

Michael Wharton (04:03)
Sam did a good job of capturing investor FOMO at the time, but they're not really differentiated in terms of their IP. Their service isn't sticky; it’s just an API that anyone else could swap out. They're banking on future innovation, not what's already been achieved. It's risky because you're betting on something that doesn't exist yet.

Ron Green (04:56)
Right. So, I'll make the bear case first. They've lost a lot of differentiation. Zuckerberg and Meta are releasing powerful language models as open source, which has taken the wind out of OpenAI's sails. Also, models are getting smaller and easier to run, so there’s less differentiation. On the flip side, they’ve got great strategic partnerships, particularly with Microsoft. Their cloud access gives them unparalleled capacity, but that’s also becoming less important.

Michael Wharton (07:53)
Yeah, but even Microsoft is reportedly frustrated with OpenAI. They’re going head-to-head on customer deals, and Microsoft isn’t happy about that.

Ron Green (08:19)
Exactly. Michael, you were talking about a recent paper on the AI paradigm. Can you expand on that?

Michael Wharton (08:29)
Sure. There's an interesting paper called "Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI." One co-author was involved in the public release of scikit-learn, so they're well-known in the open-source community. The paper refutes two pervasive claims in the industry: one, that improved performance is simply a result of increased scale, and two, that all interesting problems require large-scale models. It's not always necessary to have massive models to solve narrow tasks. Often, that just wastes resources.

Ron Green (09:48)
Right, the inference costs would be huge.

Michael Wharton (09:52)
Exactly. And as people are discovering, it's not always economically viable to run these large models for every task. There’s a lot of pressure for smaller models. The paper also touches on the carbon footprint of these models, but historically, ethical arguments don’t hold much sway in AI development.

Ron Green (10:36)
True, and not every problem needs generative AI. I’d argue that most opportunities in business are in domain-specific AI—narrow AI that solves specific problems. It’s cheaper, more accurate, and more efficient than running a large language model.

Ron Green (12:19)
But I am curious to see how big these models will get before we hit diminishing returns.

Michael Wharton (12:54)
Yeah, people would pay billions to solve that. All right, Steve, what do you have for us?

Dr. Steve Kramer (13:06)
Well, last month I attended the AI Conference in San Francisco, hosted by Ben Lorica. One key takeaway was that no one is using large language models (LLMs) alone in production for enterprise AI; the people who have tried have failed badly. A major topic was GraphRAG, retrieval-augmented generation that integrates LLMs with knowledge graphs to provide a vetted source of knowledge.
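For listeners who want to see what that GraphRAG pattern looks like concretely, here is a minimal sketch: one-hop facts are pulled from a small networkx knowledge graph and handed to a language model as vetted context. The graph contents and the `call_llm` function are placeholders for illustration, not any specific production setup.

```python
import networkx as nx

def build_knowledge_graph() -> nx.DiGraph:
    """Toy knowledge graph: (subject) -[relation]-> (object) edges."""
    kg = nx.DiGraph()
    kg.add_edge("Llama 3.2", "Meta", relation="released_by")
    kg.add_edge("Llama 3.2", "vision encoder", relation="uses")
    return kg

def retrieve_facts(kg: nx.DiGraph, entity: str) -> list[str]:
    """Pull one-hop facts about an entity as plain-text triples."""
    return [
        f"{entity} {data['relation']} {obj}"
        for _, obj, data in kg.out_edges(entity, data=True)
    ]

def call_llm(prompt: str) -> str:
    """Placeholder for whatever completion API is actually in use."""
    raise NotImplementedError

def graph_rag_answer(kg: nx.DiGraph, entity: str, question: str) -> str:
    """Answer a question using only facts retrieved from the graph."""
    context = "\n".join(retrieve_facts(kg, entity))
    prompt = f"Answer using only these facts:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```

The point of the pattern is that the model only sees facts the graph already vouches for, which is what makes the retrieval source "vetted."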

Ron Green (14:32)
And compound AI systems?

Dr. Steve Kramer (14:57)
Exactly. Many companies talked about using agentic AI workflows that don’t even use generative models. They’re often using smaller, more traditional approaches like decision trees. There’s also growing work around AI governance and compliance, especially with the EU AI Act.
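As a small illustration of that idea, here is a sketch of a workflow step that routes requests with a decision tree instead of calling a generative model; it assumes scikit-learn, and the ticket-routing task and training examples are entirely made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Tiny made-up training set: ticket text -> queue label.
tickets = ["refund for duplicate charge", "app crashes on login", "how do I export data"]
queues = ["billing", "bug", "how-to"]

# A classic, cheap pipeline: TF-IDF features feeding a shallow decision tree.
router = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(max_depth=5, random_state=0))
router.fit(tickets, queues)

# Route a new ticket with no generative model in the loop.
print(router.predict(["duplicate charge on my card"]))
```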

Ron Green (16:43)
AI governance is finally a serious concern.

Michael Wharton (16:59)
Right. Speaking of that, let's talk about Llama 3.2. It's a multimodal model trained with a vision encoder that allows interleaving vision and text inputs. The cool thing is, it's open source, but it's not available in the EU due to regulatory concerns.

Ron Green (17:49)
The wild west here in the States!

Michael Wharton (18:08)
Yeah, totally. It’s competitive with other models in its class, and it's currently ranked number 12 in the multimodal arena.

Ron Green (19:10)
That’s impressive. Michael, how are you testing these models?

Michael Wharton (19:23)
I've been testing them with architecture documents, and Llama 3.2 was able to accurately answer questions about room dimensions and square footage from a blueprint. It's the first model I've used that nailed those tasks.
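For anyone curious what that kind of test looks like in code, here is a hedged sketch using the Hugging Face transformers integration for Llama 3.2 Vision (it requires a recent transformers release with Mllama support, and the checkpoint is gated behind Meta's license); the blueprint file path and the question are just illustrative.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

# Gated checkpoint; requires accepting Meta's license on Hugging Face.
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("floor_plan.png")  # hypothetical blueprint scan
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What are the dimensions of the master bedroom?"},
]}]

# Build the interleaved image + text prompt and generate an answer.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```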

Ron Green (20:01)
That’s really impressive. All right, Steve, what’s next?

Dr. Steve Kramer (21:57)
I wanted to talk about a fascinating paper called Intelligence at the Edge of Chaos. It explores how intelligence emerges by training models on cellular automata, from simple to chaotic systems. What’s amazing is that the best-performing models were those trained right at the edge of chaos—where there’s still some predictability, but enough complexity to force the model to learn deeper patterns.
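To make that setup concrete, here is a small elementary cellular automaton generator in Python. Rules like 110 sit near that edge of chaos, while rule 0 is trivial and rule 30 is fully chaotic; this is just an illustration of the kind of sequence data involved, not the paper's actual training code.

```python
import numpy as np

def step(state: np.ndarray, rule: int) -> np.ndarray:
    """Apply one update of an elementary CA rule to a 1-D binary state."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    neighborhood = (left << 2) | (state << 1) | right  # values 0..7
    rule_table = (rule >> np.arange(8)) & 1            # bit i = output for pattern i
    return rule_table[neighborhood]

def run(rule: int, width: int = 64, steps: int = 32) -> np.ndarray:
    """Evolve a single-seed initial condition and return the full history."""
    state = np.zeros(width, dtype=np.uint8)
    state[width // 2] = 1
    rows = [state]
    for _ in range(steps):
        state = step(state, rule)
        rows.append(state)
    return np.stack(rows)

history = run(rule=110)
print(history.shape)  # (33, 64) rows of 0/1 cells
```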

Ron Green (24:47)
That's fascinating. Steve, what are your thoughts?

Dr. Steve Kramer
It’s really cool to see these foundational ideas reemerging in AI. My PhD was on complex systems, and this research aligns with how intelligence operates at the edge of complexity.

Ron Green (29:11)
Absolutely. Michael, anything else?

Michael Wharton
That was really all I had.

Ron Green
Great. Steve, what else are you working on?

Dr. Steve Kramer (30:42)
Well, I’ve been working on a project to track AI research by using AI to analyze AI influencers' tweets. I gathered around 400,000 tweets, focusing on research papers, code repos, and YouTube videos that are shared within the AI community. I’m using traditional network analysis techniques and testing models to classify this data into key topics.
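As a rough sketch of that kind of analysis, here is what a mention-graph pass might look like with networkx: build a directed graph of who mentions whom and rank accounts by PageRank. The tweet schema and account names are invented for illustration and are not Steve's actual data; the paper, repo, and video classification piece isn't shown.

```python
import networkx as nx

# Invented tweet records; the real data schema is an assumption here.
tweets = [
    {"author": "ai_researcher", "mentions": ["lab_account"]},
    {"author": "lab_account", "mentions": ["ai_researcher"]},
    {"author": "newsbot", "mentions": ["ai_researcher", "lab_account"]},
]

# Build a directed mention graph: author -> mentioned account, weighted by count.
graph = nx.DiGraph()
for tweet in tweets:
    for mentioned in tweet["mentions"]:
        if graph.has_edge(tweet["author"], mentioned):
            graph[tweet["author"]][mentioned]["weight"] += 1
        else:
            graph.add_edge(tweet["author"], mentioned, weight=1)

# Rank accounts by influence within the mention network.
ranks = nx.pagerank(graph, weight="weight")
for account, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```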

Ron Green (37:00)
That sounds exciting. I've got one more thing to wrap up with. Last week, three AI experts won Nobel Prizes. Geoff Hinton and John Hopfield won the physics prize for their foundational work on neural networks, and Demis Hassabis shared the chemistry prize for work on protein folding using AI. It's amazing to see AI crossing into physics and chemistry, and I think it's a strong indicator of where science is heading: AI-assisted discovery.

Michael Wharton (39:30)
If you’re making the old guard frustrated, you’re probably pushing boundaries. It’s great to see this kind of recognition for AI.

Ron Green (41:49)
Thanks so much for joining me today. As always, it’s a pleasure.
