In this episode of Hidden Layers, Ron Green and Steve Meier dive into the intricate world of artificial intelligence, reflecting on their experiences over the past six years at KUNGFU.AI. They discuss the challenges and successes they've encountered, including the importance of machine learning engineers, the unexpected rise of generative AI, and the critical role of business alignment in AI projects. They also touch on the evolving landscape of AI governance and the significance of having a Chief Data and AI Officer (CDAO). Watch and listen as they share valuable insights and lessons learned in the ever-evolving field of AI.
Ron Green:
Welcome to Hidden Layers, where we explore the people and the technology behind artificial intelligence. I'm your host, Ron Green. Joining me today is my co-founder and head of innovation, Steve Meier. When we started KUNGFU.AI over six years ago, there were lots of things we didn't anticipate. Some of those things we got right, and some we got wrong. Steve and I are going to talk about those today in hopes that sharing our experiences will help others navigate the exciting but fast-moving landscape of artificial intelligence. Hey, Steve. Thanks for joining me.
Steve Meier:
I'm excited to be here.
Ron Green:
All right, so let's just jump right in. We'll talk about some things we got right and some things we got wrong. Let's start on the positive. What's something that you think we got right six years ago?
Steve Meier:
Starting on a high note, I love that. What did we get right? Well, I wouldn't say we totally got this right, but there's this thing called the last-mile AI problem that we saw in the early days, and we're still seeing it today. I don't know if we totally stuck the landing, but we made some good decisions early on. The last-mile AI problem is the challenge of getting from a POC or a demo to something usable in production. I'm hearing from folks even today that they're stuck with hundreds, sometimes thousands, of POCs that never reach production, which is just wild. There are two critical ingredients to solving it: one I think we got right, and one we didn't totally anticipate. The one I feel like we did a great job on, kudos to you, sir, is the idea of the machine learning engineer. Very early on, we had a co-founder who was a data scientist, but when you came aboard, you made a very good decision to focus us on machine learning engineers, which is a different skill set from data science altogether. I didn't totally appreciate the difference then, but I certainly do now. Taking a machine learning project beyond a demo and getting that algorithm into production in a way that scales is really, really hard, and sometimes it's the hardest part of the problem. What inspired that decision on your end, and how do you frame the difference between a data scientist and a machine learning engineer?
Ron Green:
I think it's a big world, and there are a lot of data scientists out there who would probably fit the description I'm about to give. The distinction, the way I think about it, is that the machine learning engineer title includes "engineer" because they focus on building solutions that go into production. It's less about analysis and understanding and more about a production focus. There's an old joke in software, the 90-90 rule: the first 90 percent of the code takes 90 percent of the development time, and the remaining 10 percent takes the other 90 percent of the development time. The reason is that there's a giant gap between a working prototype and getting something into production. All the weird corner cases have to be ironed out, every little edge case, and that can take so much time. You're often in a situation where you're constantly surprised: deliverable requirements are changing, the market is changing, and the technology stack is changing. One of the things we knew from the get-go was that we wanted to hire people to build production AI systems who cared about good professional software development practices as much as they cared about building AI systems. Good software engineering practices, like design, reliability, and scalability, are critical, and well-designed software can accommodate change more easily. So, if you're building production-grade AI systems, having people on the team who care about software is a critical component.
Steve Meier:
Both roles are so important. The data scientist has such an important role, and the machine learning engineer has such an important role. The market we serve knows the data scientist well from years, even decades, of experience. The machine learning engineer concept is just so new. But I will say that's maybe half or 60 percent of the puzzle. Another missing ingredient we didn't totally appreciate is business requirement alignment. Let's say, theoretically, we have 500 POCs pending production. I'd argue that 300 of them should never have been started because they weren't solving a real business problem that was urgent or had a measurable return. The business owner wouldn't accept it, and users wouldn't embrace it even if it made it to production. A lot of projects stall out because the business wasn't involved. Having business alignment and an executive sponsor or a product manager involved from the start to think about market dynamics, success metrics, and strategic alignment is crucial. That directs the engineering effort and helps determine where to stop.
Ron Green:
Your comment on having product management involved is really huge, because if you build something in isolation and then take it to the product management team or the DevOps teams, you blindside them. It's not like they can just fold it into the existing infrastructure or roadmap. You've got to have buy-in from the get-go, while you're still thinking about what you'll be building and prototyping, not at the end. OK, let's segue to something we got wrong or didn't anticipate. I would put the explosion in generative AI there. What are your thoughts on that?
Steve Meier:
Oh my gosh, how could we have? When "Attention Is All You Need" came out almost seven years ago, it marked a significant moment. I remember one of our interns gave us a paper read on it, and it seemed like a new and interesting way to do language tasks. Fast forward a year and a half, and Nvidia released a GAN model for photo-realistic image generation. I remember we did a demo at a conference where we could change someone's appearance in real time, and people loved it. But we were thinking, how do we use this? Maybe for data augmentation or photo editing, but we had no idea it would be as profound as it is today.
Ron Green:
Right, and language models were not even close to production-ready. Once we saw language models that could produce fairly impressive results, we realized there were real questions about how to use them. You brought up synthesizing training data, which was a good approach. I had no idea we would see this monumental leap through scale. Scott Aaronson, a computer science professor, has said that the behavior we're seeing in large language models, where scaling them up leads to incredible abilities, is the biggest scientific discovery of his lifetime. I agree because I would not have imagined we'd get this behavior just by scaling up a language model. Language models, to be clear, are simple systems trained to predict the next word. It's a proof point of emergent abilities, and we're not even at the end yet. We don't know when we'll see diminishing returns as we scale up these models. I was completely blindsided by that.
Steve Meier:
And then in November 2022, ChatGPT came out and marked the start of the age of AI. Now, we're seeing AI budgets for the first time. It set the industry off in ways we didn't anticipate. Thankfully it did, because it's bringing attention to other forms of AI and the value of narrow AI and predictive algorithms. It's been very positive. We don't know where it's going next, but I'm excited to take this ride. It's made AI more accessible, obvious, and approachable, allowing us to work on other interesting things. Let me ask you: a lot of companies are thinking about getting into AI right now. Do you think most of them see ChatGPT as the full extent of their options, or are people realizing there's a broader class of AI out there?
Ron Green:
That's an interesting question. Most see ChatGPT, think it's great, and ask what they can do with artificial intelligence. Traditional companies, especially those late to cloud or digital transformation, see generative AI as an entry point because it's so accessible. It's a fabulous idea. Even if they're not ready for sophisticated predictive algorithms, they can use it for enterprise search or document processing, which can be incredibly valuable. In document processing in particular, tasks that used to take months now take seconds with these models. It's wild. So you see a mix, but the value is in bringing attention to wider capabilities and providing an entry point for organizations without robust data.
Steve Meier:
Early on, we were very focused on engineering and strategy services, but strategy meant something different to us then. We saw strategy as workshops and the idea of a center of excellence, a static department next to IT. While not necessarily incorrect, it's just one of many correct formulations. Instead, we're seeing emphasis on other areas when starting an AI organization. One key area is the Chief Data and AI Officer (CDAO), inspired by the Biden administration's executive order mandating a similar role for federal agencies. The CDAO is the figurehead of your AI operation, responsible for strategy, program performance, risk mitigation, and understanding AI's broad capabilities. They're accountable for productivity enhancements and revenue opportunities. It's a wide mandate: they build out the entire program and own responsibility for it. From your perspective, what makes a good CDAO, and what do you see as important?
Ron Green:
If you're overseeing AI development and integration within a company, you need a few things. Ideally, you want someone with real-world production AI deployment experience, because there's no substitute for having been there and done that. That's a tall order, because there aren't many people with meaningful experience in this area. Another aspect is deep technical knowledge. You want someone who truly understands the technology, not just at a surface level, because misconceptions about AI are common. It's software, but it's more complicated, because most of the attention-getting AI systems are deep learning-based and trained on data, which introduces potential errors, biases, ethical issues, and governance questions around privacy. Someone with experience and deep technical knowledge is invaluable for getting systems into production and ensuring they don't perpetuate bias or create ethical problems.
Steve Meier:
The CDAO role is crucial. We're seeing a focus on this figurehead and on the idea of governance. Governance is about policies: how we're using AI and how we're not. Compliance with regulations like the EU AI Act is part of this. Beyond policy, it's important to have tools to move models from idea to experiment to production with end-to-end oversight. Ensuring models behave appropriately in production, and having mitigation techniques when they don't, is key. That continuous feedback loop improves model capability. It's a process, an operation, and oversight to manage risk and ensure return on value. This approach is becoming more widely accepted across the industry.
Ron Green:
Totally agree. Alright, I'm going to stop us there. We have many more topics to cover in the next episode, but that was fantastic. Hopefully, people listening can learn from our mistakes.
Steve Meier:
I hope so. Thanks, Ron.