
AI and the Evolution of Open Source: Insights from NVIDIA's Dr. Katrina Riehl | EP.26

In this episode of Hidden Layers, Ron sits down with Dr. Katrina Riehl, Principal Technical Product Manager for CUDA and Python at NVIDIA, to explore the evolving role of open source in the age of AI. They discuss the foundational impact of open source technologies, the challenges of open source licensing in the AI boom, and the distinction between free-to-use models and true open source. Dr. Riehl shares her journey in the open source community, insights on AI's impact on open source, and her work at NVIDIA.

Ron Green
Welcome to Hidden Layers, where we explore the people and the technology behind artificial intelligence. I'm your host, Ron Green. I'm thrilled to have Dr. Katrina Riehl joining us today to discuss the future of open source in the age of AI. Open source technologies have become foundational to our modern world. Significant changes are coming. Generative AI is blurring the line between human and machine contributions, challenging our notions of authorship, ownership, and collaboration.

Ron Green
Dr. Riehl will help us understand these coming changes with her extensive experience in both artificial intelligence and open source. Katrina is Principal Technical Product Manager for CUDA and Python at NVIDIA. She has over two decades of experience in machine learning, scientific computing, data science, and visualization. She has led machine learning initiatives at the University of Texas Applied Research Laboratory, Anaconda, Apple, Expedia Group, Cloudflare, and Streamlit. Katrina is also an active member of the Python open source scientific software community and was most recently the president of NumFOCUS, a nonprofit organization that promotes open source practices in research, data, and scientific computing by supporting the growth and sustainability of open source software projects. Katrina, thanks for joining us.

Dr. Katrina Riehl
It's great to be here. Thank you for having me on.

Ron Green
So, as I mentioned in the intro, open source is now foundational to our world, and I'm actually old enough to remember when open source was not just a bit player in the technological ecosystem; it almost seemed to be a fantasy, this idea that software could be developed, freely distributed, and maintained, and compete against large commercial organizations with deep pockets. I've got some stats here to show how incredibly important open source is today. The Linux operating system is probably the most recognizable example of open source. It now runs 90% of the world's servers and 99% of the world's top 500 supercomputers. 70-75% of smartphones use a variant of the Linux operating system, and it's in pretty much every embedded device: TVs, routers, automotive systems, etc. A lesser-known example of open source, but probably the most used in the world, is SQLite. This is a database; it's in every major web browser (Google Chrome, Firefox, etc.), it's on almost every desktop, and it's on almost every mobile operating system. They estimate that there are over 1 trillion SQLite databases in active use. Lastly, you've got Wikipedia, which is, I think, the best example of open source content. It's consistently in the top 10 most visited sites in the world, and it receives tens of billions of page views every month. Open source is now just an incredibly important underpinning in everything we do. You've worked in both AI and open source for a long time. You were the president of NumFOCUS. Let's talk a little bit about that. How did you get involved in open source? What was your journey like?

Dr. Katrina Riehl
Sure. My journey started way back while I was working on my PhD. I was studying artificial intelligence; that's what I was working towards in the computer science department. I found it really useful to use the Python programming language to do my research. Because of the nature of Python being interpreted, that kind of goes hand in hand with open source, because you have to have readable code that can go through the interpreter. As a result, I wound up reading a lot of code at the time. The Python interpreter was actually written in C, so you could read through absolutely every single part of the interpreter, and the standard library is open. It taught me how to be a programmer, actually, I would say. What I thought was really interesting is that I then realized, oh my gosh, I can actually contribute back to this; this is not just a closed system. Especially within the scientific software community. My background even before that, my undergrad degree, is in molecular biology, and I worked in a population genetics lab. That's where my first need for a lot of number-crunching power came from. The numerical computing that I was seeing on the scientific side is completely applicable to artificial intelligence. That's what we're seeing, right? Getting involved in the NumPy and SciPy community was kind of how that all began. Like a lot of people that get into open source, I got very attached to it and tried to find a way to make a living doing open source afterwards, which is very hard, by the way. I just want to mention that. It can be very hard. I wound up at Enthought after I graduated. At the time, they were the ones housing NumPy and SciPy on their servers inside their office. A lot of people that were active in the community, either as users or tangential to it or core contributors, wound up coming through Enthought at the time. There are also, of course, the Enthought open source libraries that are available.
They had so many developers coming through from so many popular projects that it became kind of a hub for open source. I think I just naturally, even after being a developer, moved into community building, trying to be more on the side of facilitating those who are still actually writing code. Because, you know, I'm a little older; you get to a point where you're no longer writing the code yourself.

Ron Green
I recognize, I can relate to that.

Dr. Katrina Riehl
So moving into that role, I was very fortunate to be nominated to the Board of Directors of NumFOCUS. I was able to support the open source community in a very different way. After being on the board for three years, I was then moved into the president position, which was a wonderful experience with a wonderful organization. I recently stepped down. I am still on the advisory council, but I stepped down from the president position four months ago when I joined NVIDIA.

Ron Green
Right. Well, I mean, six years. The work you did there is really, really important to the Python ecosystem, and the Python open source ecosystem is tremendously important. As I hinted at in the intro, AI, and probably generative AI more than anything else, is really changing the open source landscape. We've got the ability now to have AI coding assistants. They can write some code for you. They can increasingly fix bugs, generate pull requests automatically, and just kind of automate the entire cycle. Not to mention the really, I think, legally challenging domain of generative AI and derivative works, right? This idea that, now that I've trained on all of the open source data for free, my model and my output can't be used by anybody else to do downstream training. It's a really unusual world. I don't think lawmakers could have anticipated the world we're dealing with right now. What are the biggest challenges you ran into while you were at NumFOCUS, as this AI wave took off and changed how open source needed to think about licensing?

Dr. Katrina Riehl
Absolutely. Well, I think that open source licensing has actually always been a little bit of a dark art, to tell you the truth. Because what we're really looking at is the cross-section of a lot of different legal pieces at play here, right? It wasn't really until 1974 that you could even consider computer code something that was copyrightable. Anything that is considered copyrightable is so because it has an author producing a new creative work. That's where that definition of what a work is starts coming into play. When you look at an open source license, you're really looking at the intersection of trademarks, patents, and copyright law all rolled into one, along with user agreements and how things are supposed to be distributed and all those other pieces. There are a lot of moving parts in an open source license in general, and we have a lot of licenses out there at this point. With the proliferation of all of these licenses, we still couldn't have seen the AI boom coming, where suddenly the software, the code that's being written, is not necessarily even the most important part of the system anymore. It's not just about being able to deliver that code so that someone can understand what is actually running on their computer. It's the data that's involved in training it. It's the weights that come out of the training. It's the parameters that are used around it that create the entire system. It's not just the actual code itself. Applying copyright law, which is really designed around written words, creates a different paradigm of, okay, so how do we look at this? What do we consider a work? What is considered a creative work? Where does that come into play? We get into the situation where we have copyright law trying to be applied to open training sets. We're getting it applied to the code itself. We're getting it applied to the weights that come out of the code.
We're getting it applied to anything downstream that's produced by the AI system. Every single step along the way, you have very different issues at play. I think that this is playing out in front of our very eyes right now.

Ron Green
None of us, I think, could have anticipated this, and we don't necessarily know the best path forward. I was going to bring this up a little bit later in the interview, but I'm going to jump into it now because the points you just touched on really tee it up. Just this week, Meta released Llama 3.1, the newest version of their Llama large language model, as most of our listeners know. This is a model that comes in three variants. The largest variant has over 400 billion parameters, and it has performance roughly on par with what we're seeing from OpenAI's GPT-4o and Claude 3.5 from Anthropic. Definitely within striking distance, if not beating those. What's really interesting is that they're calling this open source. The source code is not available; you can't actually get the source code. You can download the model weights, which contain all the knowledge of the model, and you can run these models at home. I'm running the 70-billion-parameter model at home regularly. But they didn't release the data it was trained on either. They released, I think, a fair amount about the training regimes, but obviously they're keeping a lot of the secrets internal. So what does it mean today to say that this model is open source when the code's not available?

Dr. Katrina Riehl
Right. So I think this is something we're seeing a lot more of in general: equating something that is free to use, free of cost, with something that is open source. Open source has a lot of other components around it besides just the code. There's a community involved. There's reproducibility. There is viability. There is the right to know what is running on your private property, the machine that you own. All of those pieces are in play there. When you look at something like Llama 3.1, which I am a huge fan of, by the way, I think Meta did a really great job of putting this out there, because the intention, I think, is good. I think the intention is to create a new paradigm where we are releasing weights, which, if you think about that in terms of copyright, what are we going to do, start copyrighting an array of numbers? That's crazy. But at the same time, by putting this model out there, it is helpful to people. They are able to use it. They are able to modify it in the sense that they're able to add additional training to it. They're able to move it into their field of expertise. They're able to use it, and they're even able to release it out into the public, depending on whether or not you follow all of the agreements that are in there. Some of them I understand: obviously a company would want to indemnify itself from any illegal activity, anything that has to do with harassment or arms trading or anything like that. But there are also provisions in there against, for example, operating heavy machinery using something like this, where you could very easily see that somebody might want to create a system like that. So I look at their license as being more of a user agreement; that's what I would call it. There are a lot of restrictions based on what's there.
So it's not just the definition of open code versus open weights versus open data. The restrictions that are in there are really about how it's going to be used. So I look at it as being open in a sense. But I think some people are taking issue with the idea of piggybacking off of the reputation of the open source community as being such a huge contribution to society. Like you mentioned, the Linux system or SQLite or all of these other pieces like that. Is Llama 3.1 that? I'm going to say no.

Ron Green
And I think part of the problem is that the whole term open source really came from the perspective that the source code, the open source code, was what made it available. You could actually see what the code was doing, and you could make modifications to it if you wanted to. With modern machine learning and AI, if you only have the weights and you don't have the source code, and you don't have the training data, not to mention the training regimes and all the tricks that went into fine-tuning it, I think it's not really appropriate to call it open source. I really like the term open weights a little bit more. There is one thing that Meta did that I'm really happy about. I say this fully acknowledging that I've found Meta and Facebook to be problematic in the past with some of their actions, but I really support this. They have modified their license agreement and included the ability to use the largest model, the 405-billion-parameter model, to generate training data for downstream models; you can use it to generate synthetic tokens or whatever it may be. And that is really, really important, because Meta themselves, even in their Llama 3 white paper, mention that they used Llama 2 extensively to train Llama 3, right? And this is something I think a lot of people miss about AI: it's got this recursive ability to use the previous models. We can stand on the shoulders of the previous models to improve the next generation, and it's one of the reasons we're seeing such fast gains. So I'm really, really happy about that modification. But I think you said before we started recording that one of the requirements in the agreement is that any downstream model has to have the name Llama in it, or something like that.

Dr. Katrina Riehl
Yeah. So there is an attribution piece involved there, right? You have to acknowledge the fact that you used the original model. And the way I read it, you have to use Llama in the title, right? So does that make it searchable, or somehow attribute it back to them? But what's interesting there is the concept of branding. Protecting the brand of Llama is actually something I think they're trying to do with that user agreement, and it's something other open source projects really struggle to maintain. Most of the open source projects out there are trademarked, so there are laws protecting the name, protecting the brand. So I do think it's very interesting, the idea of requiring that branding without any control over what people are doing, as long as it falls outside all of the dozens of things they said you're not allowed to use it for.

Ron Green
Right. Well, I'm glad you brought up trademarking, because I think that's a great segue into the legal side of this. I'll preface this by saying I'm not a lawyer.

Dr. Katrina Riehl
And neither am I.

Ron Green
And neither are you. Right. But we've worked in this domain for quite a while. So, trademarks are fundamentally about reducing confusion within the marketplace. A lot of people get that wrong; they think it's the opposite. No, it's really about avoiding consumer confusion. Having clear laws benefits all of us in society, because it makes it really understandable what's allowed and what's not allowed. And now we're in this world where it's really vague within open source what even constitutes open source and what's allowed. Let's talk a little bit about that area. What changes have you seen proposed within the open source licensing community around derivative works and other aspects of legal responsibility and model training, changes that are upsetting what we've traditionally thought of as open source?

Dr. Katrina Riehl
Yeah. And, oh boy, what a huge topic, right? I hardly even know where to start on this one. The first thing I'm going to go back and reference is the Open Source Initiative as a leader in trying to help make sense of this open source world that we're now entering. Even within the Open Source Initiative right now, they're still trying to come up with a definition of what open artificial intelligence is. For anyone who is interested in joining that conversation, by the way, the forum is open through the OSI website, and you can also see the most recent draft of their statement.

Ron Green
Oh, fantastic. See, I didn't know any of this. This is wonderful.

Dr. Katrina Riehl
Yeah, absolutely. So that is already there to break down the pieces of what exactly open artificial intelligence systems actually are. And this goes back to what I was talking about before: we're looking at an artificial intelligence system end to end as the collection of the source code plus the training data plus the evaluation data plus the parameters involved in training the model plus the weights. It's a lot of pieces, right? And each one of those things could be handled slightly differently. I've even seen some models that try to put different parts of the system under different licenses, which I think is really interesting. And this goes back to the fact that most open source licenses out there do have the concept of what is considered a work, what is considered source, what is considered a contributor; all of those things are defined. Even among the very popular open source licenses out there, which I would call basically MIT, BSD, GPL, and Apache 2.0, the most popular ones right now.

Ron Green
Totally agree.

Dr. Katrina Riehl
And when you look at those, each of them defines those things slightly differently. Some, even permissive licenses, can be very specific about what those entities are and how you look at them. So once again, I do want to preface this by saying that I am not a lawyer, but it does come down to this question of: okay, this makes a lot of sense when we're just talking about source code. But where are we when we start looking at training data, which is just as much a part of this? You see people talking about putting it all out into the public domain. Obviously we can't do that with PII, and there are proprietary data sources out there that people are selling that others don't have rights to. I mean, we've seen even the downfall of companies because people have said, hey, you can't use that proprietary data, even if it's obfuscated in some sort of way.

Ron Green
And they can't pull it out of the model.

Dr. Katrina Riehl
Yeah, you can't pluck it out. Once it's there, it's there. And then also, of course, there's the output. What kind of license does that have? If we have no control over what is being used to train this model, how do we ensure that it's not already under some other licensing agreement? And that's kind of alluding to the Copilot lawsuit that's out there right now, right?

Ron Green
Yeah, let's talk about that. So, Copilot, a product from Microsoft, was trained on, I don't even know how much source code, but most of it was GitHub repo-based. Most of the lawsuit was thrown out recently, just a few weeks ago, but there are some key components of it that are still in play. What are your thoughts on this?

Dr. Katrina Riehl
So I think this is really, really interesting, first off, because I do believe that people have the right to license their software how they see fit. And I think one of the biggest issues I have with using a training set like GitHub is that it really doesn't take into account the licenses on the original works. People protect the GPL with their lives, right? The GPL is something not to take lightly. And that's not to mention the attribution piece; there are several different licenses that require attribution. So how do you do that? In generative AI, we don't know how to solve that problem in general: how to give attribution back to original works of art, how to give attribution to original authors for different pieces of literature, all of those things. And it's the same thing here. You go back and look at some of the answers Copilot gives, and, depending on the training, let's be real: AI systems are not actually thinking. It's a giant search problem. They're using statistical methods to determine what the next token is going to be in the output, which puts it in a huge, huge search space. It's not creating anything, so it has to be based on something it's seen before. And if the only example it's seen is something that's under a GPL license or that requires attribution, that's the only answer it's going to give you, because that's the only thing it's ever seen. So it is possible to get back the original work from a generative AI model. That, in my opinion, means it's still under the same license. And in fact, I also think that in a lot of cases those derivative works are still under the original license.
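
To make the "statistical next-token" point concrete, here is a toy sketch of softmax sampling over an invented four-word vocabulary. The vocabulary and scores are made up for illustration; real models score tens of thousands of tokens with a neural network.

```python
import math
import random

# Invented vocabulary and raw scores (logits) for illustration only.
vocab = ["def", "return", "import", "class"]
scores = [2.0, 1.0, 0.5, 0.1]

def softmax(xs):
    # Turn arbitrary scores into a probability distribution that sums to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(scores)

# Sample the next token in proportion to its probability, as a language
# model's decoder does at each step of generation.
random.seed(0)
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

The key point of the sketch: whichever token comes out, it can only ever be something the model has assigned probability to, which is why outputs are bounded by what the training data contained.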

Ron Green
Yeah, I couldn't help but think of a couple of funny examples while you were talking through that. Do you remember last year, a sort of prompt poisoning attack was found where, if you asked, I think it was GPT-3.5, to repeat the word "poem" forever, after about 600 iterations it just started spilling out verbatim some of the training data: emails, correct email addresses, social security numbers, everything. So all of that is in there. And then the other thing I remember, it was one of the text-to-image diffusion generators that got in a lot of hot water over attribution. It was spitting out images, and you could see in the corner it had almost perfectly reproduced Getty Images watermarks. You know, it's pretty clear where that data came from.

Dr. Katrina Riehl
Exactly.

Ron Green
I mean, what do you think? What do you think the right answer here is? Is it to break down different licensing rules or scopes for the different parts, the source code versus the weights versus the training data? Or should it all operate under one license, because that just makes it simpler and easier for adoption?

Dr. Katrina Riehl
Sure. So I can say, first off, that I don't know what the right answer is. I'm very much watching with great interest what is happening right now and the debates that are happening between people, so my mind is not closed on this issue by any stretch of the imagination. But my own personal opinion is that I definitely fall much more on the everything-should-be-open end of the spectrum of where things should be out in the world. So putting things out in the public domain with sort of an "I don't guarantee this thing works, use at your own risk, have fun" kind of attitude is one that personally resonates with me much more.

Ron Green
I agree. I totally agree. I think one of the things that we were really lucky about as a society is that all of the modern progress that has been made in AI, this revolution we're in, was principally driven by research scientists. And they just had this open-source ethos, this idea of doing research, publishing the results, and giving back to the world. And I think that's part of the reason we are still seeing open source as a viable option here: if you want to hire these people, and you need these people to build state-of-the-art AI systems, they're going to want and demand that. So you're at NVIDIA now. You are a Principal Technical Product Manager for CUDA and Python at NVIDIA. And Python, I think, might have one of the most vibrant open-source communities in the world, if not the most vibrant. What is NVIDIA doing these days at the intersection of Python and open source?

Dr. Katrina Riehl
A lot, is the simple answer. But I will say, coming all the way back to the Python open source community: of course, it's baked into the entire language, right? It really is. And that mentality has been brought forward in pretty much everything. I love quoting this: a couple of years ago at PyCon, Guido even said that Python is the people's programming language. And I think that's very true, in fact. It's baked into the culture of the language itself. So within NVIDIA, what's been going on? CUDA has been around for almost 20 years, right? Next year will be 20 years since the release of CUDA.

Ron Green
Real fast, do you want to describe what CUDA is, just in case somebody's not familiar with it?

Dr. Katrina Riehl
Oh, sure. So going back to the graphical processing unit on your computer: those were primarily used for graphics and video games, and gamers alike all have amazing GPU cards on their computers. Around 2005 or 2006, people started using them for general computing. The GPU is an incredibly powerful processor, actually, but it's not really all that great at the things the CPU is good for; it's a different way of looking at it. What it provides is the massive parallelization that we see on the AI side. So for scientific computing and artificial intelligence applications, it's perfectly suited, because you're just performing the same operation over and over and over again on huge amounts of data, without having to do a lot of branching logic, operating system calls, all the things your CPU is perfectly designed to handle. So CUDA is the low-level library that allows you to do general computing on your GPU, and it has been the basis for a lot of the accelerated computing libraries put out by NVIDIA. Along with that, it's primarily C++; like I said, originally I think it was written in C, and now there's a very, very strong C++ community around it. But with the advent of all of these AI systems and all of this data science that's going on, Python seems to be the language that people really gravitated towards. I know it's the one I gravitated towards while I was working on my PhD. So watching this proliferation happen, NVIDIA has to support the Python community.
If we are going to continue to see the gains we're seeing in generative AI, in deep learning, in machine learning in general, we're going to have to provide that low-level support in order to keep up with the pace and the amount of data we're looking at. We can't have a mismatch where we have these huge storage systems, a great amount of bandwidth, and a huge amount of compute available, but then have to ask, okay, well, can I have some compute, please? That's not really going to work out. What we're seeing in the Python community is that a lot of the interface into that low-level library really came from wrapped code, wrapped C++ code. And I'm just going to go ahead and say it's maybe not the most Pythonic way of working with those libraries, and therefore maybe not the friendliest way for library developers, especially open source library developers, to interact with it. Especially in the sense that this does run on NVIDIA hardware, so it is not completely open; it's not an open standard, and there is a certain amount of closed secret sauce involved here. But at the same time, the goal is to create Python interfaces, Python APIs, and libraries that support the Python ethos of developing software. It's just a mind shift, right? I think NVIDIA has always had a finger on the pulse of Python, but really only within the last few years have they made a concerted effort to insert themselves and be a player within the actual community itself.
And you can see that through the people they're bringing on board: powerful Python players from all over the place are coming into NVIDIA and helping to bridge the gap between what has traditionally been available and what really serves the needs of the Python community, especially the scientific computing Python community.
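
The "same operation over huge amounts of data" idea above can be sketched on the CPU side with plain NumPy. The array size and function name here are illustrative, not from the episode; on a GPU, the same uniform, branch-free arithmetic is what maps onto thousands of parallel threads.

```python
import numpy as np

def scale_and_shift(x, a, b):
    # One vectorized expression, applied uniformly to every element with
    # no per-element branching: exactly the shape of workload that
    # parallel hardware accelerates well.
    return a * x + b

x = np.arange(1_000_000, dtype=np.float64)
y = scale_and_shift(x, 2.0, 1.0)
```

Contrast this with a Python for-loop doing operating-system calls or branching per element, which is the CPU-friendly pattern Dr. Riehl describes.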

Ron Green
Right. CUDA is so important because it's not just used for building machine learning or AI systems. It's used incredibly widely within scientific computing more broadly, whether you're simulating physics or running astronomical simulations, galaxies merging and things like that. You mentioned some new open source initiatives at NVIDIA. Can you talk about that a little bit?

Dr. Katrina Riehl
Sure. So we do have a ton of open source libraries that are available to people. CuPy is not owned by NVIDIA, but it is something that we are very much a part of. Numba is also not something that we own, but we have a huge amount of support for Numba. Then there's cuDF, cuGraph, and all these others; how many "cu"s can we have out there, right? All of these different pieces are available to people, and especially under the RAPIDS umbrella you see a lot of those libraries. And I don't think users are as aware that these actually exist, that it can be incredibly easy to accelerate your scientific computing applications with drop-in replacements, in fact.

Ron Green
Yeah, you can get orders-of-magnitude gains in compute, sometimes by just adding a Python decorator.
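
As a rough sketch of the decorator pattern Ron is describing, here is Numba's `@njit` on an invented function; the try/except fallback is an assumption added so the snippet runs even where Numba isn't installed, and the speedup only appears when it is.

```python
import numpy as np

try:
    # If Numba is available, @njit compiles this function to machine code.
    from numba import njit
except ImportError:
    # Fallback: a no-op decorator, so the same code runs (slowly) without Numba.
    def njit(func):
        return func

@njit
def mean_of_squares(x):
    # The explicit loop is what the JIT compiler accelerates.
    total = 0.0
    for v in x:
        total += v * v
    return total / x.size

x = np.arange(10, dtype=np.float64)
result = mean_of_squares(x)  # identical call either way; only the speed changes
```

The appeal is exactly what the conversation says: the source stays ordinary Python, and the decorator is the only change.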

Dr. Katrina Riehl
Oh yeah, it's just that simple. CuPy is the one I keep harping on lately: it is a full-on drop-in replacement for NumPy, and in fact it even has a dispatch system that allows it to convert between NumPy arrays and CuPy arrays on the fly. It can detect the type and dispatch to the correct function as a result. So in some cases you truly don't have to change anything. I mean, that's the goal, right? We're not going to reinvent what is Pythonic. We're not going to reinvent the way people do scientific computing. We just want to make it as easy as possible to accelerate all of your applications without having to get into the low-level mess of dealing with the GPU. Because believe me, under the hood, that is a mess. Let's leave that to the people who have been working on CUDA for the last 20 years.
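
A minimal sketch of the drop-in idea, using the common `xp` aliasing convention from the scientific Python ecosystem. The `normalize` helper is invented for illustration, and the snippet falls back to NumPy when CuPy isn't available (CuPy itself requires an NVIDIA GPU).

```python
# CuPy mirrors the NumPy API, so switching the import is often the only
# change needed. Fall back to NumPy when no CuPy/GPU is present.
try:
    import cupy as xp
except ImportError:
    import numpy as xp

def normalize(a):
    # Identical code path for both backends: the array module is hidden
    # behind the "xp" alias, so the function never names NumPy or CuPy.
    return (a - xp.mean(a)) / xp.std(a)

a = xp.asarray([1.0, 2.0, 3.0, 4.0])
z = normalize(a)
```

This is the "you truly don't have to change anything" property in miniature: the numerical code is backend-agnostic, and acceleration comes from which module happens to be bound to `xp`.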

Ron Green
I know a lot of stuff is going on at NVIDIA right now in open source. What else have you got?

Dr. Katrina Riehl
Something else I'm really excited about: at the SciPy conference that just happened a couple of weeks ago, we released an NVIDIA-curated repository of open education materials. The idea here is that, under an open source license, a Creative Commons license, you have access to up-to-date user guides, tutorials, sample code, things like that, to make sure people have the most current information and to reduce that barrier to entry. So you get the best practices, how to fix your performance issues, all of those things from the experts directly. And then I also wanted to mention that we are starting monthly developer office hours next month, so you can have direct access to CUDA developers to get answers to your questions, which I think is an amazing opportunity. I had this opportunity when I first started with CUDA back in 2007: I went to SIGGRAPH and stalked everyone at NVIDIA until somebody sat down and explained this CUDA mess to me, and I left with a book and an idea of how to move forward with CUDA. The idea here is to virtually connect people with the experts and build some community around all of these exciting things happening at NVIDIA right now.

Ron Green
Well, this is just so much fun. I share your thought that I'm not sure exactly what the right path is in open source around AI, but there's no question in my mind that we're going to be pivoting quite frequently as these new capabilities that just weren't imaginable 10 years ago keep popping up. But thank you so much for joining us today, Katrina.

Dr. Katrina Riehl
No, thank you. It's really a pleasure to be here and I appreciate it.
