In this episode, Torsten Grabs, Senior Director of Product Management at Snowflake, does a deep dive into the pros and cons of generative AI and talks about Snowflake's approach to data and AI, how to choose the right vendor, and much more.
---------
How you approach data will define what’s possible for your organization. Data engineers, data scientists, application developers, and a host of other data professionals who depend on the Snowflake Data Cloud continue to thrive thanks to a decade of technology breakthroughs. But that journey is only the beginning.
Attend Snowflake Summit 2023 in Las Vegas June 26-29 to learn how to access, build, and monetize data, tools, models, and applications in ways that were previously unimaginable. Enable seamless alignment and collaboration across these crucial functions in the Data Cloud to transform nearly every aspect of your organization.
Producer: Hello and welcome to the Data Cloud Podcast. Today's episode features an interview with Torsten Grabs, Senior Director of Product Management at Snowflake. Before joining Snowflake six years ago, Torsten held management positions at Microsoft and Amazon. Torsten was also a lecturer on cloud databases at the University of Washington.
In this episode, Torsten does a deep dive into the pros and cons of generative AI and talks about Snowflake's approach to data and AI, how to choose the right vendor, and so much more. So please enjoy this interview between Torsten Grabs and your host, Steve Hamm. How you approach data will define what's possible for your organization.
Data engineers, data scientists, application developers, and a host of other data professionals who depend on the Snowflake Data Cloud continue to thrive thanks to a decade of technology breakthroughs, but that journey is only the beginning. Attend Snowflake Summit 2023 in Las Vegas, June 26th to 29th, to learn how to access, build, and monetize data, tools, models, and applications in ways that were previously unimaginable.
Enable seamless alignment and collaboration across these crucial functions in the Data Cloud to transform nearly every aspect of your organization. Learn more and register at www.snowflake.com/summit.
Steve Hamm: Torsten, it's great to have you on the podcast.
Torsten Grabs: Yeah, thrilled to be here. Thank you.
Steve Hamm: Now, since late last year, when OpenAI released GPT-3, its large language model, and ChatGPT, its AI chat agent for general use, large language models and generative AI have been all the rage, not just in the tech industry but throughout society. What is going on here, and why is this approach to AI suddenly so popular?
Torsten Grabs: Yeah, I fully agree with the observations that you made, and I think it's probably a unique moment that we're experiencing right now in the industry. With generative AI and large language model technology, we're actually at a disruptive moment for tech and for businesses.
The reason is that the way people interact with computers can suddenly change in dramatic ways and become much easier, much better, much more approachable for a lot of users. Previously, human-computer interaction was pretty prescriptive about how humans had to interact with computers to get the results they wanted.
Now, with generative AI and large language models, that is changing dramatically. You can engage in a much more conversational experience with a computer, with a system, and still get those results. And it's that conversational nature that is much more approachable for a lot of users, and that also allows a lot of different roles to do meaningful work with computers, where previously they weren't necessarily able to accomplish that without help from someone with more depth on the technology side. I think it's that opportunity that's really resonating with everyone right now.
Steve Hamm: Well, people have been talking to Siri for years, so what's so different about this?
Torsten Grabs: Yeah, it's a good question. The main difference is the breadth of knowledge behind the large language models, particularly the foundational ones. To a degree, they encompass the knowledge about the world that we expect a system to have. And they're also able to reason with you over the course of a number of sentences, over a conversation where one step logically follows the other, and that context remains present for the chatbot, or for the system you're interacting with, throughout the conversation. That hasn't necessarily been the case with systems like Siri in the past.
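To make that context-retention point concrete, here is a minimal sketch of a multi-turn exchange, assuming the OpenAI Python client as one example of a hosted LLM API; the model name is illustrative:

```python
# A minimal sketch of the difference Torsten describes: the model sees
# the whole conversation so far, so follow-up questions stay in context.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=history  # full history every turn
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Which planet has the most moons?")
ask("How many does it have?")  # "it" resolves because context is retained
```

Because the full history is resent on every turn, the follow-up's "it" resolves correctly, which is exactly what Siri-era assistants lacked.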
Steve Hamm: You don't ask Siri follow-up questions that are informed by the previous question.
Torsten Grabs: That's right.
Steve Hamm: Very interesting. Now, computer scientists and data scientists have been using a variety of AI techniques for decades, and machine learning models have become an essential element of data management and data analytics. What's the difference between the more conventional machine learning and modeling and the large language models that people are talking about and using today?
Torsten Grabs: Yeah, I think the main difference, again, is in how people are using these technologies. For the more traditional machine learning, as an enterprise, as an organization, you would have a dedicated team that would help you create machine learning models over your own proprietary data, and then tune and optimize them for what your organization needs. A lot of time was spent on that, and resources in those teams were typically scarce, so you had bandwidth concerns: given the resources you had, you were only able to answer a certain subset of the questions you would have liked machine learning technology to answer.
Now that equation is shifting. The model creation part, to a large extent, is going to the vendors, to the providers of the foundational large language models. It moves from the organization that previously did the data science and was in the business of creating models to an external entity, if you want. The organization itself just consumes the result of that, the foundational model, and then deals with prompt engineering to inject domain-specific or organizational knowledge into it, or occupies itself with fine-tuning that large language model. So the responsibilities are shifting here across organizational boundaries.
And as part of that, over time, I expect that the role definitions and job descriptions for data science teams in organizations will also change. They will shift away from creating machine learning models from scratch toward consuming existing foundational models and then applying and optimizing them for the specific purposes of the organization.
Steve Hamm: Yeah, so they'll still be doing some machine learning modeling, but it'll be a level up from the large language models provided by others. Is that the idea?
Torsten Grabs: That's right. You're essentially consuming some sort of pre-built, pre-baked model, either from open source or from a commercial vendor. A huge amount of compute resources goes into generating these foundational models, and large amounts of data go into them as well. So that compute is spent by the external entity creating the model; it's no longer spent by the organization. The organization consumes the result from the external entity and then applies a typically much smaller amount of compute, but maybe still a large amount of data, to fine-tune that machine learning model and optimize it for the organization.
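As a rough illustration of that division of labor, here is a minimal fine-tuning sketch, assuming the Hugging Face transformers and datasets libraries; the base model name and training file path are placeholders, and real fine-tuning would need careful evaluation:

```python
# A minimal sketch of fine-tuning a pre-built foundation model on
# proprietary text: the expensive pretraining was done by the vendor,
# and the organization applies a much smaller amount of compute here.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

base = "gpt2"  # stand-in for whatever foundation model you license
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Proprietary documents, one example per line (hypothetical path).
data = load_dataset("text", data_files={"train": "internal_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                    # far less compute than pretraining
trainer.save_model("tuned-model")  # the organization consumes this result
```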
Steve Hamm: Yeah, that's interesting. So there's a lot of insight that comes out of this, but there's also efficiency, both for the organizations and for society at large. It's almost like a public utility, these large language models, right?
Torsten Grabs: In a way, yeah, almost. There's a huge opportunity here for everyone to become much more effective, much more proficient when interacting with technology, with computers. It becomes much more approachable if you can write in natural language and engage in a conversation through natural language, compared to the ways data science has traditionally been done.
Steve Hamm: Now, a lot of the attention, and what's really captured the popular imagination recently, is ChatGPT. You can pose a question to it in simple language, or ask it to write you an essay; there's concern about high school and college kids doing this. You can ask DALL-E to create digital illustrations. It's a lot of fun and interesting, but also kind of troublesome in some ways. But these seem more like parlor tricks, fun things. How do those activities and capabilities fit in with the hardcore data analytics practiced by businesses and government?
Torsten Grabs: Yeah, so the first observation I'll make is that systems like ChatGPT have the opportunity to make data practitioners much more proficient. They are a means to accelerate the day-to-day work that data practitioners put in, for instance, when writing code. If I'm a data engineer writing a data pipeline in SQL, there's a lot of boilerplate code that I have to write once in a while. By leaning on generative AI, something like ChatGPT or other systems, I can get a lot of help writing that code. At least a first draft can be auto-generated, and then I can come in to make sure the auto-generated code is actually sound and make adjustments to fit the specific needs of my use case or my organization. That has the potential to dramatically reduce the amount of time people spend writing a data pipeline from scratch, by 80 or 90 percent, I think, if you look at systems like Copilot, for example.
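The workflow Torsten describes might look like the following sketch, again assuming the OpenAI Python client; the point is that the model produces a first draft and the engineer reviews it before anything executes:

```python
# A minimal sketch of LLM-assisted pipeline authoring: the model drafts
# the boilerplate SQL, and a data engineer reviews and edits the draft
# before running it anywhere.
from openai import OpenAI

client = OpenAI()

task = ("Write a SQL statement that loads new rows from raw.orders "
        "into analytics.orders, deduplicating on order_id.")

draft = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": task}],
).choices[0].message.content

print(draft)  # first draft only: check correctness, SQL dialect, and
              # naming conventions before executing it anywhere
```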
Steve Hamm: Right. Up to now, my understanding is that large language models are made by training systems on massive amounts of data, the stuff available on the internet, not private data. So is proprietary data, or other kinds of data not out in the wild of the internet, already being used in large language models? Will it be in the future, and if so, how do you do that?
Torsten Grabs: Yeah, I think that is the large untapped opportunity for large language models in the enterprise. As you correctly pointed out, these models are trained over publicly available data. That has made them reasonably powerful, but it doesn't give you the best performance, the best quality results, or the most cost-effectiveness for a given organization. For that, you would like to either fine-tune or train your large language model over your own proprietary data, and make sure the data governance and privacy regulations you have are actually followed and honored by the system.
And that's a big challenge right now, because a lot of these systems are hosted as public endpoints on the cloud. An organization with sensitive data first needs to understand the security, privacy, and governance risks it takes on when it sends sensitive data over the internet into an external system. So this is something where organizations today should pay a lot of attention to the security and privacy promises these systems make. And for the industry, I think there is a lot of work left to bring the compute from those large language models closer to where the data sits, and to make sure the data you send into the large language models does not leave the security boundary of the organization.
Steve Hamm: Yeah, that's really interesting. I didn't realize that you actually had to send your data out into that public zone. I thought the intermingling of these things happened at a more secure spot. Where is it happening now, and where do you think it'll be?
Torsten Grabs: This is constantly changing. The earlier versions of large language models that were cloud-hosted literally had provisions in the terms and conditions saying that everything you send into the service will be used to further train and optimize the large language model. That obviously raises concerns with an enterprise: hey, is there a risk that the sensitive data I might have sent into the system is used for training purposes, and then, when someone else is using that machine learning model, that sensitive data could get disclosed to someone else? That creates a lot of hesitation, and enterprises have, rightfully so, pushed back on those terms and conditions. Since then, we have already seen the industry move toward stricter guarantees around privacy, security, and data governance. But in the limit, you will only get the most security- and privacy-sensitive organizations to sign off on using large language models over their proprietary data when you, as a vendor, can actually demonstrate that the organization's data is not crossing the security perimeter of the organization. That's the remaining work the industry needs to do, but my understanding is that everybody in the enterprise space is motivated to do it.
Steve Hamm: Good answer. I want to focus now on some Snowflake-specific questions in the next part of the podcast. I know that Snowflake has put a lot of work into enabling data scientists to build applications using Python and JavaScript natively in the Snowflake Data Cloud, including some machine learning applications. I want to focus on two key technologies: Snowpark, which is Snowflake's developer framework for securely hosting non-SQL business logic across various runtimes and libraries, and Streamlit, which was acquired by Snowflake and allows users to build data applications with just a few lines of code. These are key technologies, and they've come along in the past year or so to enable organizations to make powerful data applications available to regular business users. Is there a role for large language models in these technologies, in Snowpark and Streamlit?
Torsten Grabs: There certainly is. Let's maybe start at the top level. If you think about an application that uses Streamlit to provide an approachable end-user experience, it is relatively straightforward to interact with cloud-hosted large language models or generative AI from within a Streamlit application. There are various examples of that already, in blog posts and code samples on GitHub, so you can do that literally today. By doing that, you can create a conversational experience within a Streamlit application where, in places, you are interacting with a large language model that's hosted elsewhere; a minimal sketch of that pattern follows below.
If you then think about Snowpark and extensibility for your data stack, that's the place where you would think about hosting a tuned and optimized large language model that you throw at your proprietary data to extract additional intelligence from it. One example to illustrate this: with our recent acquisition of Applica, we've started to look at unstructured data and how we can better create intelligence and value from unstructured data assets for organizations. Coincidentally, the technology used by the Applica team is actually based on generative AI and large language models. So in that case, we are already applying large language models much further down in the stack, against your unstructured data, to derive additional intelligence and make your data more valuable when you put it into Snowflake.
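The Streamlit pattern Torsten mentions looks roughly like this minimal sketch, based on Streamlit's public chat elements; the hosted model choice is purely illustrative:

```python
# A minimal sketch of a conversational Streamlit app that calls a
# cloud-hosted LLM, along the lines of the GitHub samples mentioned.
import streamlit as st
from openai import OpenAI

client = OpenAI()
st.title("Ask your data")

if "messages" not in st.session_state:
    st.session_state.messages = []

for m in st.session_state.messages:        # replay the conversation
    st.chat_message(m["role"]).write(m["content"])

if prompt := st.chat_input("Ask a question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    answer = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=st.session_state.messages
    ).choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)
```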
Steve Hamm: Interesting. So, you know, we were talking a moment ago about governance and security and things like that. It sounds like Snowflake can be that place where sensitive data meets large language models.
Torsten Grabs: Yeah, exactly. And I like the term meeting place, because I think what's meeting there is your proprietary data with your own proprietary compute requirements. You put those together in the different Snowpark runtimes that we offer, and they then run and execute within the security perimeter of your Snowflake account. That's conceptually the model we want to follow for large language models as well: we want to bring those into the Snowpark runtimes to give you that same experience, with the same governance and security promises that we uphold for Snowpark more generally.
Steve Hamm: Yeah. So I'm getting an idea; hopefully it's correct. We have these few large language models created by vendors who specialize in that, and it sounds like corporations are going to be able to bring together those models with their proprietary data. But is there also another layer, either within organizations or separately, with ISVs building applications that use large language models, maybe horizontal ones, or maybe domain-specific? I'm asking you a big, broad question here, but is there yet another layer? Are there applications that could be put into the Snowflake Marketplace for sale and use in that way?
Torsten Grabs: Yeah, I definitely think there's a spectrum. On one end of the spectrum, you have the foundational models, very general purpose. On the other end, you have a highly proprietary model that you may have fine-tuned for one super-specific use case. In between, I think we will see a whole array of different use cases and scenarios light up that are enabled by generative AI and large language models. One expectation I have is that we're going to see a lot of probably smaller large language models come out for domain-specific use cases, purpose-built for something like, let's say, predictive maintenance, or for other domains, maybe financial services. And those are great examples for the Marketplace, where besides those foundational models, ISVs and technology partners could offer more domain-specific models and make it easy for customers to deploy them into their Snowflake accounts.
Steve Hamm: Interesting. So they would sometimes be organized by business function and sometimes by industry, by domain. Now, there are a number of vendors offering large language models as a cloud service; as a matter of fact, I think there is no other way to get them but as a cloud service. What criteria would you advise organizations to use to choose among those vendors?
Torsten Grabs: I would certainly encourage folks, particularly in the enterprise with sensitive data, to make sure their data governance and privacy requirements are satisfied by the services they're interacting with. That could mean, for instance, restricting which data assets can be sent into these services: for which use cases do we allow the current services to be applied, until versions become available that cater better to enterprise governance, security, and privacy requirements? So that's one thing to check.
The other one is price performance. A key part there is the quality of the results you're getting. We're still in the early days, and you can actually see reasonable differences between these different large language models. Some work really well for text, others work really well for coding scenarios, others work really well for, let's say, unstructured data, images for example. So based on those use cases, figure out which ones are the right ones for you.
And then the vendors have different cost profiles. Thinking about our own use case with Applica for document intelligence, there's one dimension that we watch very carefully, which is the quality of the results we're producing. But the other one is how much compute you're burning to actually produce those results. If you have a very large foundational model, it's very broad, because it essentially contains knowledge of the world, of the whole internet, and that becomes very expensive to run if you already know you have a very specific use case you want to implement. In those cases, it becomes more attractive to settle on a smaller, more specialized model, because it will help you save some of that compute cost while still giving you the right quality of results.
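A hedged sketch of that evaluation loop: run the same prompts through candidate models and record quality, token cost, and latency. The model names, per-token prices, and the scoring stub below are all placeholders:

```python
# A sketch of comparing vendor models on result quality vs. compute
# cost, along the lines Torsten describes for document intelligence.
import time
from openai import OpenAI

client = OpenAI()
CANDIDATES = {"gpt-4": 0.03, "gpt-3.5-turbo": 0.002}  # $/1K tokens, illustrative
PROMPTS = ["Summarize this contract clause: ..."]

def quality_score(answer: str) -> float:
    return float(len(answer) > 0)  # stand-in for a real rubric or eval set

for model, price in CANDIDATES.items():
    for prompt in PROMPTS:
        start = time.time()
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        answer = resp.choices[0].message.content
        cost = resp.usage.total_tokens / 1000 * price
        print(model, f"score={quality_score(answer):.2f}",
              f"cost=${cost:.4f}", f"latency={time.time() - start:.1f}s")
```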
Steve Hamm: Yeah, very interesting. You know, there's a lot of concern that this new wave of innovation in AI will result in agents and systems that are smarter than people. I think there's little doubt at this point that that will happen, and that could put a lot of people out of work and potentially lead to some of these scenarios where the bots take over. And this is not idle speculation. Geoffrey Hinton, one of the pioneers of neural networks, resigned from Google, where he was their leading AI scientist, basically saying: I'm so nervous about this that I've got to make a statement that there's danger here, and I can't be part of it. What are your views on the risks here, and what do you think society, business, and all the interested parties should do to minimize them?
Torsten Grabs: I would certainly say there is risk. In particular, the biggest risk I see is if you blindly trust the models to fully automate critical decisions for you. For those use cases, we want to make sure we actually have human oversight in the loop. We should use this as a tool to make people more productive, to accelerate the work, but we should be very careful about completely automating the human away. And I think that is in line with where the technology currently is: you'll find plenty of cases where you just get wrong answers that are either outdated or factually wrong, and you don't want to make business decisions based on that, where you know there is the risk of factually wrong results. So having someone with domain expertise in the loop, making sure this makes sense, is, I think, the critical piece for us going forward.
Steve Hamm: Yeah. It seems like you should have humans involved in at least two spots. One is when you're talking about a new application or a new system: the business leaders who know how they want the business to run, both broadly, strategically, and specifically, have to shape these things. They have to tell the technologists, this is what we want to accomplish, this is the shape of our business. And at the end, it seems like you really need some quality-control people looking at the results, but also at the risks. It's almost like you're sandwiching these systems between the humans on the two ends. Does that make sense?
Torsten Grabs: And there's a cultural aspect to that as well: you establish a culture where you don't blindly trust the system just because it generates good results, or great results, in 80 percent of the cases. You have to make sure that in those 20 or 10 percent where it fails, you're not falling down a very steep cliff.
Steve Hamm: Yeah, I gotcha.
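A minimal sketch of the human-in-the-loop gate both speakers are describing: the model only proposes, and a person signs off before anything critical executes. The proposal function below stands in for a real LLM call:

```python
# A sketch of human oversight in the loop: nothing critical runs
# without explicit sign-off from someone with domain expertise.
def propose_decision(case: str) -> str:
    return "APPROVE refund of $420"  # stand-in for a model's recommendation

def execute(decision: str) -> None:
    print(f"Executing: {decision}")

proposal = propose_decision("customer #1293 refund request")
print(f"Model proposes: {proposal}")

if input("Apply this decision? [y/N] ").strip().lower() == "y":
    execute(proposal)   # a human stays in the loop for the final call
else:
    print("Held for manual review.")
```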
Steve Hamm: So we're coming to the end of the podcast, and we typically end on a more personal or lighter note. One of the things I'm aware of is that in the early days, even in the 1930s and 1940s, as modern computer systems were just being invented, science fiction writers were already out there imagining: where do we go with this? How was society going to be affected by machine intelligence? And I'm just wondering, a lot of you tech guys were science fiction fans, and maybe still are. Did you read a lot of sci-fi when you were a teenager, and did it get you into technology? What's your connection point there?
Torsten Grabs: I think I've read my fair share of science fiction when I was younger, and I still enjoy watching Star Wars movies with family. I've probably read every second book from Stanisław Lem. Some of those early ideas actually resonate with me: you now have the opportunity to interact with technology, with a system, in those conversational ways. You can see already, in those early days of science fiction, how people interacted with their spaceship: there was a machine you could talk to, it was a meaningful conversation, and out of that came instructions to change course or land on a particular planet. And this becomes conceivable now. We can build systems that provide a similar style of conversation when you interact with them.
Steve Hamm: Yeah. I know a lot of people remember 2001: A Space Odyssey; the movie must have come out in 1968. It had HAL, the computer that took over the ship, so I think that kind of implanted the fear of robots. Some of the others, like Isaac Asimov, wrote about robots really extensively, but not fearfully. He created basic rules for robots, how they should be seen and controlled, kind of a deal made between humans and the robots. I think that was really interesting.
Torsten Grabs: Yeah, and there's a lot of work going on on that front as well, around things like constitutional AI, where you provide guardrails to the system with the clear intent of establishing boundaries of what the system is allowed to do and what it is not allowed to do.
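As a toy illustration of the guardrail idea: real constitutional AI trains such rules into the model itself, while this wrapper only shows the allowed-versus-forbidden boundary from the outside:

```python
# A toy sketch of a hard boundary around model-proposed actions:
# disallowed actions are refused regardless of what the model generates.
FORBIDDEN = ("delete", "drop table", "transfer funds")

def guarded(action: str) -> str:
    if any(term in action.lower() for term in FORBIDDEN):
        return f"Refused: '{action}' violates a guardrail."
    return f"Allowed: {action}"

print(guarded("summarize last quarter's sales"))
print(guarded("DROP TABLE customers"))
```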
Steve Hamm: Yeah. I almost see it as a new social contract within society, one that governs the relationship between humans and machines, and both sides need to have boundaries and guardrails. Because we know that it's the humans with malicious intent that make the machines dangerous.
Torsten Grabs: Exactly.
Steve Hamm: So we have to keep our eyes on them as well, right?
Torsten Grabs: Yep.
Steve Hamm: Well, this has been a fascinating conversation, Torsten. I've really enjoyed talking to you. What's really refreshing to me is that I read a lot of the articles in the popular press about these new capabilities, and very often they're focused on something like what people are doing with deepfakes, something that's kind of sexy, and it makes you wonder whether there's really a business application. And I think you've talked very deeply, credibly, and convincingly about the ways this stuff can be used by organizations to really transform the way they operate. So it's been a great podcast for that reason. We are on the verge of profound changes in society, with technology driving them. So thanks very much for your time.
Torsten Grabs: Yeah, likewise. Thank you so much for the conversation. Really enjoyed it.
Producer: Are you interested in learning how to build on Snowflake? Join other developers, data engineers, and data architects at Snowflake's BUILD.local event series. Roll up your sleeves and explore the possibilities of building on Snowflake with local, in-person, instructor-led workshops taking place across more than 30 global cities. Learn more and register at www.snowflake.com/buildlocal.