The Data Cloud Podcast

AI and the Future of Finance: Decoding Earnings Calls with Liam Hynes, Global Head of New Product Development at S&P Global Market Intelligence

Episode Summary

In this episode, Dana Gardner, Principal Analyst at Interarbor Solutions is joined by Liam Hynes, Global Head of New Product Development at S&P Global Market Intelligence. They discuss how S&P Global Market Intelligence utilizes AI to analyze corporate earnings calls to guide and improve financial reporting. These insights help businesses enhance communication, refine executive performance, and predict market outcomes. The conversation also highlights the use of Snowflake Cortex AI platform and the importance of data-driven decision-making in the financial sector.

Episode Transcription

[00:00:00] Producer: Hello and welcome to The Data Cloud Podcast. Today's episode features an interview with Liam Hynes, Global Head of New Product Development at S&P Global Market Intelligence, hosted by Dana Gardner, Principal Analyst at Interarbor Solutions. They discuss how S&P Global Market Intelligence uses AI to analyze corporate earnings calls to guide and improve financial reporting.

[00:00:28] Producer: These insights help businesses enhance communication, refine executive performance, and predict market outcomes. The conversation also highlights the use of Snowflake's Cortex AI platform and the importance of data-driven decision making in the financial sector. So please enjoy this interview between Liam Hynes and your host, Dana Gardner.

[00:00:48] Dana Gardner: Welcome to the Data Cloud podcast, Liam. We're delighted to have you with us. 

[00:00:52] Liam Hynes: Thank you, Dana. Pleasure. Looking forward to it.

[00:00:55] Dana Gardner: It's a very interesting use case. You know, among the most promising ways that data science and business intersect is when an entirely new service or services and use cases can emerge.

[00:01:05] Dana Gardner: And you've been able at S&P Global Market Intelligence to identify an underutilized data resource of the legacy transcripts of corporate results reports to financial analysts and create new insights for business leaders. These new services show innovative ways that AI and human behavior can reinforce each other for multiple benefits.

[00:01:27] Dana Gardner: Tell us how you're using technology to help people communicate better.

[00:01:34] Liam Hynes: Well, that's a great question, Dana. So how do people better communicate? So, the genesis of all this started around 25 years ago when we were analyzing the Enron case study. Analysts on the earnings call with Enron were asking the executives questions.

[00:01:52] Liam Hynes: In particular, they were asking Kenneth Lay, the CEO at the time, questions about writedowns. Now everyone knows about the writedowns now, but back then it was just emerging news. Kenneth Lay didn't discuss writedowns in the presentation on the earnings calls, but there were six questions from analysts about writedowns in the Q&A section.

[00:02:13] Liam Hynes: That showcased to us that Kenneth Lay was being reactive rather than proactive on that topic. Kenneth Lay didn't go and preemptively tell the market in that earnings call presentation about the writedowns. So he was being entirely reactive with the market on those writedowns. And then when he answered questions about the writedowns, he was being evasive.

[00:02:38] Liam Hynes: He didn't actually answer the question about writedowns. He was pivoting entirely off topic. So there are two behaviors there that we identified from Enron. One was being proactive or reactive with information, and the second one was a very straightforward behavioral characteristic: when I answer a question, do I remain on topic to the question asked or do I go off topic?

[00:03:05] Liam Hynes: And in finance, you know, both of those behavioral characteristics are important, and we've showcased in the research why they're important. So that was the genesis of trying to identify these two behavioral characteristics from executives on earnings calls, now at S&P Global Market Intelligence.

[00:03:27] Liam Hynes: We have a product called Machine Readable Transcripts. That's got earnings call transcripts going back almost 20 years of history, back to 2006. It's approximately 250,000 earnings calls that we have in that machine readable transcript product. So we thought, okay, is there a way that we can systematically identify these two behaviors from executives across this massive corpus of earnings calls?

[00:03:59] Liam Hynes: And it turns out we were able to identify those two behavioral characteristics, proactiveness and on-topicness. And it turns out that when you look at them mutually exclusively, proactive executives, who voluntarily give information to the market rather than being asked about it, outperform their reactive peers.

[00:04:21] Liam Hynes: And on topic executives, so executives who just simply answer the question and remain on topic to the question, outperform their off topic peers. 

[00:04:31] Dana Gardner: So that's a very powerful result from an existing data set that you've been able to exercise new sorts of analysis on. And perhaps we'll get into this a little later: you can then reinforce what's good behavior versus bad behavior.

[00:04:45] Dana Gardner: So tell us a little bit about S&P Global Market Intelligence and how you're using data science to create features and services like this. What's your role in the world, to which this is now an added benefit?

[00:04:57] Liam Hynes: Sure. Yeah. So S&P Market Intelligence is a division of S&P Global. Market Intelligence is essentially the data and analytics service provider to, you know, financial institutions, academics, government institutions, and corporates.

[00:05:13] Liam Hynes: We sell data and we sell data analytical tools. I sit on a team called QRS, which stands for Quantitative Research and Solutions, that sits within Market Intelligence. And our job is essentially, you know, we're the closest thing you can get to a client at S&P Market Intelligence.

[00:05:33] Liam Hynes: We look across the vast array of data sets that we have. We essentially try and extract the value from those data sets, and then we articulate that value to our clients through research reports, coding notebooks or coding blueprints, or derived data products. So that's how we extract the value from our data and then pass it on.

[00:05:56] Dana Gardner: Right. And why is now an important and interesting time? What's coming together in terms of the technology, the price points of acquiring and using the technology, the availability of the data? Why are we in a unique position to start offering new services like you've been describing?

[00:06:14] Liam Hynes: Well, we're kind of at an inflection point when it comes to technological innovation, right?

[00:06:21] Liam Hynes: Large language models have come into the fray over the past couple of years, and now we have this new, shiny, amazing tool that we can point at all of our textual data. And essentially that's why there's been such an uptick in interest in this: large language models, you know, ChatGPT and Llama, et cetera, have trained themselves on

[00:06:45] Liam Hynes: all publicly available text on the internet. But if you think about it, that's just the first level of analysis that the large language models can do. And, you know, I can go onto ChatGPT and do a multitude of things that save me time versus, let's say, jumping straight onto the internet.

[00:07:03] Liam Hynes: But the next wave is, let's say, textual information that's behind a paywall or is very difficult to collect. So, as an example, if I wanted to point a large language model at those 250,000 earnings calls I was talking about, the large language model would have to, first of all, find the transcripts for each of those calls.

[00:07:26] Liam Hynes: It would have to identify what was the presentation section and what was the question and answer section. So, you know, S&P Global Market Intelligence has already curated this machine readable transcript product; it has all of the metadata associated with it. I know the executives that are speaking.

[00:07:44] Liam Hynes: I know the analysts that are asking questions on the call. I can map the analysts back to their recommendations. I can map the company back to its financials. So it's the fact that we can now unleash these large language models on financial text, on the words that the executives and the analysts articulated on the call, and try and determine if there are any signals embedded in this textual or financial information.

[00:08:12] Liam Hynes: So we've opened up a whole box of really interesting research that you can do around financial textual information, to determine if there's any alpha or signals in there. We can then either, you know, give the code to our clients and say, here's how you go and calculate these behaviors.

[00:08:30] Liam Hynes: If you want to do it from scratch, we can containerize them and deliver the signals to our clients directly. Or we can actually just drop a coding notebook on them and say, here you go, here's how you can do it yourself. So, a very interesting time to be in.

[00:08:45] Dana Gardner: Yes. And you're looking at this through both structured and unstructured information.

[00:08:49] Dana Gardner: You're looking at the core data and the metadata, and you're also comparing it to other data sources in order to say, well, what difference can we make between how a behavior took place in an earnings call and how the company performed over a period of time. So when we do this right, when we can take advantage of these new large language models and these data sources, both structured and unstructured,

[00:09:13] Dana Gardner: what sort of paybacks have you been able to get? What's the bottom line, so to speak?

[00:09:18] Liam Hynes: Yeah, that's a good point, Dana. So you're right. You know, you have transcripts sitting in one corner. You have pricing information for the company sitting in another. You have the financials of the company sitting in another area.

[00:09:35] Liam Hynes: So, you know, one of the benefits of using Snowflake for this analysis is that all of S&P Global Market Intelligence's data sets are available on Snowflake. So, first things first, you have all of these very important company-specific data points in one place. So I can go in and use the large language model to

[00:09:53] Liam Hynes: analyze the text, but I know then that this text was from an earnings call that was on the 1st of April, 2025. I also have the pricing information from the 1st of April, 2025. I also have, you know, the company's first quarter results. So you're able to create this data infrastructure where you have all of the important information in one place, and if I derive a signal from a large language model, I can then map that back to the company's financials and I can map it back to the company's pricing.

[00:10:25] Liam Hynes: So I'm able to understand if there are any behaviors or any signals in what the executive has said on the call, and I can then, you know, see if that correlates with an increase or a decrease in the price movement.
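That keyed join can be sketched in a few lines of pandas. This is a toy illustration with invented tickers, scores, and column names, not S&P Global's actual schema or data:

```python
# Toy sketch: lining up an LLM-derived call signal with same-day pricing.
# Tickers, scores, and column names are invented for illustration.
import pandas as pd

signals = pd.DataFrame({
    "ticker": ["AAA", "BBB"],
    "call_date": pd.to_datetime(["2025-04-01", "2025-04-01"]),
    "proactive_score": [0.82, 0.35],   # hypothetical LLM-derived scores
})

prices = pd.DataFrame({
    "ticker": ["AAA", "BBB"],
    "call_date": pd.to_datetime(["2025-04-01", "2025-04-01"]),
    "close": [101.5, 48.2],
})

# One join keyed on company and date puts signal and price side by side,
# so the signal can later be compared against subsequent price movement.
merged = signals.merge(prices, on=["ticker", "call_date"])
print(merged)
```

With financials joined the same way, any derived signal can then be tested against returns over the following period.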

[00:10:40] Dana Gardner: And so what have you found when you do this comparison? What sort of results are you getting in terms of saying, ah, we, we now see something we didn't see before?

[00:10:47] Liam Hynes: Sure. Yeah. So, when we look at the two behavioral characteristics that we identified, and again, I can go into the weeds later on how we built those two signals. But when we look at a proactive executive, essentially we come up with a score on whether or not the executive is a proactive executive or whether they're a reactive executive.

[00:11:12] Liam Hynes: So let's say I've got an earnings call that happens on, let's say the 15th of March. I now have a score that says, okay, this, this executive was proactive. Let's say there's another call that happens on the 15th of April, and it's an executive that's reactive. What we do is at the end of every month, we identify, let's say we're looking at the S&P 500.

[00:11:34] Liam Hynes: We identify every executive that had an earnings call in that month, and if the executives were proactive on that earnings call, and if they were in the top 20% of proactive executives, we would build a long portfolio of those companies. And then on the short side, we would build a short portfolio of executives who are reactive.

[00:11:58] Liam Hynes: So let's say it's the S&P 500: at the end of April, we have a hundred executives who are proactive and a hundred executives who are reactive, 20% of each. And we hold that long portfolio until the next month and then we re-sample it, and the same with the short portfolio. When you do that over a 17-year period, rebalancing at the end of every month, firms with proactive executives outperform their reactive peers, and they generate something like, I think, 250 or 300 basis points of pure alpha per year.
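The monthly long/short construction described here can be sketched as follows. The scores are made-up toy values, and the 20% cutoff mirrors the quintile split Liam mentions; this is not the actual backtest code:

```python
# Toy sketch of monthly long/short portfolio construction: go long the top
# 20% of executives by score, short the bottom 20%, rebalance each month.
import pandas as pd

def build_portfolios(scores: pd.Series, quantile: float = 0.2):
    """Split one month's scores into long (top) and short (bottom) lists."""
    hi = scores.quantile(1 - quantile)   # threshold for the top quintile
    lo = scores.quantile(quantile)       # threshold for the bottom quintile
    long_side = scores[scores >= hi].index.tolist()
    short_side = scores[scores <= lo].index.tolist()
    return long_side, short_side

# Hypothetical proactiveness scores for one month's earnings calls.
month_scores = pd.Series(
    {"AAA": 0.9, "BBB": 0.7, "CCC": 0.5, "DDD": 0.3, "EEE": 0.1})
long_side, short_side = build_portfolios(month_scores)
print(long_side, short_side)
```

Repeating this at each month-end and holding until the next rebalance gives the kind of multi-year long/short track record described above.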

[00:12:36] Liam Hynes: Now, that's one signal that we identified. The second signal that we identified was on-topic alignment: when the executive answers a question, do they remain on topic. Again, we go long the top 20% of on-topic executives and short the bottom 20%, and that generates around 350 basis points of alpha per year. Just without even looking at financials, without even analyzing any other information about the company, just those two signals plus the price movements.

[00:13:06] Liam Hynes: You're able to identify companies that outperform and underperform, right?

[00:13:12] Dana Gardner: That's fascinating and very valuable. You can make inferences about how a company will perform based on how these executives themselves perform. But it seems to me that over time you're creating essentially a score for trust and credibility and interview performance.

[00:13:28] Dana Gardner: And when you are able to then return this information in a feedback loop to those executives, they perhaps can improve how they communicate. And that's where we started our conversation. So we've gotten to the point where we are able to take this analysis, using this great data across multiple sources, and apply these analysis tools

[00:13:50] Dana Gardner: To then bring it back to the person and say, aha, AI is gonna help you be a better executive.

[00:13:56] Liam Hynes: Absolutely. So, if you think about it, there's multiple use cases, but the two main use cases here are, first, investment managers, you know, asset managers who want a new signal to be able to identify companies that are outperforming or underperforming, to build stock portfolios.

[00:14:15] Liam Hynes: But then there's also corporates, right? So if I'm a CEO of a corporation, I now have some valuable information, right? I know that if the market is looking for some information, it benefits me to be proactive with that information and give it to the market, rather than have them look for it. And the second component is very straightforward.

[00:14:36] Liam Hynes: If I'm an executive and I don't answer a question and remain on topic, I'm penalized for it. So, you know, we already have a piece of software that we've built for investor relations departments at S&P Global Market Intelligence, where they can pipe in their prepared remarks and we can export these scores to them.

[00:14:57] Liam Hynes: So even before an earnings call happens now, executives are prepping themselves and making sure that they're ready, that they're proactive, and that they're looking at analyst questions that have come up so far in earnings call season. Or they know that there are going to be certain analysts on the call, and they look at previous calls and see that those analysts are asking about certain topics or themes. So if I'm an executive and I see that last week one of my CEO peers was asked, you know, five questions around tariffs, for instance,

[00:15:30] Liam Hynes: I know now that I probably have to be proactive with my information around tariffs on that call and proactively give that information to the market. I know that I'll be rewarded for that. And then the second component is that I can now prep myself for those questions. See, the idea with the earnings calls is that the prepared remarks are heavily scripted, right?

[00:15:53] Liam Hynes: The CEO writes that, but the investor relations department, communications, marketing, and legal all vet it. That is a very heavily vetted document. And essentially the CEO is an actor: they're neutral, they're reading a script, and they're giving a presentation. What's in that presentation clearly matters.

[00:16:13] Liam Hynes: So executives are definitely spending a lot more time making sure that that presentation is as good as it can be. And then the second component is that they're also prepping themselves to be much better prepared for the Q&A section. Right? You know, some executives might say, you know what,

[00:16:32] Liam Hynes: I can probably answer this off the cuff. And, you know, that might be the minority. But more and more we see executives now going in, identifying the analyst questions that were asked on previous calls, and prepping and making sure that they are able to remain on topic to those questions that they're being asked.

[00:16:50] Dana Gardner: And I should think that on the flip side, the people who are asking the questions, if they could avail themselves of this research, they might be able to come up with better questions, or put them forward in such a way that they'll get more reliable results in terms of how this company's gonna perform in the future.

[00:17:05] Dana Gardner: And so that strikes me, Liam, as a very valuable process. Almost any company right now that's trying to grapple with how do we monetize and benefit from ai. We look for existing data. We look for ways that we can share this data with people that will help them in their jobs, and then we monetize that as a service.

[00:17:24] Dana Gardner: It seems to me that that's a rinse-and-repeat sort of benefit, where AI has, you know, direct and demonstrable and significant business benefits.

[00:17:34] Liam Hynes: Absolutely. If you think about it, we've kind of shown in the research that firms are rewarded, that they have higher share prices, if their executives are proactive and on topic versus being reactive and off topic.

[00:17:50] Liam Hynes: So now, you know, there's a tool that essentially can up the game for the executives, right? Executives know that they need to be much more transparent. They know that there are algorithms that are watching and listening to the call and identifying if they're going off topic. So essentially it's kind of raised the bar now for executives across the globe: they're going to have to up their game when it comes to proactiveness and on-topicness.

[00:18:18] Dana Gardner: And they don't have to guess. They have data and they have inference, and they have science behind them that says, here's how you should behave, here's how you go about these questions, and here's how you're gonna get the best results. So no more gut instinct; much more of a data-driven approach.

[00:18:32] Liam Hynes: Yeah, absolutely. And one of the things about that software that I mentioned earlier on is that it helps the executives write their presentation, right? So not only does it score them on, let's say, proactiveness, but it also scores them on language complexity, like, am I using overly complex language?

[00:18:49] Liam Hynes: Some previous research that we've done identified that managers who use overly complex language underperform, and managers who use much more straightforward and simple language outperform. You can also look at things like numerical transparency. So when an executive is issuing a statement, if they accompany that statement with numerical facts, it has a lot more weight than a statement without numerical facts.

[00:19:16] Liam Hynes: And another analogy I like to use is, you know, if I'm a football manager and I'm getting analyzed by a reporter at the end of the game. Let's say football manager A is asked about their striker and he says, our striker plays more than the average number of games per season, has the highest kilometers run on the pitch, and scores more than the average number of goals per game.

[00:19:41] Liam Hynes: And the second manager says, you know, our striker is in the 97th percentile for time on the pitch, runs an average of 11 kilometers, and scores 1.2 goals per game. The second statement holds a lot more weight than the first statement because it's accompanied by facts. So there are these other components that can aid the executives in their prep to make sure that they're delivering concise and succinct messages to the audience.

[00:20:11] Liam Hynes: And one other thing about that software as well: we haven't embedded it yet, but phase two is where we're going to say, how do we prep them for the Q&A? So what we're going to do is we're going to feed all of the previous analyst questions from that company and that company's peers into a large language model.

[00:20:30] Liam Hynes: And then we're going to ask it to come up with hypothetical questions for the executives based on previous questions and questions of its peers. So it's almost like they'll be able to, you know, prep for Q&A on the fly with relevant information that's coming from the LLM.

[00:20:46] Dana Gardner: Sure. The tool will be able to predict the most likely types of questions.

[00:20:49] Dana Gardner: And that way you can advance your preparations even more so. And while this works for finance and perhaps sports, it seems to me that this is a function that you can take to almost any instance where you have important communications that you wanna refine and improve. You have the data, you have the science to analyze it.

[00:21:08] Dana Gardner: Let's dig into the science a bit, Liam. What is it underneath the covers? What's the secret sauce that's allowing you to do this? And what are the partnerships, the stack, and the underlying infrastructure that have come together at this auspicious time to enable you to do this fairly quickly and straightforwardly?

[00:21:25] Liam Hynes: Sure. I'll start with the latter bit first. So we use the Snowflake Cortex AI platform; all of S&P Global Market Intelligence's data sets are available on Snowflake. And the reason we wanted to use that infrastructure is that you can essentially pick large language model APIs off a shelf from Snowflake, and there are other vector embedding tools and summarization tools that you can use in Snowflake.

[00:21:50] Liam Hynes: So the fact that we had all of those data sets that I was talking about earlier on, you know, the textual data, the pricing data, the financial data, all available on Snowflake, and then also the availability of vector embeddings, large language models, and summarization tools,

[00:22:07] Liam Hynes: it just meant that we kind of had a one-stop shop to be able to do this analysis. So that's one component that was quite powerful. As for the tech stack that we used, let's start with, say, on-topicness, which is: how do you identify if somebody has answered a question and remained on topic?

[00:22:27] Liam Hynes: Well, if they remain on topic, they're going to use similar concepts, topics, and language to what was used in the question. So it means that we have to look at the language in the question, we have to look at the language in the answer, and we have to compare them to see if they're using similar concepts and similar topics.

[00:22:46] Liam Hynes: And we do that by taking the question and answer pair and using large language model vector embeddings to turn the textual data in the question and answer into numerical data. So think about vector embeddings as just a zip code or a postcode for text. And now that we've got numerical data for the question and answer, we can compare that numerical data.

[00:23:14] Liam Hynes: And if you remember high school trigonometry, or secondary school trigonometry: when I look at two vectors, if I take the cosine of the angle between them, I can determine if they're similar or not. So if I have two vectors that are exactly the same, the angle between them is going to be zero, and the cosine of zero is one.

[00:23:34] Liam Hynes: If I have a vector that is exactly the same, it's going to have a cosine score of one. So, as an example, the only time you get a cosine score of one in a Q&A pair is if an analyst said, what's your guidance for the fourth quarter? And the CEO said, did you say, what is our guidance for the fourth quarter?

[00:23:54] Liam Hynes: That's when you've got a perfect match. You know, that rarely happens, and when it does happen, we fold that Q&A pair into the next Q&A pair to combine the two of them. So what happens is that when you calculate a cosine similarity score for the question vector and the answer vector, a manager who is on topic will have a high cosine score close to one, and an executive who is off topic will have a lower score.

[00:24:21] Liam Hynes: So we essentially now have a score; it's a ratio anywhere between zero and one. And what we do then is we just go long-short on that score: managers with a high score were on topic, and we build long portfolios from them; managers with a low score were off topic, and we build short portfolios.
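The cosine comparison works the same way regardless of where the embeddings come from. A minimal sketch, using tiny hand-made vectors in place of real LLM embeddings:

```python
# Minimal sketch of cosine-similarity scoring for a Q&A pair. Real vectors
# would come from an LLM embedding model; these toy 3-d vectors stand in
# for a question embedding and two candidate answer embeddings.
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

question  = [0.9, 0.1, 0.0]   # embedding of the analyst's question
on_topic  = [0.8, 0.2, 0.1]   # answer reusing the question's concepts
off_topic = [0.0, 0.2, 0.9]   # answer that pivots to other topics

print(cosine(question, question))   # identical vectors score ~1.0
print(cosine(question, on_topic))   # close to 1: on topic
print(cosine(question, off_topic))  # much lower: off topic
```

Ranking executives by this score, then going long the high end and short the low end, is the construction described above.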

[00:24:41] Dana Gardner: Well, I can certainly see where a chief executive or a financial officer that had this tool available to them would want to take advantage of it, and certainly be thinking that if my competition is doing this and I'm not, then I'm at a significant disadvantage. So it seems like it's a no-brainer that you'd want to avail yourself of these services.

[00:24:59] Liam Hynes: Absolutely.

[00:25:00] Liam Hynes: Yeah. If you wanna be ahead of the game, then you need to be a CEO that analyzes your own language and how you articulate your results and the operations of your company to shareholders. And once we're into the technical bits, the interesting thing is that for on-topicness, we looked at the question, we looked at the answer, and we see if they're using similar concepts or language.

[00:25:27] Liam Hynes: But for the proactiveness, it's a bit more technical, right? Essentially what we have to do is identify the topics that the analysts are asking about in their questions, and we have to determine whether those topics were mentioned in the pre-prepared remarks. Now, there's an old school natural language processing way that you can do that: counting up and identifying the topics in the question, seeing if they appeared during the remarks, and coming up with a scoring mechanism.

[00:25:54] Liam Hynes: But actually what we did is we trained an LLM, through prompt engineering, to pretend it was an executive on an earnings call. We fed that LLM executive the analyst questions, and we said, pretend you're an executive on an earnings call and answer these analyst questions.

[00:26:17] Liam Hynes: Now, the difference is that we ring-fenced the LLM to only be able to answer the questions from the pre-prepared remarks, right? So that meant the LLM answered the analyst question, but only used the text from the pre-prepared remarks to be able to answer that question. And then we did the same process, right?

[00:26:39] Liam Hynes: We take the LLM answer and the original analyst question, and we compare them; we run the cosine similarity again. So if the LLM was on topic, it meant that topic must have been covered in the pre-prepared remarks, meaning that the executive was proactive. And if the LLM was off topic, it meant that it couldn't answer the question,

[00:27:02] Liam Hynes: so that topic wasn't covered in the pre-prepared remarks, meaning that the executive was being reactive when they were answering that question.
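The ring-fencing step is essentially a constrained prompt. The wording below is a guess at the general shape, not S&P's actual prompt, and the call to a specific LLM is omitted:

```python
# Hypothetical sketch of the "ring-fenced executive" prompt: the model may
# answer ONLY from the prepared remarks, so an off-topic answer implies the
# topic was absent and the real executive was reactive. Prompt wording is
# an assumption, not the actual production prompt.
def ring_fenced_prompt(prepared_remarks: str, analyst_question: str) -> str:
    return (
        "You are an executive on an earnings call. Answer the analyst's "
        "question using ONLY the prepared remarks below. If the remarks do "
        "not cover the topic, say you cannot answer.\n\n"
        f"Prepared remarks:\n{prepared_remarks}\n\n"
        f"Analyst question: {analyst_question}\nAnswer:"
    )

prompt = ring_fenced_prompt(
    "Revenue grew 4% this quarter, driven by our cloud segment.",
    "Can you comment on the writedowns?",
)
print(prompt)
```

The LLM's answer would then be embedded and scored against the question with the same cosine similarity as before.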

[00:27:09] Dana Gardner: That could tell you, if this happened in real time, that these analysts asking questions could get that red flag and say, oh, I need to drill down on that question.

[00:27:19] Dana Gardner: The LLM says that this is a potential area for a deeper dive.

[00:27:23] Liam Hynes: Yeah. So if you had a sell-side analyst who was waiting on the call to ask their question, and an analyst before them had asked a question: if they had something in real time to tell them, you know what, this topic wasn't covered in the pre-prepared remarks and might be important, they could potentially use their air time to ask that question as well.

[00:27:43] Dana Gardner: It's fascinating how you can take this in different tangents. So let's look at that: what comes next? You mentioned Snowflake Cortex. We see more and more agents coming on board these days as we have more agentic AI capabilities. Where might this lead to? Is this just scratching the surface, Liam?

[00:28:01] Liam Hynes: Well, you know, there's an opportunity there for executives, probably, to set up some kind of sell-side analyst agent, an LLM agent, right? You know, they would train that sell-side analyst to look at all questions that analysts have asked previously on their earnings calls. They could look at questions that that analyst has asked on that company's peers and competitors.

[00:28:26] Liam Hynes: And then they could essentially input all of those questions into an LLM, and that LLM could come up with hypothetical questions, potentially even based on their presentation. So it's like a real-life agent to help them prep for earnings calls, to make sure they're sharp, concise, and on topic on earnings calls.

[00:28:45] Dana Gardner: So the equivalent of an AI sparring partner that you could get in the ring with and go a few rounds before you get out into the real world.

[00:28:53] Liam Hynes: That's it. Yep, exactly. Yeah. Prep yourself for those four calls a year. Very interesting. And one thing I want to note as well is the benefit of Snowflake Cortex when we did this work: I think we processed 192,000 earnings call transcripts, and there are approximately, you know, 20 to 30 analyst questions in each transcript.

[00:29:15] Liam Hynes: That's 2.2 million questions. So we gave 2.2 million questions to the LLM, and the LLM spells out, you know, 2.2 million answers. So to be able to do that systematically was, you know, a great benefit. And when you look at that large a corpus of earnings calls to show that, systematically, proactiveness and on-topicness matter, there's statistical significance there that showcases that, you know, this isn't just random.
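The systematic step Liam describes, labeling millions of answers and then aggregating per executive, can be sketched as a simple tally. The labels here stand in for the LLM's output, and the names and data are invented for illustration:

```python
from collections import defaultdict

# Sketch: aggregate per-executive on-topic rates from LLM-labeled answers.
# Each record is (executive, label), where label comes from the LLM
# classification step described in the conversation.

def on_topic_rates(labeled_answers):
    counts = defaultdict(lambda: [0, 0])  # executive -> [on_topic, total]
    for executive, label in labeled_answers:
        counts[executive][1] += 1
        if label == "on-topic":
            counts[executive][0] += 1
    return {ex: on / total for ex, (on, total) in counts.items()}

rates = on_topic_rates([
    ("CEO A", "on-topic"), ("CEO A", "off-topic"),
    ("CEO B", "on-topic"), ("CEO B", "on-topic"),
])
```

At corpus scale, the same aggregation is what makes the statistical-significance claim testable: each executive gets a rate, and rates can be compared against subsequent market outcomes.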

[00:29:48] Liam Hynes: You know, proactive managers outperform reactive managers, and on-topic managers outperform their off-topic peers. But interestingly enough, what we did next is we tested those two signals independently of each other, and then we said, okay, well what happens when you have an executive that exhibits both of those characteristics?

[00:30:10] Liam Hynes: So we came up with four communication styles. The first one was proactive and on topic. So these are executives who give the market what they want. They're proactive with information, and when the analysts ask questions, they're on topic. We then had proactive and off topic, and reactive and on topic.

[00:30:30] Liam Hynes: And then the flip side was reactive and off-topic managers. So this is where communication has totally broken down. They're being reactive, so the analysts aren't getting the information they want in the presentation. And when executives are quizzed for that information, they're going off topic.

[00:30:47] Liam Hynes: So it's almost like a double whammy. Executives aren't putting it in, maybe they're avoiding the subject or the topic. And when they're pressed on that topic, they're again being evasive and avoiding it. What we've noticed in the research is that reactive and off-topic executives are significantly penalized versus their proactive and on-topic peers.

[00:31:08] Liam Hynes: So when you combine the two signals together, it's actually a much more powerful signal than using either of them on its own.
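The four communication styles amount to a 2x2 grid over the two binary signals. A minimal sketch of that mapping, with label strings of my own rather than S&P's terminology:

```python
# Sketch: the 2x2 communication-style grid over two binary signals,
# proactiveness and on-topicness, as described in the conversation.

def communication_style(proactive: bool, on_topic: bool) -> str:
    grid = {
        (True, True): "proactive & on-topic",    # gives the market what it wants
        (True, False): "proactive & off-topic",
        (False, True): "reactive & on-topic",
        (False, False): "reactive & off-topic",  # communication has broken down
    }
    return grid[(proactive, on_topic)]
```

The combined label is what carries the stronger signal: the reactive and off-topic quadrant is penalized most relative to its proactive and on-topic counterpart.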

[00:31:16] Dana Gardner: Well, it certainly sounds like a must-have tool for the busy executive who can move markets with their words, or lack of words. So a very, uh, interesting use case that I think opens up a whole new era of different types of tools across all sorts of different types of communication.

[00:31:33] Dana Gardner: And Liam, you're gonna be presenting more detail about this at the Snowflake Summit this June in San Francisco. I'm sure that will be a well-attended event.

[00:31:41] Liam Hynes: Yes, absolutely. We're really looking forward to that. So, I'm presenting on the Monday. I've got a 45-minute presentation. I'll be getting into the weeds on, you know, the economic rationale and, you know, how we constructed the signals, and some of the back test results.

[00:31:57] Liam Hynes: And then on the Thursday, we've got a hands-on lab. So if you think about it, I'm handling the theory, let's say, on the Monday, and then there's a 90-minute hands-on lab on the Thursday, for any data scientists who really want to get into the weeds and replicate our work.

[00:32:14] Dana Gardner: Well, great.

[00:32:15] Dana Gardner: I'm afraid we'll have to leave it there. Thanks so much to our latest Data Cloud Podcast guest, Liam Hynes, Global Head of New Product Development at S&P Global Market Intelligence. We really appreciate you sharing your thoughts and experience on this fascinating use case, Liam.

[00:32:31] Liam Hynes: My pleasure, Dana. Thanks a million.

[00:32:36] Producer: Witness the future of data, AI, and apps at Snowflake Summit 2025. Join pioneers and industry leaders like Snowflake's Sridhar Ramaswamy and OpenAI's Sam Altman in San Francisco, June 2nd to 5th. Dive into 500-plus sessions, explore 190-plus partner solutions, and experience cutting-edge demos. Transform your career and organization.

[00:33:01] Producer: Register now and build the future with Snowflake at snowflake.com/summit.