The Data Cloud Podcast

Driving Innovation and Accountability with Data with Carter Cousineau, VP of Responsible Data and AI at Thomson Reuters

Episode Summary

In this episode, Shannon chats with Carter Cousineau, Head of Data and Analytics and VP of Responsible AI and Data at Thomson Reuters, about responsible AI, managing data quality and integrity, and the role of data governance during a digital transformation.

Episode Notes

In this episode, Shannon Katschilo, Country Manager of Canada at Snowflake, chats with Carter Cousineau, Head of Data and Analytics and VP of Responsible AI and Data at Thomson Reuters. They delve into the importance of responsible AI, managing data quality and integrity, and the role of data governance during a digital transformation. They also discuss how Thomson Reuters fosters data and AI literacy across the organization and what Responsible AI truly means.

---

Calling all developers, business leaders, IT execs, and data scientists! Snowflake World Tour is your chance to learn and network. Discover how Snowflake’s AI Data Cloud can transform your career and company. Experience the future – join us on tour! Learn more here.

Episode Transcription

[00:00:00] Producer: Hello and welcome to the Data Cloud Podcast. Today's episode features an interview with Carter Cousineau, Head of Data and Analytics and VP of Responsible AI and Data at Thomson Reuters, hosted by Shannon Katschilo. In this episode, Shannon and Carter delve into the importance of responsible AI, managing data quality and integrity, and the role of data governance during a digital transformation. They also discuss how Thomson Reuters fosters data and AI literacy across the organization and what responsible AI truly means. So please enjoy this interview between Carter Cousineau and your host, Shannon Katschilo.

[00:00:39] Shannon Katschilo: Well, hello everybody, and welcome back to the Data Cloud Podcast, where we explore how innovative companies are transforming their businesses through data collaboration and AI. Today, I'm excited to be joined by Carter Cousineau, Vice President of Responsible Data and AI at Thomson Reuters, where she leads the charge in ensuring ethical AI implementation across one of the world's largest information services companies.

Carter, welcome to the show. Maybe for a bit of background, can you let us know about your role and maybe a little bit more about Thomson Reuters as well? 

[00:01:15] Carter Cousineau: Sure. It's great to be here and nice to see you again. I'm currently serving as the head of data and analytics in the interim, as well as the vice president of responsible AI and data at Thomson Reuters.

I lead a team that's focused on harnessing data and AI to drive both innovation and value back for our customers. Thomson Reuters is a global provider of specialized information, software, and tools supporting legal, tax, accounting, and corporate professionals, as well as governments. We do this through the trusted data and insights they need to navigate complex challenges and make informed decisions.

So, with data, analytics, and AI, we aim to deliver accuracy, efficiency, and reliability through those insights.

[00:02:02] Shannon Katschilo: So, you're the VP of Responsible Data and AI, so what are the key ethical considerations you focus on when implementing AI solutions? And maybe can you explain for our audience the concept of Responsible AI and why it's crucial in today's business landscape?

[00:02:20] Carter Cousineau: Sure. In our field, responsible AI centers around transparency, fairness, accountability, privacy, and security. For Thomson Reuters, these principles guide our AI implementation, ensuring that our tools do more than just deliver data: they provide insights that are trustworthy and ethically sound.

Responsible AI to us means developing AI solutions that align with both the spirit of regulatory standards and user expectations. And it's essential in today's landscape, especially in industries like legal, finance, and different corporate environments, where AI directly influences what those outcomes or decisions can and should be.

So being responsible with AI isn't just the right thing to do. It's more: how do we strengthen our customer trust and uphold the integrity of our products?

[00:03:16] Shannon Katschilo: So TR is a highly innovative organization, and I'm wondering, how do you balance innovation and speed to market with the ethical considerations in AI development? As you said, you're in a highly regulated industry serving the legal and financial sectors. I'm really curious how you find that balance.

[00:03:40] Carter Cousineau: Yeah, this is such a good question. To me, balancing speed with responsible AI requires a foundational, embedded approach. At Thomson Reuters, we embed ethical considerations early in the data and AI lifecycle and all the way across it, from ideation through to deployment of AI systems.

For example, in developing legal AI tools, we're diligent about ensuring that performance, fairness, and transparency considerations have been accounted for, and that the algorithms are balanced in their decision making. We achieve this through baseline setting and different performance evaluation methods during training, development, and post-production.

There are also several other process steps teams take to review and integrate what are almost ethical checkpoints throughout the system lifecycle, allowing us to innovate responsibly and stay competitive without compromising the quality of the product or its outputs. Maintaining open channels with our stakeholders, I would say, is probably the other aspect, so that our AI practices stay relevant and grounded in the real world.

If I were to give a more practical example of a control where internally you can see very quickly that yes, it's a control point, but it's also become an efficiency gain for innovation longer term, it's the model documentation templates that we expect all of our AI systems to have internally.

This is an internal-facing document full of details around the data, the AI system's development, design, deployment, and architecture. The value became clear early on: as our developers or data scientists grow in their careers and evolve into different roles, the next person who picks up that AI system can use the model documentation template to understand the decision making behind the way the system was built.

So this alleviates some of that reverse-engineering work and frees up time to innovate faster, to look at the system more objectively: here's what's been done to date, here's what we can react to, and here are some of the different options for our path forward.
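
To make the idea concrete, here is a minimal, hypothetical sketch of what such a model documentation record could look like as structured code. Thomson Reuters' actual template isn't public, so every field name below is an assumption for illustration, not their schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical model documentation record.

    Field names are illustrative assumptions, not Thomson Reuters'
    actual internal template.
    """
    system_name: str
    owner: str                       # team currently responsible
    intended_use: str                # what the system was built to do
    out_of_scope_uses: list[str]     # uses it should NOT be applied to
    training_data_sources: list[str]
    design_decisions: list[str]      # rationale behind key choices
    deployment_architecture: str
    known_limitations: list[str] = field(default_factory=list)

# A successor picking up the system reads this instead of reverse engineering it.
doc = ModelDocumentation(
    system_name="contract-clause-classifier",  # hypothetical system
    owner="legal-ai-team",
    intended_use="flag candidate clauses in contracts for human review",
    out_of_scope_uses=["automated legal advice"],
    training_data_sources=["internally annotated contract corpus"],
    design_decisions=["fine-tuned transformer chosen over rules for recall"],
    deployment_architecture="batch scoring behind an internal API",
)
```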

[00:05:59] Shannon Katschilo: One of the themes I'm really picking up on is transparency. It feels like you're spending a lot of time with different people across the business, bringing them into the fold, and that's part of your responsible AI framework. On communication, I'm wondering, how are you communicating the AI decision-making process to end users?

[00:06:20] Carter Cousineau: Yes, yeah. Transparency is a cornerstone of our responsible AI approach; you even see it in the NIST AI RMF. Transparency embeds a lot of accountability concepts and other ethical or responsible AI concepts. In practice, we work to ensure that users, whether internal or external, have an understanding of how the AI system provides its recommendations.

What was the system trained to do? What is it good at, and what is it not good at? That's the easiest way of putting it. From there, we demystify what it can and can't do as best as possible so that users have that full transparency. Going a step further, pulling on a more human-in-the-loop concept, we also create what are called product feature lists that share what a product can and can't do.

As we train our own customers and stakeholders on what is possible, it helps them understand the ins and outs of what the system can do and what it's not capable of doing, ensuring the use is the appropriate use it was trained for and that it's informing a better, more informed decision-making process.

[00:07:38] Shannon Katschilo: Fantastic, Carter. Really appreciate it. I think a common challenge, and a question organizations have around responsible AI frameworks, is the one you just explained: how does this work in practice? I'm also very curious to hear how you measure the success of a responsible AI initiative.

[00:07:57] Carter Cousineau: Yeah, another great question. We measure on multiple fronts, and I think that's extremely important, not just for responsible AI but also for data and analytics. As we've been building out our 2025 to 2027 strategy for data and analytics more broadly, this has been very much top of mind. There are several different pillars I would look at, and I don't dispute there are probably others I'll miss just thinking off the cuff.

When you look foundationally at any data and analytics team, there's a longer-term play, and this is how I've been framing it to the team. In many companies, whether you're in banking, technology, or healthcare, the investment in data and AI assets and the need for data and analytics is there. But once you make that investment, the company, big or small, very quickly starts to realize internally that there's a lot to do: there's a lot of data, there's a lot of AI, and that will only increase over time.

So what is the value back, and how do you measure value? That's an open question that is constantly evolving, and to be honest, that's the way we look at it: if we're measuring it today, how can we measure it more effectively in the future? What we've been doing, as it relates to the broader strategy, is tying that value back to the business units themselves.

So where we might measure, or formerly would have measured, one metric in isolation, how do we combine those metrics to tell the bigger story behind what the data governance provided, what the responsible AI practices provided, and what the analytics reporting built for that business unit provided?

Sometimes looking at it more objectively and seeing how you can tie those metrics together is where you see the true efficiency gains, pulling on your other partners and peers. What one team has been capturing might be fit for purpose for their reporting, but not for the use we're trying to evaluate our metrics against.

So how can we have that kind of open conversation between our data producers and consumers, and strive toward better evaluation metrics for the overall value we demonstrate? When we look at responsible AI specifically, the other pieces I would consider: number one was being able to measure, at a granular degree, the adoption of controls, creating visibility into the environments and where they're on target versus still in progress, so that we could be extremely grounded in understanding what controls were applied and reach one hundred percent adoption across the legacy AI systems we had when we started as well as the new ones. And then there's the evolving regulatory landscape,

which is a tricky one. Ultimately, because we built the responsible AI hub, we're now at a much lower level of abstraction, so very granular. What I mean is we have all the data and AI risks flagged: the inherent risks, isolated; the mitigation techniques that were applied to that particular AI system;

and the residual risk. We're looking at mapping that over time and measuring it against industry standards or research best practices that can give dollar values or percentage values: when you apply that one control point, how much closer are you to alignment with some of these regulatory environments?

So that takes what might feel like a task, a new thing your users or your team have to go through, and helps them understand: when I do this, there's a much bigger impact I'm having at play.
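
The hub's internals aren't described beyond inherent risks, applied mitigations, and residual risk, so here is a minimal hypothetical sketch of that bookkeeping. The 0-10 scoring scale, the multiplicative reduction model, and all names are illustrative assumptions, not Thomson Reuters' implementation.

```python
from dataclasses import dataclass

@dataclass
class Mitigation:
    name: str
    risk_reduction: float  # assumed fraction of remaining risk removed (0.0-1.0)

@dataclass
class RiskEntry:
    system: str
    risk: str              # e.g. an inherent risk like "training data bias"
    inherent_score: float  # assumed scale: 0 (none) to 10 (severe), pre-controls
    mitigations: list[Mitigation]

    def residual_score(self) -> float:
        """Residual risk: what remains after each control takes its cut."""
        score = self.inherent_score
        for m in self.mitigations:
            score *= 1.0 - m.risk_reduction
        return score

entry = RiskEntry(
    system="contract-clause-classifier",  # hypothetical system
    risk="training data bias",
    inherent_score=7.0,
    mitigations=[
        Mitigation("rebalanced training set", 0.40),
        Mitigation("post-deployment fairness monitoring", 0.25),
    ],
)
print(entry.residual_score())  # 7.0 * 0.6 * 0.75 = 3.15
```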

[00:11:40] Shannon Katschilo: That's great. Thank you for touching on that. I think that's part of change management as well: really getting internal parties comfortable and helping them understand the context behind those regulatory compliance requirements.

It feels like you've really brought that into the cultural fabric of how you're launching AI. So, shifting gears slightly: TR manages massive amounts of data daily, and we both know there is no AI strategy without a data strategy. A lot of work has to go into building that foundation.

How do you, and TR at large, work to ensure data quality and integrity at such a vast scale of information?

[00:12:28] Carter Cousineau: Yes. I mean, I know you said it, but I would definitely restate it: there is no AI without data. For some reason that still gets a little lost in society at large. Absolutely,

there is no AI without data. So I agree, and with the very simple mentality of garbage in, garbage out, data quality and integrity are paramount for us. Their critical role in creating data-driven insights has always been number one, before AI advancements and still today. There are several things we use.

We have a very comprehensive data governance framework that includes different policies and standards, all embedded in the sets of controls we apply against our datasets and our data practices for any use, similar to what you see in our processes for AI systems, because the two have similarities but also have different risks associated with them.

Our teams continuously engage with data teams across the entire enterprise to bring them along on that journey, ensure there's appropriate training, and make sure they're prioritizing the practices that keep the data fueling our AI models accurate and reliable. What's interesting about Thomson Reuters, and I've always said this, is that it's something the employees care about, our stakeholders care about, and our customers care about.

So you see it across the board, and that's the type of company you want to be a part of, to be honest: one where people actually care about data.
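
The individual controls aren't enumerated in the conversation, but as a minimal sketch of the garbage-in, garbage-out principle, a data quality gate might run checks like these before a dataset is allowed to feed a model. The checks, thresholds, and column names are assumptions, not Thomson Reuters' actual framework.

```python
import pandas as pd

def quality_gate(df: pd.DataFrame, required_cols: list[str],
                 max_null_fraction: float = 0.01) -> list[str]:
    """Return a list of violations; an empty list means the dataset passes."""
    violations = []
    # Completeness: every expected column must be present.
    for col in required_cols:
        if col not in df.columns:
            violations.append(f"missing column: {col}")
    # Validity: null rates must stay under an agreed threshold.
    for col in df.columns:
        null_fraction = df[col].isna().mean()
        if null_fraction > max_null_fraction:
            violations.append(f"{col}: {null_fraction:.0%} nulls over threshold")
    # Uniqueness: no fully duplicated records.
    if df.duplicated().any():
        violations.append("duplicate rows detected")
    return violations

# Hypothetical dataset with a missing column, nulls, and a duplicate row.
df = pd.DataFrame({"doc_id": [1, 2, 2], "text": ["a", None, None]})
for problem in quality_gate(df, required_cols=["doc_id", "text", "date"]):
    print(problem)
```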

[00:14:02] Shannon Katschilo: And you and your company are going through a significant digital transformation, as are a lot of similar organizations across the globe. As this continues, how do you see the role of data governance evolving with that digital transformation?

[00:14:20] Carter Cousineau: Yes. As digital transformation continues to accelerate, data governance is evolving with the regulatory landscape and with the strategic advantage of its positioning. Our governance approach for both data and AI is not one size fits all, and going back to that balance with innovation, that's what allows a more pragmatic approach of reacting to the use case at hand. Some of the risks

are not the same: some are base risks that truly apply across any use case, but others are not, and they're heightened in different environments, and that needs to be appropriately assessed to gauge the digital transformation and react. Moving forward, Thomson Reuters is really looking at data governance that integrates tightly with an AI ethics framework, not just ensuring control but enabling innovation at scale.

So how can we apply these control points, or just the different process changes? Change management was and still is definitely number one when we look at how our teams operate. Through our digital transformation, we spent time with the teams to understand what their workflows are and to try not to disrupt them.

How can we put our governance controls and practices within their data and AI lifecycles, into the workflows they already have in place? Not reinventing the wheel, but working extremely collaboratively with the people on those teams who have a certain way of working, and building those governance points in.

Ideally, and I've always said this, we're not quite there yet, to be honest, but ideally with data governance and AI governance you should not realize you're doing it; it's that seamless in your workflows. That's the ideal state. It takes a while to get there, but people shouldn't realize they're doing it. It just becomes part of their day-to-day.
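
Carter doesn't describe a specific mechanism, but one hypothetical way to make a governance step that seamless is to wrap the pipeline functions a team already runs, so the control fires as a side effect of their existing workflow. Everything below, names included, is an illustrative sketch.

```python
import functools

def governance_checkpoint(control_name: str):
    """Hypothetical decorator: records a governance control around an
    existing pipeline step without changing how the team works."""
    def wrap(step):
        @functools.wraps(step)
        def run(*args, **kwargs):
            result = step(*args, **kwargs)
            # In practice this might report to a central governance hub;
            # printing stands in for that here.
            print(f"[governance] {control_name} recorded for {step.__name__}")
            return result
        return run
    return wrap

@governance_checkpoint("lineage capture")
def transform_claims(rows: list[dict]) -> list[dict]:
    # The team's existing step, unchanged apart from the decorator.
    return [r for r in rows if r.get("amount", 0) > 0]

transform_claims([{"amount": 10}, {"amount": -1}])
```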

[00:16:10] Shannon Katschilo: I'd liken it to when we first started wearing seatbelts back in the day; now you can't even imagine getting into a car without putting one on. I love how you're, again, building that into the fabric of the organization and really incorporating it into the day-to-day of your employees' lives.

[00:16:29] Carter Cousineau: Yep. If I were to run with the seatbelt analogy, I couldn't agree more. And then you look at our kids as they grow up and they're like, I don't need a seatbelt, and we're still trying to convince them: yes, you do. And then they get to the point where they absolutely realize it. Exactly. I'm with you.

[00:16:47] Shannon Katschilo: So, we at Snowflake have greatly valued the partnership with Thomson Reuters. It's so great to learn from and partner with such an innovative organization. I'm curious about your perspective on the Snowflake platform and how it has supported your data and AI initiatives to date. And any chance you can share an example, maybe of a recent project where data and AI played a critical role in innovation?

[00:17:15] Carter Cousineau: Yes. Snowflake has been instrumental in helping us scale and manage our data assets efficiently while providing the flexibility and computing power needed to handle complex data analytics. One recent project, I mean, there are several, and we've been on this journey for several years now, but using Snowflake's platform to streamline data flows has been immensely valuable.

And those data flows, just to blow it up a little for context for those listening, are a central function that supports all business units, so this could be finance, marketing, product, HR, and Snowflake has been able to streamline those data flows for many different groups. You can see the value gains just there, and the insights each provides are very different.

There are nuances to each business unit, and then there's how you could use that data in combination. The platform has been leveraging vast datasets, and for us, it's something we can definitely continue along the journey. Snowflake's scalability is allowing us to bring our extensive data and modeling demands into a build-once, use-many mentality, and as we evolve our data products and our data and AI marketplace, I suspect Snowflake will be just another pivotal reason for how that evolves, making it more efficient to build data products that can scale for the enterprise.

[00:18:47] Shannon Katschilo: So we're going through this massive transformation, and I firmly believe it's going to bring a lot of goodness to society. But one of the things that keeps me up at night is making sure that there is equity through this transformation, compared to maybe other revolutions throughout our history, so that we can have more diversity and this can really uplift our economy as a whole. We at Snowflake are deeply passionate about the upskilling and literacy initiatives that can help provide more equity and give a lot more people access to be part of this. I know you, too, are passionate about this area.

Can you give us maybe some real-life examples of how an organization like Thomson Reuters is fostering data and AI literacy, and how you're bringing business users along with this transformation?

[00:19:45] Carter Cousineau: Yes, I am also very passionate about this. Where I would start is this: your data fed into an AI system, or your data in general, in a report or even just in a dataset, is telling you a story, and that story is where society will have to continuously work, from a responsible AI practice, to ensure it is fair, equitable, and transparent, that there are no imbalances from a diversity perspective, and that when it's used for a certain purpose, no groups are marginalized. I think responsible AI is the exact place to have that conversation, and then the work needs to get done.

And this is what I mean by what your data is telling you: if we look at some of the past unethical examples from different companies over time, well before generative AI, there was a need to go back and add more data to the systems, add more proxies, depending on what the outputs of that system were telling you and the risks that may be present as they relate to responsible AI.

So there's a continuous need to evolve those refinements, and for everyone across the world, whether using these systems internally or as a secondary user, to understand the art of the possible and the art of what is not possible within them, in order to have a valid, fair, and representative response in your decision-making process.

For Thomson Reuters, separately, there's been a strong emphasis on data and AI literacy from day one across the enterprise. There are several really good initiatives we have, even outside of data and analytics, both internally and within our industries. Internally, we run workshops, and we have enterprise-wide training programs and modules custom built for AI governance, AI foundations, and responsible AI principles; they're part of role-based curricula, but open to every employee to take as upskilling.

And many have; we have thousands who have taken our training modules to date. Our goal is to empower every employee, regardless of their role, to understand and effectively leverage AI tools. This is the part of generative AI that I actually quite like: it's allowed us to look at our roles and ask, how could we do things better with this tool? Because it was so usable.

So now it's, how could we use this tool more effectively in our roles, but in a responsible fashion? That's what we're trying to equip everyone for. Other key KPIs we've had include the participation rates I mentioned on training programs, improving data-driven decision-making capabilities, and employee confidence scores. And then we have really good

HR programs as well, outside of data and analytics, where we have mentor-mentee programs providing different mentors across the organization in the areas people want to learn and explore more deeply. So there are many different initiatives. We also have global AI learning days that focus on

those very topics relating to AI, and then just helping make sure that everyone can get to the same point in that journey, recognizing that the journey is different for everyone as well, and supporting their needs as they arise.

[00:23:13] Shannon Katschilo: Fantastic to hear. You touched on this earlier, but what a great way, again, to bring humans into the loop, right? It's part of your change management, really bringing everybody along to take full advantage of this. You touched as well on making their lives and roles more effective, probably reducing their cognitive load so they can be the most efficient employees they can be. It's so great.

There are so many things I think our listeners can take away and start to embrace and bring into their organizations. So, looking ahead three to five years, what emerging ethical challenges in AI do you anticipate, and how is Thomson Reuters preparing for those?

[00:24:00] Carter Cousineau: First, I am definitely anticipating new ones, ones that I'm certainly not sure of; I don't think anyone knows what they would be called or what exactly they are, whether specific gen AI risks or AI risks at large. So there will be new ones. And I think there are already ethical challenges that will be heightened, or that we anticipate will be heightened, around deepfakes,

algorithmic bias in different domains, and of course privacy concerns around real-time data processing. I don't think those are going anywhere anytime soon, because the capabilities of this technology have only widened. Thomson Reuters proactively prepares by refining our framework and constantly investing in additional tools and techniques for how we can mitigate some of these risks.

What is the best approach or best technique for mitigation today might not be in a couple of months, so we constantly scan the research and regulatory environment for the possibilities, because that also varies with your use case. A control that works for a tabular or text model might not work for your NLP model. The more technical approaches you apply differ, and keeping a scan of that makes sure we can at least apply the ones presently available in the market, whether a tool or custom built. We also collaborate closely with our external partners, peers, and partnering companies outside Thomson Reuters to understand what they've applied.

I often give the example of AI supply chain transparency: if it's a third-party system and a risk was flagged in our assessment, we'll go to that vendor and ask how they detected and mitigated that responsible AI risk, so that we can put the control point in place from our side. That starts to pull on that supply chain transparency piece.

[00:25:55] Shannon Katschilo: So what advice would you give other organizations that are just starting to develop their responsible AI framework as they roll out generative AI initiatives?

[00:26:05] Carter Cousineau: For organizations starting on this journey: start as soon as possible. It might feel like there are a lot of different opinions, depending on which regulatory landscape you look at or where you start,

but starting is definitely number one. I recommend defining clear ethical principles at the onset; you can always pivot and improve from there. So, pick that landing point of which ethical concepts fit best within your organization; the way you categorize and define them will certainly differ.

I mean, I've been using this statistic a lot, but a couple of years ago, privacy had over a thousand different definitions in the research. So there is already a lot to choose from, and being selective and intentional would be step one. Then, building a multidisciplinary team that includes your ethicists, your data science experts, and your legal experts, coming together to address those challenges, has been essential.

So, we built such a cross-functional team, first within the responsible AI team, then in a bigger scope within data and analytics, and then of course across Thomson Reuters. When we were building out the team, we brought in as much diversity and as many different experiences as possible, so that when we meet around the room, we look at the challenges more objectively, together, and identify things that maybe we each wouldn't have in isolation.

Those are some of the things I would start on. Then, when it comes to generative AI, prioritize your starting point. Starting, and starting small, is okay as it relates to responsible AI; it allows you to feel more grounded in the program you've built, and you can constantly mature that program.

Starting with a very robust framework and working backwards might not get you the results you're looking for as early as you're looking for them. So those would be, I guess, my few pieces of advice.

[00:28:02] Shannon Katschilo: Amazing. Thank you so much. Fantastic advice and fantastic insight on your journey that I know our audience is going to deeply appreciate.

So, Carter, thank you for joining us today. All the best in your future initiatives, and thank you for being a guest on our podcast. 

[00:28:22] Producer: Calling all developers, business leaders, IT execs, and data scientists: Snowflake World Tour is your chance to learn and network. Discover how Snowflake's AI Data Cloud can transform your career and company. Experience the future. Join us on tour. Learn more at snowflake.com/world-tour.