TCDI Talks | Episode 20

Fair by Design: Reducing Bias in Modern Review Workflows

About TCDI Talks: Episode 20

What happens when human assumptions meet AI at scale? In this episode of TCDI Talks, host Michael Gibeault sits down with Caragh Landry, TCDI’s Chief Legal Process Officer, to explore what bias looks like in document review today.

From subtle reviewer assumptions to AI systems that amplify decisions across millions of documents, Caragh explains how bias quietly shapes outcomes and why many teams may not realize it’s happening. She shares practical ways to design fairer, more defensible workflows, the role diverse review teams play in reducing blind spots, and why greater transparency in AI may be the next frontier in legal tech.

Episode 20 Transcript

0:04 – Michael Gibeault

Welcome to TCDI Talks, where we spotlight the people and ideas driving innovation in legal services and technology. Today, we’re diving into a topic that’s becoming impossible for legal teams to ignore: bias in document review, both human and AI-driven, and what it really takes to design fair, defensible workflows.

I’m joined by my colleague, Caragh Landry, TCDI’s Chief Legal Process Officer, and the author of our latest article, “Training for Fairness: Reducing Reviewer and AI Bias in Document Review.” Welcome, Caragh.

0:42 – Caragh Landry

Hi! Thank you, Michael.

0:43 – Michael Gibeault

Well, we’re glad you’re here. Caragh brings nearly 30 years of experience in legal services with deep expertise in workflow design, technology-assisted review, and continuous improvement.

In this episode, we’ll talk about where bias shows up, why it matters, and how legal teams can build smarter, more equitable review processes. So, let’s dive right in, Caragh. Your article tackles bias in document review, both human and AI. What made you choose this topic?

1:17 – Caragh Landry

That’s a great question. I chose it because I keep getting asked about it. So, it seemed topical to answer it.

But really, I mean, AI isn’t new anymore, and it seems to be following the same arc that predictive coding and active learning followed back around 2010 to 2015.

People are embracing these new tools because they’re obviously solving real problems; they’re saving time and money. And as they’re using them, or as they’re about to use them, they’re trying to understand them more. And with any tool that amplifies a single reviewer’s decisions, there’s obviously concern for bias.

So, as the professionals in the market, we’re going back and reusing some of the same answers from before, because the technology is similar and the answers still hold. So, bias does matter, and I’m glad people are talking about it.

2:11 – Michael Gibeault

Well, how would you sum up the core problem with bias in review workflows?

2:16 – Caragh Landry

The biggest problem is that people don’t know it exists, and I think that’s true across the board, in every facet of our lives: people don’t understand their own bias.

It happens quietly and it grows. And when you’re thinking about bias, if you don’t identify that you have it, or that it can exist, you can’t account for it up front, which means it’s going to take a long time to identify, if you identify it at all.

So, I think the topic of bias is an interesting one in pretty much everything that I do. But for document review, how it impacts and influences the results that we’re giving to our clients is important. It can influence how documents are coded, and it can skew the outcome.

3:04 – Michael Gibeault

Well, you’ve spent decades designing workflows and process improvement. How does bias show up in ways people don’t expect?

3:13 – Caragh Landry

Again, the biggest unexpected thing is when people aren’t looking for it, and then somebody points out that, you know, there may be some bias here.

Reviewer bias shows up as patterns, right? Patterns that link to experience or an expectation. Like in training materials, if you tell a reviewer to expect risk from a certain business group, they’re going to be on the lookout for risk in that business group. And a bias like that can be useful, because you do want them to find it. But they’re also going to find risk where it doesn’t exist.

And then, vice versa, you know, I mentioned executive bias in the article. People tend to think higher-ups are more important than operations folks or technical folks.

And so, as they’re coming across executive documents, they’re saying, “Oh, that’s important, because that person is important.” And so, they’ll flag it as hot or as responsive. But really, that person may have a lot less to do with the issue at hand than the people actually creating it or running it. And so, that sort of bias can really have an impact, because people aren’t looking for documents where they really should be.

Another example is financial documents, right? If you’re not a financial analyst, if you’re not in the financial industry, if you don’t love math, you’re going to find those documents boring, right? So, you’re going to maybe skate through documents that have really important content in them, because the content is financial.

So, it’s these types of things that surprise people when you point out, “Well, you know, this whole part of the review team, or this group of people, or even this one individual, they’re tagging these documents as not responsive, and they’re clearly responsive.”

And then you run QC, and you run metrics, and you have lots of different people looking at documents in lots of different ways. And you bring it to their attention that these documents are being missed. And when you talk to the reviewer and say, “Oh, you’re missing these types of documents,” it turns out it’s because, you know, they don’t like math.

So, there’s definitely unexpected bias, and those are just a couple of examples.

5:28 – Michael Gibeault

Well, Caragh, what role does diversity on the review team itself play in reducing those biases?

5:35 – Caragh Landry

I touched on this in my last couple of answers, but, you know, diversity matters. And again, nothing I’m talking about here is exclusive to AI. Diversity matters. Getting lots of different perspectives helps reduce bias.

We look for it when we create review teams. We look for diverse review teams, right? We don’t want it to be all men. We don’t want it to be all women. We don’t want it to be one ethnicity and not a bunch of other ethnicities. We don’t want it to be people who have only ever worked on employment law.

We want diversity in backgrounds. We want diversity in geographical locations. We want diversity in gender, education. It matters. So, I think it plays a huge role, and I think you have to proactively look for, or proactively look to create, diversity rather than, you know, accidentally creating bias by not focusing on it.

6:34 – Michael Gibeault

Well, what’s the next frontier in fairness in legal tech that you’re eager to explore?

6:41 – Caragh Landry

Oh, that’s a good one. There are so many things, but since we’re talking about bias, I’ll focus on bias. I would love to see ways that AI could help us identify bias rather than create it.

I’d love to see different explanations, or a change in the way that it talks to us. Right now, it tells us its reasoning, its rationale for making decisions. Like, if it marks something as responsive or privileged, it gives you an explanation of what it thinks was responsive or privileged.

I would like that in terms of bias. Like, “Tell me why you think that the reviewer made this decision,” or “why did you take that thinking of the reviewer and amplify it across, you know, these millions of documents?”

I’d like to understand more of its thinking process. And I think that is the next frontier, because now that people are embracing AI, and they’re starting to use it, and they’re using it in real and impactful ways, I think they’re going to want to understand how it works more.

The next frontier for me, I think, is more transparency in terms of bias. I’d love to see how it thinks a little bit more, so that I know that it’s on the right track. And again, I would love for it to be able to point out ways, and maybe reviewers or teams or segments or document types, that it feels are being influenced negatively by bias, or maybe even positively.

8:10 – Michael Gibeault

Well, Caragh, thanks so much for joining us and sharing your perspective on reducing bias in AI for document review. This has been a great conversation, and there’s a lot here for organizations to think about when balancing technology and humans in review. We hope it inspires our listeners to implement AI in more thoughtful ways.

If you’d like to keep up with what’s next at TCDI, visit TCDI.com or connect with us on LinkedIn.

I’m your host, Michael Gibeault. Thanks for joining us.

8:43 – Caragh Landry

Thanks, Michael.

8:44 – Michael Gibeault

Thank you.

Meet the Expert Behind the Topic

Caragh Landry | Chief Legal Process Officer | TCDI

With nearly 30 years of experience in the legal services field, Caragh Landry serves as the Chief Legal Process Officer at TCDI. She is an expert in workflow design and continuous improvement programs, focusing on integrating technology and engineering processes for legal operations. Caragh is a frequent industry speaker and thought leader, presenting on Technology-Assisted Review (TAR), Gen AI, data privacy, and innovative lean process workflows.

In her role at TCDI, Caragh oversees workflow creation, service delivery, and development strategy for the managed document review team and other service offerings. She brings extensive expertise in building new platforms, implementing emerging technologies to enhance efficiency, and designing processes with an innovative, hands-on approach.

Meet Our Host

Michael Gibeault | Senior Vice President, Legal Services | TCDI

As Senior VP, Legal Services, Michael Gibeault works closely with corporate legal and law firm clients alike, providing forensics, eDiscovery, and managed document review solutions while managing a team of Legal Services Directors.

Michael’s career has focused on supporting law firms and corporate legal departments with creative and cost-effective solutions that rely on cutting-edge technology and highly skilled legal professionals. Prior to joining TCDI in 2017, he served in executive positions at DTI Global, Epiq, Robert Half International, LexisNexis, and Martindale-Hubbell.
