TCDI Talks | Episode 22

The AI Efficiency Trap: Why Structure is the Real Accelerator

About TCDI Talks: Episode 22

AI is quickly becoming part of everyday work, and for many organizations, it creates a sense that everything is moving faster and getting easier. But speed doesn’t always translate to better outcomes, especially in eDiscovery. In fact, it can introduce new challenges that aren’t immediately visible. In this episode of TCDI Talks, we take a closer look at why AI can feel efficient without actually improving how work gets done.

Host Michael Gibeault sits down with TCDI’s Senior Vice President of Emerging Technologies, Emily Fedeles Czebiniak, to discuss what she calls the “AI efficiency trap.” In this 10-minute interview, they explore how unstructured adoption leads to inconsistent results, where risks tend to go unnoticed, and why governance plays a much larger role than many organizations expect. 

Episode 22 Transcript

0:04 – Michael Gibeault

Well, welcome to TCDI Talks, where we spotlight the people and ideas driving innovation in legal services and technology. I’m your host, Michael Gibeault, and today’s conversation tackles a topic that’s on everybody’s mind but not always fully understood: AI and efficiency. More specifically, why so many organizations feel faster with AI but not necessarily more effective.

So, joining us today is my colleague Emily Fedeles Czebiniak, TCDI’s Senior Vice President of Emerging Technologies, who recently wrote an article entitled, “The AI Efficiency Trap: Why Structure is the Real Accelerator.”

In it, she challenges a common assumption that simply adopting AI leads to efficiency. Her article makes the case that without governance, AI can actually create more friction than it removes. Emily, welcome back.

1:10 – Emily Fedeles Czebiniak

Thanks, I’m really excited to be here today. As you said, this has been top of mind for a lot of people in the industry: this gap between feeling faster and actually being more efficient. So, I’m glad to be here to talk about it.

1:22 – Michael Gibeault

Well, let’s jump in at a high level, Emily. Your article pushes back on the idea that AI automatically creates efficiency. What are organizations getting wrong in that assumption?

1:35 – Emily Fedeles Czebiniak

Yeah, I think the biggest misconception is that efficiency is automatic. Organizations assume that if you adopt AI, you just become more efficient. But what AI actually does is increase speed, not necessarily structure.

So, without structure, speed amplifies existing problems like inconsistency, risk, and having to redo work. So, essentially you can think about it like AI accelerates whatever system you already have in place, whether it’s good or bad.

So, if your processes are fragmented, AI just helps you get fragmented faster.

2:07 – Michael Gibeault

Well, you described the idea of an “efficiency trap,” where individuals feel faster, but organizations become more fragmented. What does that look like in practice?

2:20 – Emily Fedeles Czebiniak

Yeah. You’re right. At the individual level, people are starting to feel incredibly productive. So, they’re drafting faster, they’re summarizing faster, they’re getting answers faster or whatever the case may be.

But at the organizational level, you start to see different teams using different tools. You see inconsistent outputs from those tools, because there are no shared standards across the organization.

So, yes, in the individual instance, work is getting done faster. But it’s also having to be redone, questioned, and validated later. And in that sense, fast work doesn’t hold up as efficient; it’s just deferred work. So, it creates problems.

2:57 – Michael Gibeault

Emily, are there any early warning signs leaders should look for that signal they’re falling into that trap?

3:06 – Emily Fedeles Czebiniak

Yeah, great question. There are a few big ones that I think come up time and time again.

The first one is low-hanging fruit: there’s no clear answer to “What AI tools are we using at an organizational level?” Another is when different teams are getting different answers to the same question. That’s certainly a flag that comes up. Legal or compliance departments finding out about tools after they’re already in use at the organization is another thing to be aware of.

Another is heavy reliance on outputs without clear validation standards. Definitely an issue. And finally, a big one is people trusting the output more than they understand the process.

3:42 – Michael Gibeault

Well, you mention that AI adoption is often organic rather than top-down. How does this create inefficiencies within the organization?

3:52 – Emily Fedeles Czebiniak

Yeah, great question. Organic adoption feels innovative, but oftentimes it’s actually uncoordinated. Each team is just optimizing for its own workflow: marketing uses one tool, legal uses another, ops uses something else. And no one is optimizing for the organization as a whole. So, you end up with local optimization and global inefficiency.

4:16 – Michael Gibeault

You mention hidden risks of AI, like inconsistent outputs, unclear data handling, and even hallucinations being used without validation. Which of these risks do you think organizations are underestimating the most right now?

4:34 – Emily Fedeles Czebiniak

Yeah, I think early on people were really worried about data exposure, and that is certainly a big risk. But I don’t think that’s the biggest risk.

I think the most underestimated risk is overreliance on unvalidated outputs. You get something out of an AI tool and just say, “Okay, thumbs up! This must be correct.” Organizations are moving so fast they’re skipping the verification, the sourcing, the human review, the human-in-the-loop stage.

And that can create bad decisions or inconsistent work product or downstream cleanup of the outputs. So, the risk isn’t just bad output, it’s bad output being trusted blindly.

5:10 – Michael Gibeault

So, one of the most interesting points I found in your article is that when an organization integrates AI, they think they’re just buying a tool, but really they’re entering an ecosystem.

Can you explain more about that concept and the layers many organizations are missing in the ecosystem?

5:31 – Emily Fedeles Czebiniak

Yeah, absolutely. So, this can certainly be a big disconnect for organizations. So, as you said, organizations think they’re just buying a tool. But in reality they’re entering a layered ecosystem.

So, you have the application layer, which is what users see and interact with: the interface. Below that is the LLM provider, which is what powers that application layer. And below that, even further, is the subprocessor layer, where the data is actually stored and processed. That’s the bottom level for the data.

And oftentimes organizations are only evaluating that top level, the outward-facing piece. If you only evaluate the interface, you’re not evaluating the system. And that can cause a lot of risk, because that’s where the inefficiency lives: in the disconnect between the underlying data level, the LLM level, and the application level.

6:25 – Michael Gibeault

Well, let’s talk about governance, because that word tends to make people a little nervous. You argue that governance doesn’t slow AI down. It actually allows for its use at scale. How should leaders rethink governance in the context of AI?

6:43 – Emily Fedeles Czebiniak

Yeah, absolutely. Governance can get a bad reputation. It sounds like control or limitation. And in the industry, we’ve been talking about data governance for a while, dating back to when the data privacy laws went into effect.

And it’s not necessarily a fun topic to roll up your sleeves and dive into, but it’s super important. Because governance is what enables consistency, and repeatability, and trust.

And without governance, AI remains fragmented and doesn’t scale up at an organizational level. So, it’s really important to think about governance not as something that slows AI down, but as what keeps AI from breaking at scale.

7:19 – Michael Gibeault

You also highlight the role of external partners and how they can either accelerate progress or create more complexity. What mistakes do organizations commonly make when managing AI vendors?

7:33 – Emily Fedeles Czebiniak

I think a big one right off the bat is treating vendor management as separate from AI governance. In reality, they just go hand in hand.

Other mistakes I see are not having clearly defined roles between the internal teams and the external vendors, not having clear visibility into the data usage (where it’s going, what it’s being used for), and having fragmented communication with the external vendors. And another big one we see pop up is overestimating what the vendor is actually responsible for.

So, you can outsource expertise, but you cannot outsource accountability. At the end of the day, that still stays with your organization.

8:10 – Michael Gibeault

So, there’s a line that stands out in your article. You write, “If you don’t understand your vendors, you don’t understand your AI.” So, how should organizations think about accountability when so much of AI capability sits outside their walls?

8:29 – Emily Fedeles Czebiniak

Great question. I think this is where we see some organizations needing a mindset shift. So, even if the technology sits outside of your walls, the accountability does not. Again, it stays with the organization. So, the organization is still responsible for how it’s used, what data goes into the tool, how the outputs are relied on or not relied on.

So again, if you don’t understand all of these pieces of your vendor and AI tool process, then you can’t actually understand the AI itself.

8:57 – Michael Gibeault

Emily, as we look ahead, you note that the question is shifting from “Are you using AI?” to “Can you demonstrate control?” How quickly do you think that expectation is becoming the norm?

9:12 – Emily Fedeles Czebiniak

Yeah, I think very quickly, and even faster than some organizations expect. We’re already seeing regulators asking these questions, clients asking these questions, and boards asking these questions. So again, like you said, it’s not just “Are you using AI?” or “How are you using AI?” It’s:

  • How are you controlling it?
  • How are you governing it?
  • What systems and processes are in place around it?

So, all of this, this aspect of control, requires documentation, it requires structure, and it requires governance at the organizational level.

9:41 – Michael Gibeault

So, Emily, if you could leave listeners with one mindset shift about AI and efficiency, what would it be?

9:48 – Emily Fedeles Czebiniak

One takeaway: tough question, but I like it. So, I think I would really want to emphasize that efficiency isn’t about using AI, it’s about controlling how it’s used. So, the organizations that succeed in this space, they won’t just move faster, they’ll move in a way that’s more consistent, defensible, and scalable.

10:07 – Michael Gibeault

Emily, thanks so much for joining us and sharing your perspective today. The big takeaway here is that AI alone doesn’t create efficiency. Structure does. And the organizations that get it right won’t just move faster, they’ll move smarter with consistency, trust, and control.

We hope it inspires our listeners to implement AI in more thoughtful ways. If you’d like to keep up with what’s next at TCDI, visit tcdi.com or connect with us on LinkedIn. Thanks so much for joining us, and we’ll see you next time on TCDI Talks.

10:47 – Emily Fedeles Czebiniak

Thank you.

Meet the Expert Behind the Topic

Emily Fedeles Czebiniak | SVP, Emerging Technologies | TCDI

Emily brings over 14 years of experience as a practicing lawyer with a career spanning litigation, eDiscovery, privacy, cybersecurity, and artificial intelligence (AI). Most recently, she has focused on advising on AI governance and the use of GenAI and large language models (LLMs). She is known for bridging the gap between people and technology to deliver innovative, practical solutions to complex client challenges.

At TCDI, Emily plays a pivotal role in advancing our technology strategy, particularly in integrating AI into workflows and innovative solutions. As a recognized thought leader in the fields of technology, data, and privacy, Emily frequently shares her expertise through speaking engagements and published scholarship, making her a trusted strategic partner to clients and colleagues alike.

Meet Our Host

Michael Gibeault | Senior Vice President, Legal Services | TCDI

As Senior VP, Legal Services, Michael Gibeault works closely with corporate legal and law firm clients alike, providing forensics, eDiscovery, and managed document review solutions while managing a team of Legal Services Directors.

Michael’s extensive career has focused on supporting law firms and corporate legal departments with creative and cost-effective solutions that rely on cutting-edge technology and highly skilled legal professionals. Prior to joining TCDI in 2017, he served in executive positions at DTI Global, Epiq, Robert Half International, LexisNexis, and Martindale Hubbell.
