The 2026 Sedona Conference Working Group 13 Annual Meeting in Austin was, as always, an engaging and energetic few days filled with candid and thought-provoking dialogue. By deliberately fostering an environment that welcomes diverse viewpoints, the organization has cultivated the ideal place to discuss the challenges and advantages of using AI in the legal workplace.

The conversations at the meeting made one thing clear: the legal industry is grappling with how to operationalize AI responsibly at scale. What stood out wasn’t just the pace of technological advancement, but the depth of engagement around governance, risk, and long-term impact. In many ways, the event both reinforced core industry principles and challenged long-standing assumptions about how legal work gets done.

Reinforcing the Fundamentals While Raising the Bar

The Working Group 13 Meeting reaffirmed enduring legal priorities:

  • Accuracy
  • Defensibility
  • Confidentiality
  • Ethical responsibility

But AI is stress-testing each of these. Discussions around hallucinated citations, AI-generated evidence, and evolving evidentiary standards underscored that traditional safeguards are no longer sufficient on their own.

Information governance, long viewed as a foundational yet underleveraged discipline, is quickly becoming mission critical. Capabilities such as data classification, lifecycle management, and auditability are now necessary prerequisites for deploying AI in any meaningful way.

At the same time, discussions at the event challenged the long-standing assumption that legal expertise alone is sufficient to ensure quality outcomes. Increasingly, quality is a function of system design, including how tools are architected, how workflows are structured, and how outputs are validated. The implication is subtle but significant: legal judgment remains essential, but it must now operate within, and alongside, technical systems that require their own rigor and oversight.

A Shift from Tools to Systems and from Capability to Accountability

One of the clearest shifts in thinking is the move beyond viewing AI as a collection of discrete tools. The focus has turned toward integrated, agentic systems capable of executing multi-step legal tasks. Advances in approaches such as neurosymbolic AI point to a future where systems move beyond generating language to reasoning about, and interacting with, complex legal workflows.

Yet, as capabilities expand, the center of gravity is shifting decisively toward accountability. Multiple sessions emphasized that mere access to powerful models matters less than how they are implemented. Organizations are being evaluated on how well they can govern AI use, validate outputs, and demonstrate defensibility in regulatory or judicial settings.

This is playing out against a rapidly evolving external landscape. Courts are sanctioning the misuse of AI and considering new rules to address AI-generated evidence. Additionally, regulators across jurisdictions are advancing both comprehensive and sector-specific frameworks. The message is clear: expectations are being set in real time, and organizations will be held to them, regardless of whether internal policies have caught up.

Implications of AI in the Legal System

Taken together, these developments signal a structural shift in the practice of law. AI is so much more than a productivity tool. It is quickly becoming part of the operational and evidentiary fabric of the legal system itself, which has three important implications.

  • The risk profile is changing. Errors are no longer confined to individual human lapses. They can be embedded in systems, scaled across workflows, and exposed under scrutiny in court. This raises the stakes for how AI is governed, including validation and documentation.
  • Competitive advantage is being redefined. It will not come from simply adopting AI, but from building trusted, well-governed systems on which clients, courts, and regulators can rely. In that sense, trust becomes a product of architecture as much as expertise.
  • The profession itself is evolving. From law school classrooms to judicial chambers, there is growing recognition that the next generation of legal professionals must be fluent in doctrine and in how AI systems behave, fail, and are controlled. The biggest challenge will be maintaining the analytical rigor and judgment that define the profession, even as automation becomes more deeply embedded in daily practice.

Why this Moment Matters

If there was a unifying thread throughout the event, it was this: the legal industry is entering a phase where thoughtful integration, not rapid adoption, will separate leaders from laggards. The organizations that succeed will be those that treat AI as an extension of their professional obligations by designing, governing, and deploying it with the same care as the legal work itself.

Through its thoughtful, consensus-based scholarship, and its ability to bring together judges, regulators, and practitioners on both sides of the “v,” The Sedona Conference continues to fulfill its mission of moving the law forward in a just and reasoned way.

I am grateful to be a part of such a wonderful organization, and I am particularly proud to have participated as a dialogue leader on a drafting team that is examining existing regulations and their application to AI. The in-progress work product discussed at the meeting is evidence of The Sedona Conference’s commitment to thoughtful progress in the legal industry.

Emily Fedeles Czebiniak

Emily is an attorney with more than 14 years of experience across litigation, eDiscovery, privacy, cybersecurity, and artificial intelligence. At TCDI, she helps shape the company’s technology strategy, focusing on responsible AI governance and the integration of generative AI into innovative workflows. A recognized thought leader in technology and data, Emily regularly shares her insights through speaking engagements and published work.
