In May, I had the privilege of attending the CLOC Global Institute 2024, a premier event that brings together legal professionals, technology experts, and thought leaders to discuss the latest trends and challenges in the legal industry. Throughout the event, one topic consistently dominated conversations: the responsible use of artificial intelligence (AI).

As technology continues to advance, attorneys, non-attorneys, and novices alike have tools at their fingertips they never dreamed of having. These range from legal research tools like Bloomberg Law and Westlaw's CoCounsel to Microsoft's Copilot and the most infamous "free" chatbot of all, ChatGPT.

Given the explosive growth of AI, it's hard for a novice to fully grasp the implications of using tools like ChatGPT. For instance, many people, if not most, do not know where the data is stored, how it is being used, who it is being shared with, or how reliable it is. At a minimum, anyone using the tool should read OpenAI's privacy policy.

A Cautionary Tale

The Mata v. Avianca case gained significant attention in 2023 and highlights critical lessons about the practice of law and the use of AI. The case taught us that it is essential to familiarize yourself with the strengths and limitations of this new technology and to verify your work. In that matter, an attorney used ChatGPT to research legal precedents and cited non-existent cases in a court filing. The judge issued an order to show cause, requiring an explanation for the false citations.

This incident demonstrates the importance of verifying information provided by AI tools, especially in legal matters. You must fact-check to make sure the results are accurate and that the case law cited in the matter actually exists. Most of us have heard about ChatGPT hallucinations, and this is a clear example of the tool fabricating case law.

Using AI Responsibly

Despite the buzz surrounding ChatGPT at CLOC, there was a noticeable lack of concrete guidelines on how to use AI responsibly. In my quest to better understand the topic, I did some research; these are the do's and don'ts I have learned:

  1. Avoid Sharing Personal and Sensitive Information: When using ChatGPT, remember that the data you enter may be stored, processed, and used to train the AI model. As a rule of thumb, if you wouldn’t be comfortable with the information being made public, do not share it within the platform. This includes personal details, confidential client information, intellectual property, or any other sensitive data that could be compromised or misused, potentially leading to legal and ethical issues. Always prioritize data privacy and security.
  2. Use the System for its Intended Purpose: ChatGPT and similar AI tools are designed to assist with various tasks, such as answering questions, providing information, and generating content. It’s essential to use these tools for their intended purposes and not rely on them to make critical decisions, especially in legal or medical contexts.
  3. Use an Anonymous Account: When signing up for AI services, consider using an anonymous account that is not linked to your personal or professional identity. This helps protect your privacy and prevents potential data breaches from being traced back to you. If you need to input sensitive information, anonymize it by removing or replacing identifying details.
  4. Use a Strong Password/Secure Access: As with any online service, it’s imperative that you use a strong, unique password to secure your account. Enable two-factor authentication if available, and avoid sharing your login credentials with others. Be cautious when accessing the service from public or shared devices, and always log out when you’re done.
  5. Fact Check Generated Content: AI models are trained on vast amounts of data, but they can sometimes generate incorrect, outdated, or biased information. It's essential to fact-check the responses provided and verify them against reliable sources. Again, see the reference to Mata v. Avianca.
  6. Use AI Systems Ethically: Attorneys must use AI tools in a manner consistent with their professional responsibilities, ensuring that AI-generated content aligns with rules of competence, diligence, and candor. They should also be transparent about AI assistance while exercising independent judgment, because at the end of the day they remain accountable for their work.
  7. Be Transparent in Your Use: If you use AI-generated content, be transparent about it. Disclose to your clients, colleagues, or audience that certain parts of your content were created using AI tools like ChatGPT. This transparency helps maintain trust and allows others to evaluate the information accordingly.
  8. Ensure You Have Secure Access: In addition to using strong passwords and enabling two-factor authentication, be mindful of who has access to your AI accounts. Regularly review and update access permissions, especially in professional settings, to ensure that only authorized individuals can use the tools on behalf of your organization.
  9. Be Mindful of Data Privacy: Understand how data is stored, processed, and used by familiarizing yourself with the privacy policies and data handling practices of the AI services you use. If you have concerns about data privacy, consider alternative AI providers or self-hosted solutions that give you more control over your data.
  10. Stay Educated: AI technology is rapidly evolving, and new use cases, risks, and best practices emerge regularly. Make an effort to stay informed about the latest developments in AI safety, security, and ethics. Attend workshops, read industry publications, and engage with AI experts to deepen your understanding to ensure that you’re using AI responsibly and effectively.

This last point of staying informed and using AI responsibly cannot be overstated, especially as AI becomes more integrated into our daily lives. In fact, the topic of safe and secure AI has become such a priority that the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in October 2023.

Final Thoughts

As we venture into this uncharted AI territory together, we know AI holds both promise and peril for society. As its use cases continue to unfold, it's important to keep the dialogue open. Lean into organizations that develop AI in a safe and secure way, that are transparent, and that are continuously learning, questioning, testing, and understanding this new technology. I know that at TCDI, the responsible use of AI is at the top of our priority list. It is just one of the many reasons we have been in business for over three decades.

Sue Fong

Author

Sue serves as Director of Legal Services for the West Coast at TCDI. She is based in the San Francisco Bay Area and has over 20 years of experience providing legal services and software to her corporate and law firm clients. Her journey, starting out as an attorney document reviewer, has given her a better understanding of the problems her clients face, especially when it comes to dealing with new technology and the increased volume of data that litigation professionals have to manage. When she is not working, she can be found hiking Mount Tam or cooking up new recipes for family and friends.