Each month, OpenAI hosts a Build Hour where team members share insights with the community on the ever-changing world of generative AI (GenAI). This month’s session focused on AI prompt engineering, and with the recent release of GPT-5, I wanted to highlight their guidance so we can all make the most of this new version of the tool.
At TCDI, we understand the value of precision in communication, whether it’s drafting arguments in litigation, managing massive data sets in eDiscovery, ensuring accuracy in compliance reviews, or framing the right questions for GenAI systems. During this Build Hour, experts shared guiding principles for prompting. They’re not rules per se, but practical starting points that will help us get better results working with all LLMs, not just GPT-5.
TCDI’s Tech Lab and Collaboration Center (Tech Lab) is committed to exploring new technology and how it applies in the legal industry. I am looking forward to experimenting with these prompt principles, stress-testing how they apply to real-world legal, compliance, and technology workflows. The Tech Lab is our dedicated environment for piloting emerging technologies where we can test, refine, and share best practices before scaling them across client engagements.
Here are my takeaways on those areas, why they matter, and what they can look like in practice.
1. Avoid Conflicting Instructions
Earlier models of ChatGPT would often “read between the lines” when given imprecise or contradictory instructions. GPT-5, however, interprets words literally. If your guidance pulls in two directions, the model won’t smooth it over. Instead, it will echo the conflict back, weakening results.
Examples:
- Bad: Write a concise one-page summary that includes as much detail as possible.
- Good: Write a one-page summary that highlights three main themes with 2–3 supporting details for each.
Signals that you need more clarity:
- Responses feel inconsistent across iterations
- The model meets one part of your request but ignores others
- You spend extra time editing out contradictions
2. Use the Right Amount of Effort
Think of “reasoning effort” as how much brainpower GPT-5 puts into a task. More effort gives you deeper, more careful answers but can take longer and cost more. Less effort is quicker and cheaper but may be more surface-level. The goal is to match the effort to the task so you don’t waste time or miss important details.
Low Effort: Quick recall or formatting
- List five U.S. states
- Convert these bullets into a one-sentence headline
- Extract dates and parties from this paragraph
Medium Effort: Balanced analysis
- Summarize this 5-page case brief in one paragraph
High Effort: Complex planning and synthesis
- Design a plan for automating contract review, including phased rollout and risk analysis
Signals you need to adjust:
- If the model misses steps, ignores constraints, or oversimplifies, you should increase the effort provided in your prompt.
- If results feel bloated, overexplained, or slow for the task at hand, you should decrease the details given in the prompt.
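For teams calling GPT-5 through OpenAI’s API rather than the chat interface, effort can be set explicitly. The sketch below is a minimal illustration, assuming the Responses API’s `reasoning={"effort": ...}` parameter; the task categories and the task-to-effort mapping are hypothetical heuristics of my own, not OpenAI guidance.

```python
# Minimal sketch: map a task type to a reasoning-effort setting.
# Assumes OpenAI's Responses API accepts reasoning={"effort": ...};
# the task categories and mapping below are illustrative, not official.

EFFORT_BY_TASK = {
    "recall": "low",        # quick lookups, formatting, extraction
    "analysis": "medium",   # summaries, balanced comparisons
    "planning": "high",     # multi-step synthesis and risk analysis
}

def build_request(task_type: str, prompt: str) -> dict:
    """Return keyword arguments for a hypothetical GPT-5 call,
    e.g. client.responses.create(**build_request(...))."""
    effort = EFFORT_BY_TASK.get(task_type, "medium")  # default to balanced
    return {
        "model": "gpt-5",
        "reasoning": {"effort": effort},
        "input": prompt,
    }

request = build_request("recall", "Extract dates and parties from this paragraph.")
print(request["reasoning"]["effort"])  # low
```

The point of the mapping is simply to make the effort decision deliberate rather than leaving every request at the default.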
3. Structure Prompts Intentionally
Formatting isn’t cosmetic; it’s cognitive. The way you structure a prompt tells the model how to organize its reasoning. Loose, unstructured prompts will produce loose, unstructured answers (garbage in = garbage out). Explicit formatting, on the other hand, gives GPT-5 a map to follow.
We’ve found that this matters most when outputs feed into tasks like case chronologies, privilege logs, or QC workflows. Structured prompts reduce rework, improve consistency, and make downstream automation possible.
Example Prompt – Deposition Summaries:
- Summarize the attached deposition transcript
- Length: 500–1,000 words
- Focus: key witness admissions
- Constraints: provide bullets for ‘Key admissions’ (one per bullet) and a narrative summary of testimony flow
Signals you need more structure:
- Outputs vary greatly between iterations
- The model misses constraints like length or focus
- You spend extra time reformatting before using the results
4. Try Meta Prompting
In essence, meta prompting is using an LLM to develop or refine a prompt to achieve better, more consistent results. So, before asking GPT-5 to fix an output, ask it to briefly diagnose why the error happened, then use that diagnosis to drive a targeted revision of the prompt. In Lean Six Sigma terms, this is the “Analyze” phase of DMAIC (Define, Measure, Analyze, Improve, Control), where you identify the root cause of a problem before jumping to a solution.
Why it works:
- Surfaces hidden assumptions. For example, prioritizing liability over damages.
- Prevents patchwork fixes by addressing the root cause, not just the symptom.
- Produces more consistent, explainable improvements, which is crucial for defensibility binders.
When to use meta prompting:
- Repeated misses on the same constraint (length, focus, citation style).
- Hallucinations or overconfident summaries.
- Formatting drift when outputs feed into workflows like chronologies or QC.
The Three-Step Loop
Prompt Example:
- Diagnose (short, specific): ask for reasons in 40 words or fewer.
- Revise: restate constraints and request the corrected info only.
- Verify: have the model audit its output against your constraints with a PASS/FAIL checklist.
Response Example:
- Problem: Deposition summary ignored damages.
- Diagnose: I emphasized liability because the prompt highlighted it first.
- No explicit instruction to discuss damages.
- Assumed damages details were out of scope.
- Revise: Add damages subsection while keeping within the word limit.
- Verify: Checklist confirms both liability and damages are addressed.
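The diagnose-revise-verify loop above lends itself to templating. The sketch below composes the three prompts as plain strings so the loop is repeatable; the wording of each template is hypothetical (my own phrasing, not from the Build Hour), and the actual model calls are left out.

```python
# Sketch of the three-step meta-prompting loop as reusable prompt templates.
# Template wording is illustrative; substitute your own constraints.

def diagnose_prompt(problem: str) -> str:
    """Step 1: ask for a short, specific root-cause diagnosis."""
    return (f"The last output had this problem: {problem} "
            "In 40 words or fewer, list the likely reasons.")

def revise_prompt(constraints: list[str]) -> str:
    """Step 2: restate constraints and request only the corrected content."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return ("Revise the prior output. Restate and satisfy these constraints, "
            f"returning only the corrected content:\n{bullets}")

def verify_prompt(constraints: list[str]) -> str:
    """Step 3: have the model audit itself with a PASS/FAIL checklist."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return ("Audit your output against each constraint below and return a "
            f"PASS/FAIL checklist:\n{bullets}")

constraints = ["500-1,000 words", "cover both liability and damages"]
print(diagnose_prompt("The deposition summary ignored damages."))
```

Keeping the three steps as separate prompts, rather than one mega-prompt, mirrors the root-cause-first discipline of the Analyze phase.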
5. Give Room for Planning and Self-Reflection
GPT-5 works best when it doesn’t have to jump straight into an answer. By giving it space to plan, draft, and think, you encourage deeper reasoning, structured outputs, and fewer blind spots. Think of it like a junior associate preparing a brief: you want the outline and the quality check, not just the final draft.
Three-step approach:
- Plan: Outline steps first.
- Do: Execute the outline.
- Check: Review output against requirements.
Example – Litigation Readiness Checklist:
- Plan: Identify data sources, custodians, retention policies.
- Do: Draft a bulleted checklist.
- Check: Reflect on missing elements like escalation or recovery protocols.
Other examples:
- eDiscovery Workflows
- Plan steps of data collection
- Draft workflow
- Check against compliance standards
- Deposition Summaries
- Plan structure (facts, issues, admissions)
- Draft the summary
- Check against transcript for gaps
- Compliance Reviews
- Plan key areas (contracts, policies, training)
- Draft a review outline
- Check for coverage of HIPAA or GDPR
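The plan/do/check scaffold above can be captured once and reused across tasks. A minimal sketch follows; the template wording is my own illustration, not an official prompt format.

```python
# Sketch: wrap any task in a Plan / Do / Check scaffold.
# The scaffold text is illustrative, not an official prompt format.

SCAFFOLD = """Task: {task}

Work in three labeled phases:
1. Plan: outline the steps you will take before writing anything else.
2. Do: execute the outline.
3. Check: review your output against the requirements and list any gaps."""

def scaffold(task: str) -> str:
    """Return the task wrapped in the Plan / Do / Check scaffold."""
    return SCAFFOLD.format(task=task)

print(scaffold("Draft a litigation readiness checklist."))
```

Because the scaffold is a single string, the same structure can be dropped in front of an eDiscovery workflow, a deposition summary, or a compliance review without retyping it.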
6. Balance Agenticness
Agenticness describes how much initiative GPT-5 takes in moving a task forward. At one extreme, the model may overstep, acting without enough direction. At the other, it can stall progress by deferring back to you constantly. The goal is a balance where the model contributes meaningful work while leaving critical decisions to the human.
The Two Extremes:
- Too Agentic: Drafts a 10-page motion on shaky assumptions.
- Too Deferential: Asks permission after every section.
The Balanced Middle:
- Allow assumptions, but require they be flagged
- Define scope and format clearly, such as “3 pages with bulleted recommendations”
- Ask it to pause at natural checkpoints for approval
- Balanced Prompt Example: Draft a 3-page motion to dismiss. Make assumptions where necessary, but flag them clearly at the end for my review. Do not exceed the scope without my approval.
When to Adjust:
- Dial down if it hallucinates or overreaches.
- Dial up if it stalls or asks for excessive input.
Balanced Use Cases:
- Litigation Prep – Drafts a motion with placeholders for exhibits, flagged as ‘to confirm’.
- eDiscovery Workflows – Proposes a phased workflow, notes open decisions, leaves final selections for you.
- Compliance Review – Produces a draft review with identified gaps, each marked for confirmation.
Prompting GPT-5 is all about developing intuition and learning how to shape instructions so GenAI becomes a better partner in your work. These guiding principles give us a framework for doing that: be precise, calibrate reasoning, provide structure, ask for reflective iteration, and create balanced collaboration.
A Combined Effort
At the heart of it, success comes down to asking better questions. Each prompt is an opportunity to clarify intent, set boundaries, and invite GenAI to think alongside us. When we do this well, the output isn’t just more accurate; it’s more useful, more consistent, and easier to integrate into the workflows that matter.
With techniques like these, we’re building smarter partnerships with technology. Tools like GPT-5 will never replace the nuance, judgment, or creativity of the human process, but they can take on the heavy lifting of synthesis and structure, freeing us to focus on strategy, advocacy, and insight.
As you experiment with these prompting techniques, notice where they save you time, reduce rework, or spark new insights. Share these successes with your team and colleagues and with us in the Tech Lab by sending your results to info@tcdi.com. The more we compare notes and refine our collective approach, the more effective our partnerships with GenAI will become. Prompting is an emerging skill, and every thoughtful experiment by one of us, if shared, can help all of us move forward.
Caragh Landry
Author
Caragh brings over 20 years of eDiscovery and Document Review experience to TCDI. In her role as Chief Legal Process Officer, she oversees workflow creation, service delivery, and development strategy for our processing, hosting, review, production, and litigation management applications. Caragh’s expertise in building new platforms aligns closely with TCDI’s strategy to increase innovation and improve workflow. Her diverse operational experience and hands-on approach with clients is key to continually improving the TCDI user experience. Learn more about Caragh.