As I’ve written about previously, Predictive Coding became a negative buzzword in our industry and has recently been replaced by the term AI. When we speak about AI, most people are still talking about Predictive Coding, whether they know it or not.

Predictive Coding earned this reputation for being a shortcut, or a risky method of conducting document review akin to not reviewing documents at all. A ‘toss over the wall,’ to some degree. But I still think that is because potential users didn’t understand how much this technology, when coupled with the right workflows, could help them meet their goals.

A simple use for TAR (Technology Assisted Review) is Prioritized Review. In this workflow, Responsive documents are submitted to the algorithm as exemplars, and the algorithm then provisions other, similar documents for review based on that training.

Those provisioned, likely Responsive documents are reviewed; the coding decisions on them are submitted for re-training; and the algorithm provisions the next set of likely Responsive documents.

Continue on this path until the algorithm cannot find any more documents similar to the ones you’ve submitted as Responsive and no more likely Responsive documents can be provisioned. At that point you have your useful documents, whether for case strategy or for production purposes.
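To make the shape of that loop concrete, here is a minimal sketch in Python using scikit-learn, with a simple logistic-regression ranker standing in for the TAR engine. The names docs, seed_indices, and review() are hypothetical placeholders, the seed set is assumed to contain both Responsive and Not Responsive examples, and the batch size and cutoff are arbitrary; the point is only the iteration, not any particular product’s implementation.

```python
# Minimal sketch of a prioritized-review loop (illustrative only, not any vendor's workflow).
# Assumed placeholders: `docs` is a list of document texts, `seed_indices` is an initial
# reviewed set containing both Responsive and Not Responsive examples, and `review()`
# stands in for a human reviewer returning 1 (Responsive) or 0 (Not Responsive).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

BATCH_SIZE = 500   # documents provisioned per round (placeholder)
CUTOFF = 0.5       # score below which nothing "similar" remains (placeholder)

vectors = TfidfVectorizer(max_features=50_000).fit_transform(docs)
labels = {i: review(docs[i]) for i in seed_indices}   # doc index -> coding decision

while True:
    # Re-train on everything reviewed so far.
    reviewed = list(labels)
    model = LogisticRegression(max_iter=1000)
    model.fit(vectors[reviewed], [labels[i] for i in reviewed])

    # Score the unreviewed population and provision the highest-scoring batch.
    unreviewed = [i for i in range(len(docs)) if i not in labels]
    if not unreviewed:
        break
    scores = model.predict_proba(vectors[unreviewed])[:, 1]
    ranked = sorted(zip(unreviewed, scores), key=lambda pair: -pair[1])
    batch = [i for i, score in ranked[:BATCH_SIZE] if score >= CUTOFF]
    if not batch:  # nothing left that looks like the Responsive exemplars
        break

    # Human review of the provisioned batch; those decisions feed the next round.
    for i in batch:
        labels[i] = review(docs[i])

responsive = [i for i, decision in labels.items() if decision == 1]
```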

You can then review all the remaining documents, the likely Not Responsive population, or cluster the results for easy stratified sampling. In this workflow, the technology is not doing anything scary or complicated: it is simply finding documents similar to those submitted as Responsive and prioritizing the likely Responsive ones for expedited review. It is easy to explain, easy to follow, and easy to defend.
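If you take the clustering route, one hypothetical way to build the stratified sample is to cluster the likely Not Responsive remainder and pull a fixed number of documents from each cluster for human spot-checking. A sketch, where remainder_vectors is an assumed TF-IDF matrix for those documents and the cluster and sample counts are placeholders:

```python
# Illustrative stratified sampling of the likely Not Responsive remainder.
# `remainder_vectors` is an assumed TF-IDF matrix for those documents.
import numpy as np
from sklearn.cluster import KMeans

N_CLUSTERS = 20          # placeholder stratum count
SAMPLE_PER_CLUSTER = 25  # placeholder validation sample size per cluster

clusters = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit_predict(remainder_vectors)

rng = np.random.default_rng(0)
validation_sample = []
for c in range(N_CLUSTERS):
    members = np.flatnonzero(clusters == c)
    if len(members) == 0:
        continue
    take = min(SAMPLE_PER_CLUSTER, len(members))
    validation_sample.extend(rng.choice(members, size=take, replace=False))
# `validation_sample` now holds document indices for human spot-checking.
```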

Other workflows using TAR technology can be equally helpful and uncomplicated. Suggested Coding, which is simply auto-coding without the definitive coding marker, can be an incredible help in QC workflows. Once first-level review is complete, compare those decisions against what the TAR agent would code and send only the disagreements to QC.

This takes a crowd-sourcing approach to review decisions: if two reviewers (yes, I include the TAR agent as a reviewer) agree on a positive responsiveness decision, then those documents can be sent on for further use, and no time need be wasted on verifying the decision.

If a negative responsiveness decision is reached by both reviewers, then those documents can be discarded from further action. Focus only where the two reviewers disagree; that is the best place for QC or remediation.
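Stripped of product terminology, that routing step is just a comparison of two columns of decisions. A toy sketch, where human_calls and model_calls are assumed to be parallel lists of ‘R’ / ‘NR’ calls (hypothetical labels, not any platform’s schema):

```python
# Route only disagreements between first-level reviewers and the TAR agent to QC.
# `human_calls` and `model_calls` are assumed parallel lists of "R" / "NR" decisions.
agreed_responsive = []      # both "reviewers" agree: ready for further use
agreed_not_responsive = []  # both "reviewers" agree: no further action
qc_queue = []               # disagreement: the best place for QC effort

for doc_id, (human, model) in enumerate(zip(human_calls, model_calls)):
    if human == model == "R":
        agreed_responsive.append(doc_id)
    elif human == model == "NR":
        agreed_not_responsive.append(doc_id)
    else:
        qc_queue.append(doc_id)
```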

Auto-coding and categorization can also be used to pre-review documents and better organize them for further action. In investigations and with received productions, these tools are invaluable in providing transparency and insight into the documents. They create buckets of themes, or groupings of similar documents, and remove the problem of not knowing what you collected or received.
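One way to picture that bucketing is to cluster a received production and label each bucket with its most characteristic terms. A rough sketch, where received_docs is an assumed list of document texts and the feature and cluster counts are arbitrary:

```python
# Illustrative theme bucketing for a received production (a sketch, not a product feature).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(max_features=20_000, stop_words="english")
X = vectorizer.fit_transform(received_docs)   # `received_docs`: assumed list of texts

kmeans = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X)
terms = np.array(vectorizer.get_feature_names_out())

# Label each bucket with the terms nearest its centroid, as a rough theme.
for c, centroid in enumerate(kmeans.cluster_centers_):
    top_terms = terms[np.argsort(centroid)[::-1][:8]]
    print(f"Bucket {c}: {', '.join(top_terms)}")
```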

Nothing in TAR says you can’t look at every document; that is a decision review leads make under pressure to reduce time and cost. Expediency and budget must be balanced against confidence in the review in order to arrive at the right decision on how much, or how little, to rely on the suggestions of TAR tools.

Reporting and process documentation can alleviate much of the perceived risk in Predictive Coding workflows, and, given a little time, users can become comfortable applying these tools in the methods above.

No one is suggesting tossing documents over the wall. Instead, we advocate the use of TAR tools in workflows that give the user a leg up when it comes to transparency and insight. TAR is aptly named: it is a tool designed to help. Now we just need people to stop overcomplicating it!