There are a few simple tricks you can try to improve the pace, accuracy and consistency of your document review team.

1. Batch by Thread

If you aren't reviewing only the most inclusive email (the longest email in a conversation, the one that contains the content of all the others), then batching by complete thread is essential to fast and consistent review.

Most processing and review tools can identify thread groups, and this should be used to batch every email in a conversation to a single reviewer. Email is a conversation, and to make a good in/out decision on any message in that conversation, the reviewer should see and code the thread in full context.

Splitting a thread across batches causes a lot of re-reading of individual emails (wasted time) and can cause inconsistent coding (high risk).
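
To make the mechanics concrete, here is a minimal Python sketch of thread batching, assuming each document carries a conversation identifier from your processing tool (the field names here are hypothetical):

```python
from collections import defaultdict

def batch_by_thread(documents, reviewers):
    """Group documents by conversation and assign each complete thread
    to a single reviewer so no conversation is split across batches."""
    threads = defaultdict(list)
    for doc in documents:
        # 'thread_id' is a placeholder for whatever conversation
        # identifier your processing or review tool produces.
        threads[doc["thread_id"]].append(doc)

    assignments = defaultdict(list)
    for i, docs in enumerate(threads.values()):
        # Simple round-robin by thread; a real batcher might also
        # balance by document count per reviewer.
        assignments[reviewers[i % len(reviewers)]].extend(docs)
    return assignments

# Toy example:
docs = [
    {"doc_id": 1, "thread_id": "T-100"},
    {"doc_id": 2, "thread_id": "T-100"},
    {"doc_id": 3, "thread_id": "T-200"},
]
print(batch_by_thread(docs, ["Reviewer A", "Reviewer B"]))
```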

2. Batch by Near Dupe

For documents that are not part of email threads, batching by Near Duplicates will bring all documents that are very similar in content to a single reviewer. In most systems, Near Duplicates are documents that have 90% or more of their text in common.

Great examples are drafts of a document (group all the drafts together) and copies of the same text saved in different programs with different formatting, which therefore will not hash as exact duplicates (think a PDF of a Word document).

Batching these documents together for a single reviewer allows them to move through the set more quickly (since the documents are very similar) and improves consistency, because a) similar documents will likely be coded similarly and b) the reviewer can copy coding or batch tag across the group.
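
To illustrate what a 90% threshold means in practice, here is a simplified Python sketch that greedily groups documents by word-shingle overlap. It is only an approximation of the idea; real near-duplicate engines are far more sophisticated than a pairwise comparison like this.

```python
def shingles(text, size=5):
    """Return the set of overlapping word 5-grams for a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + size])
            for i in range(max(1, len(words) - size + 1))}

def similarity(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def group_near_dupes(documents, threshold=0.90):
    """Greedily group documents whose text overlap meets the threshold,
    so each group can be batched to a single reviewer."""
    groups = []  # list of (representative shingle set, [doc ids])
    for doc in documents:
        s = shingles(doc["text"])
        for rep, members in groups:
            if similarity(s, rep) >= threshold:
                members.append(doc["doc_id"])
                break
        else:
            groups.append((s, [doc["doc_id"]]))
    return [members for _, members in groups]
```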

3. Allow Batch Coding

Yes, there are some Reviewers out there who, if you allow them to batch tag, will sacrifice quality for speed to ‘game the system’. But don’t let a few bad apples spoil the bunch. QC will likely catch the errors anyway, and inflated throughput numbers will throw a flag on the play. Rogue Reviewers are not the pervasive issue we make them out to be.

For the most part Reviewers are attorneys, and even when they are not (paralegals or law students), this is their job. They, like you, want to do good work and strive for high quality. Additionally, their performance dictates whether they get more work, and most reviewers want to be asked back.

Allowing them to batch tag augments their ability to go faster with higher consistency, which are two things we demand of them anyway, so why not help them achieve the goals we set for them? If you find a problem with a reviewer’s work and you think batch tagging is the cause, then disallow batch coding for that one reviewer, not for the whole team.
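
The mechanics of batch tagging depend entirely on your review platform, but conceptually it is just one coding decision applied across a group, with a record of who applied it. Here is a hypothetical Python sketch, useful mainly to show why QC can still audit batch-tagged documents:

```python
from datetime import datetime, timezone

def batch_tag(doc_ids, field, value, reviewer, audit_log):
    """Apply a single coding decision to every document in a batch
    and log each action so QC can sample batch-tagged documents later."""
    timestamp = datetime.now(timezone.utc).isoformat()
    coding = {doc_id: {field: value} for doc_id in doc_ids}
    for doc_id in doc_ids:
        audit_log.append({
            "doc_id": doc_id,
            "field": field,
            "value": value,
            "reviewer": reviewer,
            "batch_tagged": True,  # flag lets QC target these for sampling
            "timestamp": timestamp,
        })
    return coding

# Toy example:
log = []
batch_tag([101, 102, 103], "responsiveness", "Not Responsive", "Reviewer A", log)
```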

4. Use Visual Groupings

If your review platform has visual groupings, use them.

Intelligently batching by thread and near duplicate is great (as mentioned above), but even that only enhances speed and consistency so much (usually ~30% over straight linear review). If your review platform has visual groupings for Reviewers, this can engage the reviewer even more and empower them to increase their speed and consistency. Most review platforms that offer visual groupings also offer concept clustering.

Use them.

This presents documents to the reviewer in related groupings. At TCDI, we make all fields ‘clusterable’, including concepts, because we know that other factors may dictate Responsiveness or Privilege (like Custodian, Date, Title or Previous Coding).

More and more of our everyday lives, and our work, is leaning toward the visual, and I don’t think document review should be any different. The traditional document results or batch grid has its time and place, but visual grouping is where we need to move for better decisions.
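
Concept clustering implementations vary by platform, so take this only as a rough illustration of the underlying idea: represent each document's text as a vector and group similar vectors. The sketch below uses scikit-learn (TF-IDF plus k-means); real platforms layer metadata fields like Custodian or Date on top of this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def concept_clusters(texts, n_clusters=5):
    """Cluster documents by textual similarity so related documents
    can be presented to a reviewer as a visual group."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return model.fit_predict(vectors)  # one cluster label per document
```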

5. Add a Little Light Competition

I once saw a Review Manager post review paces on a board like the horse-race squirt gun game at a carnival. She cut out a horse for each reviewer, drew 15 lines on the board, and each line represented a pace for the day before, ranging from 25 docs an hour up to 250. She moved the horses around each day based on each reviewer’s pace, so when the reviewers came in each morning, they could see where they fell within the pack.

At first I thought it would make people uncomfortable, and maybe it did, but it also showed everyone where they stood within the team, and overall it improved review rates.

As I said above, most reviewers want to be asked back to work on the next project and they all know that they are being judged on pace, as well as accuracy.

Letting them know how they are stacking up, and giving them a chance to improve (adding a little healthy competition) can be a good thing.

In the horse race example I gave, the fact that the horses had names (identifying each individual’s pace to the whole team) still makes me uncomfortable, which is why our review platform keeps this view anonymous.

You can see your pace and quality compared to the rest of the team, but the other names are anonymized. I think this level of transparency amongst a review team can help Reviewers understand when they are, and are not, meeting expectations and help them better self-manage.
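
As a toy illustration of the anonymized approach, here is a short Python sketch that ranks reviewers by pace (documents per hour) but only reveals a name to the reviewer it belongs to; the names and numbers are made up:

```python
def pace_leaderboard(stats, current_reviewer):
    """Rank reviewers by docs per hour, showing only the current
    reviewer's own name; everyone else appears anonymized."""
    paces = [(name, docs / hours if hours else 0.0)
             for name, (docs, hours) in stats.items()]
    paces.sort(key=lambda item: item[1], reverse=True)
    lines = []
    for rank, (name, pace) in enumerate(paces, start=1):
        label = name if name == current_reviewer else f"Reviewer #{rank}"
        lines.append(f"{rank}. {label}: {pace:.0f} docs/hour")
    return "\n".join(lines)

# Toy example:
stats = {"Alice": (400, 8), "Bob": (260, 8), "Cara": (180, 8)}
print(pace_leaderboard(stats, current_reviewer="Bob"))
```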

Each of the hacks above should yield increased throughput and/or consistency. If you combine several of them, I would expect to see considerable improvements to your timeline, quality rates and the overall cost of the project.

Happy Reviewing!