Sunday, February 16, 2014

Olympics of eDiscovery – One Can Dream

Most evenings over the past week and a half, my wife and I have managed to catch some of the Olympic competition currently taking place in Sochi, Russia, something we tune into with interest every few years.  The Olympics really are a great concept: men and women athletes of diverse backgrounds and cultures converging for a few weeks of competition and sport, putting aside differences, history, and politics (for the most part) to compete and prove who is best at various disciplines.

The competition has inspired me (no, not to compete; that would be too clichéd) to wonder: what if we could have an Olympics of eDiscovery?  In a geeky eDiscovery way, wouldn't that be great?  I imagine a competition among the various software and tool providers to determine who is best at different tasks: collection, processing, culling, review, TAR, and production, to name a few.  This would not be a Gartner-style report (which I do find helpful and a must-read, by the way); instead, the tools would go head to head at the same time and place, using the same data set and hardware horsepower.  Everything would be transparent and the playing field level – no marketing- or PR-spun statistics, and no closed-door exercise where only the “results” are presented.

Medals would be awarded in each category for aspects such as speed, accuracy, efficiency, cost, and ease of use.  The end result would be bragging rights for the software producers and genuinely useful knowledge for consumers like you and me, who would finally have objective data points for apples-to-apples comparisons (to the extent that is possible in this industry), and hopefully a little fun as well.

I invite all software vendors, big and small, to consider this idea and throw your hat in the ring.  If you agree to participate, we, the users, will come.  So kCura, Symantec, Ipro, Kroll, Lexis, FTI, and any others: are you up for it?  I, for one, would love to see this and think it would be of great interest to the eDiscovery community.

Sunday, February 9, 2014

Quality Control in eDiscovery – The Difference Between Luck and Repeatable Success

As an eDiscovery project manager and Director of Client Services responsible for ensuring the successful management of my clients’ eDiscovery needs, having solid, repeatable, and defensible processes and procedures in place is key to my success, my team’s success, and, most importantly, the success of my clients’ projects, individually and collectively. Quality Control (“QC”) is a crucial component of those processes and of the success of any project, and I strongly encourage you to build it into your own eDiscovery processes and procedures so that you and your clients can have full confidence in your eDiscovery.

Price, reputation, and plans are all important things to ask your eDiscovery vendor about, but so is QC, and it is not something you should wait until the end of a project to discuss.  All too often, people at the beginning of a project focus on things like search terms and deadlines and turn to QC only once the project is ready to wrap up.  In reality, QC should be considered from the start and built into every eDiscovery process, whether for preservation, collection, review, or production (or anything else).  The sooner you start QC, the greater its impact and the more time and money it will save you.  While it can serve as a cleanup tool at any point in the process, starting early lets you use its results to identify points of misunderstanding or deficiency in your training or process and prevent further error.  Particularly in review (although not exclusively), the lessons learned during QC can become examples for retraining your team, preventing future error and minimizing the recoding or other rework needed at the end of a project, which could blow your budget and deadlines.

How much QC you perform and how you carry it out are secondary to the fact that you are performing it at all; the amount QC’d and the method used are only means to the end, which is accuracy.  If you are correcting mistakes and delivering a clean product, that is ultimately what matters.  That said, there is no single universal QC method to employ in all cases or situations.  My teams have standard QC processes that we perform across clients and projects, but for each project we also devise QC procedures tailored to that project’s purpose and idiosyncrasies.

My team’s familiarity with our clients, the tools we use, and our eDiscovery subject matter expertise allow us to craft these procedures properly.  However, more and more eDiscovery tools are building in methods and applications to assist even non-savvy users with QC.  One such feature that many document review platforms are starting to incorporate is the ability to create random samples, either by front-end users or on the back end by administrators.  But even if your program does not offer this capability, you can use Excel to create a random sample of your material for QC; QC is not limited to the technologically sophisticated or to those with the funds for expensive eDiscovery software.
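For readers without a review platform or Excel handy, the random-sampling idea above can be sketched in a few lines of Python. This is purely an illustrative example, not any vendor's method: the document IDs, the 5% sample rate, and the minimum sample size are all hypothetical choices you would tune to your own project.

```python
import random

def qc_sample(doc_ids, rate=0.05, minimum=20, seed=None):
    """Draw a simple random sample of documents for QC review.

    rate:    fraction of the population to sample (5% is an
             illustrative choice, not an industry standard)
    minimum: floor so that small projects still yield a
             meaningful sample
    seed:    optional, for a reproducible (and documentable) draw
    """
    rng = random.Random(seed)
    n = max(minimum, int(len(doc_ids) * rate))
    n = min(n, len(doc_ids))  # cannot sample more docs than exist
    return rng.sample(doc_ids, n)

# Example: 1,000 reviewed documents, pull 5% (at least 20) for a
# second-level QC pass.  The "DOC-000001" naming is made up.
docs = [f"DOC-{i:06d}" for i in range(1, 1001)]
sample = qc_sample(docs, rate=0.05, minimum=20, seed=42)
print(len(sample))
```

Fixing the seed is worth noting: it makes the sample reproducible, so you can later document exactly which documents were pulled for QC if your process is ever questioned.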

To close out this article, I would like to stress again that while how you QC is important, the fact that you are doing it, and doing it early in your project, is what matters most.  Performing QC will still have utility if you start late in a project (and at times that may be unavoidable), but in most instances the sooner you start, the better, so you can identify issues and correct them before they perpetuate and blow your budget or deadlines at the end.  That is not to say that performing QC at the start of a project alleviates the need to QC at the end; rather, QC at the beginning sets up a successful, succinct, and efficient QC at the end of a project.

QC may not make your product perfect, and it does not mean mistakes will not happen or will always be caught, but it will minimize those risks while lending an air of reasonableness to your actions, so that if something does go wrong you can stand behind your efforts to avoid the error and point to your repeatable, defensible process.