Sunday, February 16, 2014

Olympics of eDiscovery – One Can Dream

Most evenings over the past week and a half, my wife and I have managed to catch some of the Olympic competition currently taking place in Sochi, Russia, something we tune into with interest every few years.  The Olympics really are a great concept: men and women athletes of diverse backgrounds and cultures converging for a few weeks of competition and sport, putting aside differences, history, and politics (for the most part) to compete and prove who is the best at various disciplines.

The competition has inspired me (no, not to compete, that would be too cliché) to wonder: what if we could have an Olympics of eDiscovery?  In a geeky eDiscovery way, wouldn't that be great?  I imagine a competition amongst the various software and tool providers to determine who is best at different tasks: collection, processing, culling, review, TAR, and production, to name a few.  This would not be a Gartner-style report (which I do find helpful and a must-read, by the way); instead, the tools would go head to head at the same time and place using the same data set and hardware horsepower.  Everything would be transparent and there would be a level playing field – no marketing or PR-spun statistics, and no closed-door exercise where only the “results” are presented.

Medals would be given in each category for different aspects such as speed, accuracy, efficiency, cost, and ease of use.  The end result would be bragging rights for the software producers and real, useful knowledge for consumers like you and me, who would finally have some objective data points to make apples-to-apples comparisons (to the extent that is possible in this industry), and, hopefully, a little fun as well.

I invite all software vendors big and small to consider this idea and throw your hat in the ring.  If you agree to participate, we, the users, will come.  So kCura, Symantec, Ipro, Kroll, Lexis, FTI, and any others, are you up for it?  I for one would love to see this, and think it would be of great interest to the eDiscovery community.

Sunday, February 9, 2014

Quality Control in eDiscovery – The Difference Between Luck and Repeatable Success

As an eDiscovery project manager and Director of Client Services responsible for ensuring the successful management of my clients’ eDiscovery needs, I know that having in place solid processes and procedures that are repeatable and defensible is key to my success, my team’s success, and, most importantly, the success of my clients’ projects, individually and collectively.  Quality Control (“QC”) efforts are a crucial component of my processes and of the success of any project, and I strongly encourage you to build them into your eDiscovery processes and procedures so that you and your clients can have full confidence in your eDiscovery.

Price, reputation, and plans are all important things to question your eDiscovery vendor about, but so is QC, and it is not something you should wait until the end of a project to discuss.  All too often at the beginning of a project, people focus on things like search terms and deadlines, and only turn to QC once the project is ready to wrap up.  In reality, QC should be considered from the start and built into any eDiscovery process, whether for preservation, collection, review, or production (or any other phase).  The sooner you start QC, the greater its impact and the more time and money it will save you.  While it can serve as a cleanup tool at any point in the process, starting it early lets you use its results to identify points of misunderstanding or deficiency in your training or process and prevent further error.  Particularly in review (although not exclusively), the lessons learned during QC can become examples for retraining your team, preventing future error and minimizing the recoding or other rework needed at the end of a project, which could blow budgets and deadlines.

How much QC you perform and how you carry it out are secondary to the fact that you are performing it; amount QC’d and method of QC are only means to the end, which is accuracy.  If you are correcting the mistakes and have a clean product, that is ultimately what matters.  That being said, there is no one universal QC method to employ in all cases or all situations.  My teams have certain standard QC processes that we perform across clients and across projects, but for each project we also devise QC procedures unique to the purpose and idiosyncrasies of that project. 

My team’s familiarity with our clients, the tools we use, and our eDiscovery subject matter expertise allow us to craft these properly.  However, more and more eDiscovery tools are building in methods and applications to assist even non-savvy users with QC.  One such functionality that many document review platforms are starting to incorporate is the ability to create random samples, whether by front-end users or on the back end by administrators.  But even if your program does not offer this capability, you could use Excel to create a random sample of your material for QC; QC is not limited to those who are technologically sophisticated or have the funds to afford expensive eDiscovery software.
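For those who prefer scripting over Excel, the same random-sample technique takes only a few lines.  The sketch below is purely illustrative: the function name, document IDs, and sample size are hypothetical, not drawn from any particular review platform.

```python
import random

def qc_sample(doc_ids, sample_size, seed=None):
    """Return a random sample of document IDs for QC review.

    Supplying a fixed seed makes the draw reproducible, which can help
    later if you need to show how the QC sample was selected.
    """
    rng = random.Random(seed)
    return rng.sample(doc_ids, min(sample_size, len(doc_ids)))

# Example: pull 50 documents out of a 10,000-document review set.
reviewed_docs = [f"DOC-{i:06d}" for i in range(1, 10001)]
sample = qc_sample(reviewed_docs, 50, seed=42)
print(len(sample))
```

A simple random draw like this treats every document equally; depending on the project, you might instead sample more heavily from higher-risk categories, such as documents coded privileged.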

To close out this article, I would like to again stress that while how you QC is important, the fact that you are doing it, and doing it early in your project, is what matters most.  Although performing QC still has utility if you start it late in a project (and indeed at times that may be unavoidable), in most instances the sooner you start the better, so you can identify issues and correct them before they perpetuate and potentially blow your budget or deadlines at the end.  That is not to say that performing QC at the start of a project alleviates the need to QC at the end; rather, QC at the beginning sets up a successful, succinct, and efficient QC at the end of a project.

QC may not make your product perfect, and it does not mean mistakes will not happen or will always be caught, but it will minimize those risks while also lending reasonableness to your actions, so that if something does go wrong you can stand behind your efforts to avoid the error and point to your repeatable, defensible process.

Friday, March 1, 2013

Musing on the First Day of the Third Annual ACEDS Conference


Last evening, while pondering such existential questions as who would win in a fight between the Retrivika pink elephant and the Megaputer sphinx, or who would dominate a closed ring wrestling match between Charles Itriago and Patrick Oot, my mind slowly turned to the panels and participants from the first day of the third annual ACEDS eDiscovery conference.

Speaking with attendees, it seems the conference delivered on educational content (panels covered predictive coding, malpractice, data retention, and project management) and creative delivery formats – the most interesting was a parade of 20-30 experts who each provided a best practice tip in one to two minutes.  The genuine effort ACEDS put into developing new content and formats is commendable and, participants said, makes for an enjoyable conference experience.

The predictive coding panel, like most of its kind, focused very much on the future and what will be next with that technology.  Somewhat conversely, the malpractice session offered advice and tips for preventing and preparing for malpractice and obtaining malpractice insurance today.   Essentially the message was: get your ducks in a row before the issue arises or before an event occurs.  The other panels delivered everyday tips for pragmatic application and use.

One interesting observation came from a vendor attendee, who noted that there were fewer sponsors at the conference this year than last, and said this reflects a trend across the eDiscovery conference industry.  What does this indicate?  Is the conference scene too saturated?  Is the ROI for attending a conference no longer attractive?  Are the conferences not bringing value from the vendor perspective?  More importantly, if this trend of fewer sponsors continues, will it diminish the knowledge base and discussion in the eDiscovery field?  I certainly hope not.

Thursday, February 28, 2013

ACEDS Third Annual Conference Begins – Predictive Coding in the Spotlight


The third annual ACEDS conference kicked off today.  The conference, which takes place in South Florida, brings together top industry experts and focuses on the delivery of eDiscovery-related knowledge, from both a legal and a technology perspective, to eDiscovery novices, experts, and everyone in between.

The agenda for this year’s conference covers a wide array of topics but, not surprisingly, has a heavy focus on predictive coding; indeed, predictive coding has been a hot topic in eDiscovery for at least the last year and will continue to be the main talking point in the industry for the coming year as well.  It would be odd indeed if the conference did not address it.

I am attending the conference as a participant and as a sponsor/vendor for the company I work for.  I thought this would be a good opportunity to end my hiatus from blogging.  I hope to provide you with analysis and summaries of some of the sessions, as well as the thoughts they inspire.

The first panel of the conference discussed predictive coding, providing a primer on what it is and its current state.

The panel aptly noted that there are currently about seven cases with written opinions addressing the topic, but that number will likely be 70 by this time next year.  Right now you can have a firm, detailed grasp of all case law on the subject; in the future, that will not be possible.  This demonstrates that predictive coding is an emerging trend and technology that the courts are catching up to.  The cases thus far point toward predictive coding becoming more important in the sense that, if the technology really is better at identifying responsive material than current practices (with the corollary that that data will be produced), then it should be used.  A potential implication is sanctions for not using the technology, based on the inference that a party not using predictive coding is not turning over relevant data.  Of course, we are probably a long way from any such finding or opinion, but it is a glimpse into judicial thinking and into a future that is distant but growing closer every day.

Personally, I know that proponents will continue to push this technology, and rightly so, but concepts such as proportionality, accessibility, and fairness still override: due to cost or some other factor, predictive coding may not be the best solution for a given matter.  A $50,000 matter is still only a $50,000 matter, and extensive discovery costs will rarely be warranted in such a matter regardless of how effective a technology is or is not.  Likewise, a $100 million matter with millions of documents comprised largely of spreadsheet-type data is not a good use case for predictive coding despite the value and data volume, because at this point in time the technology does not work well on that data type.  There are still many variables to consider on a case-by-case basis when deciding whether to use predictive coding in a given instance; evaluate all of your options, including, but not limited to, predictive coding technologies.

Importantly, as this technology develops, what companies need to start looking for are experts in predictive coding technology, its use, its limits, and when and how to use it efficiently and effectively.  Such experts may or may not exist at this time, but one thing the past development of eDiscovery-related technology has taught us is that today’s experts and today’s top-performing tools may be outdated and archaic next year.  The eDiscovery field, and particularly the applications that support it, evolves at a pace far greater than many other areas of technology, and certainly much faster than virtually all other legal-related technologies.  This makes it difficult for individuals and corporations whose sole focus is not eDiscovery to stay on the cutting edge and ensure they are meeting their needs (whether that means the best of the best technology or something that adequately gets the job done, even if it is not the best tool).  For them, their eDiscovery and technology experts and vendors will be key drivers of their success (or lack thereof) and of their readiness to adopt the best technology.

My advice to you: be aware of and understand the predictive coding concept, so that you can ensure your vendor has the requisite knowledge, is actively participating in the predictive coding discussion, and is on the cutting edge of this trend, vetting and finding the best solutions for you and your case(s).

Friday, July 27, 2012

eDiscovery 2012 – Where We Have Been and Where We are Going – A Look Back At the First Half of the Year and Predictions For the Last Half of the Year


2012 has been an active and interesting year on the eDiscovery front thus far.  What follows are a few trends from the first part of the year and some predictions for the remainder of 2012 and beyond.

Where the eDiscovery Industry has been Over the First Part of 2012

1.  Predictive Coding – It is all the rage and this year’s hot topic in eDiscovery.  Will it revolutionize the industry, and document review in particular?  Possibly.  Is it going away anytime soon?  Nope.  As an eDiscovery practitioner, do you need to know about it?  You bet you do.  The first part of 2012 has witnessed all of the major platform providers rushing to integrate this technology into their products, and some will ultimately have better products and be more successful than others.  Remember, not all so-called predictive coding tools and technologies are created equal.  So, while the trend is to offer predictive coding, time and customer satisfaction will sort out who offers the best product for the right price.  Regardless of which company or companies win this battle, quality predictive coding products are starting to be, and in the future certainly will be, major players in the field for years to come.

2.  Da Silva Moore – Any discussion of predictive coding in 2012 would not be complete without a mention of the Da Silva Moore case.  The most highly discussed and scrutinized eDiscovery case in years, Da Silva Moore once focused on judicial approval of predictive coding but quickly degenerated into a motion battle centered on Judge Peck’s actions rather than on the merits and proper use of predictive coding.  Nevertheless, the case has brought tremendous publicity to predictive coding, and may yet have a larger impact on the technology, as the case, and all its acrimony, churns slowly on without a definitive resolution of the predictive coding issues.

3.  Spoliation and Proportionality – These topics have played second fiddle to predictive coding this year, but case law indicates that courts are considering these principles more and more and are holding litigants to tighter standards.  No longer can clients or their attorneys get away with claims of being unaware or ignorant when it comes to spoliation.  Likewise, litigants are becoming bolder in challenging requests for large amounts of data, and judges are agreeing to limit requests with greater frequency.  Furthermore, a proportionality argument lies at the heart of predictive coding’s value and reason for use; given the ever-expanding amount of data in the world, it is no longer proportional to review every document without the aid of technology and technology assisted review, such as predictive coding.

4.  Consolidation – The software products used in the eDiscovery field and the companies that create them are in an arms race to see who can add the most functionality to their product across the EDRM spectrum.  This creates one-stop shop products, but may also drive niche, one function, products out of the market and raise prices.  Additionally, although the products may do it all, they may not do it all well.  Similarly, law firms are challenging eDiscovery vendors by creating their own eDiscovery practice groups and bringing the latest technology in house in an effort to bring those billable hours back into the firm, but at what cost to clients?

5.  Model Orders, State Rules, and Pilot Programs, Oh My – Since late 2011, a plethora of eDiscovery-related standards, rules, model orders, and programs have been unveiled by different entities around the country, including: the U.S. Court of Appeals for the Federal Circuit, the U.S. District Court for the Southern District of New York, the U.S. District Court for the Eastern District of Texas, the U.S. District Court for the District of Delaware, the State of Pennsylvania, and the State of Florida.  Additionally, the Seventh Circuit recently concluded phase two of its Pilot Program on eDiscovery.  These various efforts are driven by a desire to standardize procedures and practices to contain eDiscovery costs and avoid unnecessary delays and disagreements.  Some will have greater longevity than others, but they are all evidence of a growing judicial and administrative recognition of the impact eDiscovery is having on our legal system and of the need to do something to improve the situation.  Likewise, the diversity of solutions offered is evidence of eDiscovery’s complexity and of the lack of consensus regarding how to approach and manage it.

Where will the eDiscovery Industry Go Over the Next Six Months and Beyond

1.  Da Silva Moore – The Da Silva Moore case will continue to dominate the eDiscovery headlines, both as theater and eventually as precedent (even if unofficial).  It is by far the highest-profile predictive coding case in existence, and everyone in the eDiscovery industry is waiting to see how it turns out.  Given its high profile, there will undoubtedly be much analysis and commentary on the outcome of the predictive coding battle and the case itself.  Hopefully, the scrutiny will shed some light on the cost, accuracy, and efficiency of predictive coding in a real case using real data.  If that occurs, it will be the lasting legacy of Da Silva Moore on the eDiscovery world, one much nobler and of higher value than the soap opera the case is currently perceived as.

2.  The Cream Will Rise to the Top – Certifications, conferences, and eDiscovery education providers will continue to vie for prestige, patronage, and above all your long-term support.  Over the past few years, we have seen numerous eDiscovery organizations and conferences spring up, including, among others, ACEDS and its annual conference, the Carmel Valley eDiscovery Retreat, and the Electronic Discovery Institute’s EDI Leadership Summit.  At times, these events have directly competed against each other and the various organizations and conferences that already exist.  At the same time, longstanding original players like Sedona and EDRM are looking at their purpose and goals and deciding on what and how they should focus their energy in the future to remain relevant and influential.  The eDiscovery conference market has reached a point of saturation, with people in the industry only willing to attend so many events a year and recognizing that there are only so many relevant panel topics.  From a participant’s perspective, why would you spend thousands of dollars to attend a conference that has four to five panels on the same topic (which topic by the way is also discussed at every other industry conference)?  From a vendor’s perspective, why would you spend thousands of dollars for an exhibit at a conference that is primarily attended by other vendors?  These competing organizations and conferences must find ways to differentiate themselves and provide a unique value proposition or the market may force them out.

3.  Smart Phones, Tablets, and Social Media are Game Changers - More and more I am hearing how e-mail will soon be replaced as a communication medium by methods such as texting and tweeting among others.  While I am not ready to declare e-mail dead (or even dying), there is no doubt that data created by non-traditional devices and/or in non-traditional sources (such as smart phones and tablets and on social media sites) will continue to proliferate both in data volume as well as in potential collection sources.  New niche industries and players (X1 is an example) will develop to preserve and collect this data in an accurate and useable format, and practitioners will need to adapt and fit this data and this new technology into their processes and workflows.  Individual social media sites and companies may disappear, as may technology brands and models, but the mobile social media lifestyle itself, and the challenges it poses for eDiscovery will not disappear.  The eDiscovery industry needs to catch up as quickly as it can.

What exactly the next big thing or big case in eDiscovery will be is difficult to predict, but regardless, the eDiscovery industry has been, is, and will continue to be an interesting, evolving, fast-paced industry worth keeping an eye on.

Tuesday, June 12, 2012

Gartner Releases 2012 “Magic Quadrant for E-Discovery Software”


Gartner recently released its now-annual report, “Magic Quadrant for E-Discovery Software.”  The report analyzes the biggest names in the eDiscovery software field and categorizes each into one of four groups: Leaders, Challengers, Visionaries, or Niche Players.  The report focuses heavily on consolidation within the industry as well as on the EDRM lifecycle, placing a high value on companies and software that service the entire EDRM lifecycle.

The writers designated six companies as leaders:

- AccessData
- Autonomy
- Guidance Software
- Recommind
- Symantec (includes Clearwell)
- ZyLAB

To be a leader, a company had to offer functionality that covers the complete EDRM lifecycle.  Additionally, offering predictive coding technology was an important positive factor in the analysis.

Some changes from the 2011 report include the exclusion of Epiq and IPRO, because they no longer met at least one criterion for inclusion in the Magic Quadrant; the inclusion of KPMG and UBIC; and the change in status for FTI and kCura from leaders to challengers.

kCura and FTI were no longer considered leaders because both focus only on the right-hand side of the EDRM, rather than on the complete model.  This emphasizes how much weight the Gartner writers placed on servicing the entire EDRM lifecycle.  To be clear, the report noted that kCura’s Relativity product is still a best-in-class product.  It also spoke very highly of FTI, noting “[t]he company performs well all over the world, whereas others in its class do not necessarily have the presence or ‘bench strength’ to cover the globe, which is what many corporations need.”  Nevertheless, it likewise noted that many vendors are responding to the market with “broader end-to-end” functionality.

I agree with the report that the industry is moving toward greater consolidation and products that do it all, and I have written about that movement on this blog (http://ediscoverynewssource.blogspot.com/2012/04/consolidation-of-services-and.html).  However, I believe that Gartner placed too much emphasis on this factor by making it a requirement for being a leader in the Magic Quadrant.  Certainly, one-stop products and companies that do it all offer convenience, and perhaps cost savings, and can absolutely be the best choice for you and your company.  Likewise, I continue to think that more and more products will move in that direction.  However, at this point in time, choosing a product that does it all means sacrificing quality and functionality for convenience; products and companies that service the entire EDRM lifecycle may be competent in each area, but they are not going to be the best at each.  Depending on your situation, choosing multiple products, each the best available for its task, may be a better option.  Ask yourself: do you want one product that does everything but only one thing really well, or three or four products that are each the best at what they do?  There is no single answer, but it is something to consider, and it will remain a choice you have to make until one product is the best at everything, which could take a while.

Although the Gartner report is subjective and by no means does it analyze every product or company in the industry, overall, the creators did a good job and the report provides some interesting information and analysis.  The report concludes that the eDiscovery software industry will remain relevant while becoming more competitive, and that consolidation and the proliferation of one-stop shops and products will continue.  This prediction is spot on.

Sunday, May 20, 2012

Contract Attorneys – The Latest Addition to the Endangered Species List

Last week I read an article on law.com titled “Does Predictive Coding Spell Doom for Entry-Level Associates?”  The article was prompted in part by the attention predictive coding is currently receiving as the eDiscovery topic du jour and by the starring role it has played in the increasingly soap-opera-like Da Silva Moore case.  The article concluded that entry-level associates are still necessary and vital assets, even with the rise of predictive coding.

I agree with the article’s conclusion, and am happy for the associates, but what about their less well-placed colleagues, contract attorneys?  The threat to survival that contract attorneys face comes not just from predictive coding but from law schools that spill out new graduates like a broken faucet, and from employers that take advantage of the situation by offering unscrupulously low wages, knowing that for every position they have, there are several applicants willing to fill it at almost any rate.  So, is there still a place for contract attorneys?  Will predictive coding and the deluge of law school graduates wipe out their positions, or depress their value to the point where the attorneys would make more money working at McDonald’s?  I hope the answer is no, and it should be no if the legal community takes a moment to realize it needs to treat contract attorneys like the non-fungible assets they can be, rather than as pariahs undeserving of earning even $20 an hour.

Despite their persona non grata reputation, a quality contract attorney is worth their weight in gold, and the legal industry should do everything it can to ensure they do not go the way of the dodo, whether because of technology, wages, or anything else.  Contract attorneys’ hands-on expertise and knowledge of review platforms and software can add great efficiency and effectiveness to a project.  Their in-depth familiarity with the documents and details of a case can be illuminating, and their understanding of the eDiscovery process can be a difference maker.  The truly good contract attorneys are knowledgeable experts who can be leveraged to your advantage and provide valuable input and consultation on your case and how you prepare for it.  More than hired mercenaries whose goal is to plow through data as quickly as possible, contract attorneys can be your eyes and ears in the data.

At the end of the day, you get what you pay for, and nowhere is that more true than with contract attorneys.  You may be able to fill positions offering wages as low as $15 an hour, but that will not get you much more than a warm body.  At such a low rate of pay, a contract attorney will have every incentive to look anywhere and everywhere for a different job.  They will lack quality, consistency, motivation, and loyalty, resulting in a poor-quality review, however cheap.

Alternatively, as with most positions in life, the more faith and responsibility you show contract attorneys (along with paying them a decent wage for someone with a law degree), the more you obtain from them and the more value they will add to your case.  I urge you to look beyond the mere numerical efficiencies that technology such as predictive coding can provide, to look beyond the hourly rate you are paying, and to focus instead on the intangible value added to your overall case.  That is where you find the true worth of your contract attorneys, and where you will find that, utilized properly, the good ones are invaluable and indispensable.  Do not get me wrong: I am not suggesting that you should forgo the use of technology or that you should offer your contract attorneys partner-level compensation.  I am simply saying that technology should be used to supplement and enhance your contract attorneys’ value and capabilities, not to replace them.

Despite advances in technology, the human element of eDiscovery remains more vital than ever.  A key component of this human element is the contract attorney.  Even with the advance of predictive coding and similar technologies, skilled contract attorneys should remain valuable commodities, undeserving of a place on any endangered species list.