Thursday, February 28, 2013

ACEDS Third Annual Conference Begins – Predictive Coding in the Spotlight


The third annual ACEDS conference kicked off today.  The conference, which takes place in South Florida, brings together top industry experts and focuses on delivering eDiscovery-related knowledge, from both legal and technology perspectives, to eDiscovery novices, experts, and everyone in between.

The agenda for this year’s conference covers a wide array of topics but, not surprisingly, has a heavy focus on predictive coding; indeed, predictive coding has been a hot topic in eDiscovery for at least the last year and will continue to be the main talking point in the industry in the coming year as well.  It would be odd indeed if the conference did not address this topic.

I am attending the conference as a participant and as a sponsor/vendor for the company I work for.  I thought this would be a good opportunity to end my hiatus from blogging.  I hope to provide you with analysis and summaries of some of the sessions, as well as thoughts inspired by them.
The first panel of the conference discussed predictive coding, providing a primer on what it is and on its current state.

The panel aptly noted that there are currently about seven cases with written opinions addressing the topic, but that number will likely be 70 by this time next year.  Right now you can have a firm, detailed grasp of all the case law on the subject; in the future, that will not be the case.  This demonstrates that predictive coding is an emerging trend and technology that the courts are still catching up to.  The cases thus far point toward predictive coding becoming more important in this sense: if the technology really is better than current practices at identifying responsive material (with the corollary that that material will be produced), then it should be used.  One potential implication is sanctions for parties who do not use the technology, on the inference that a party not using predictive coding is not turning over all relevant data.  Of course, we are probably a long way from any such finding or opinion, but it is a glimpse into judicial thinking and a future that is distant but growing closer every day.
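For readers approaching the primer from the technology side, the core of most predictive coding offerings is a supervised text classifier: attorneys code a sample of documents, the system learns from those decisions, and the model then scores the rest of the population.  The following is a minimal, purely illustrative sketch of that workflow using generic Python/scikit-learn components and made-up documents; it is not any vendor's actual product or process:

```python
# Purely illustrative sketch of the idea behind predictive coding: train a text
# classifier on attorney coding decisions, then score the unreviewed population.
# Not any vendor's implementation; documents and field names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set coded by attorneys: (document text, 1 = responsive, 0 = not)
seed_set = [
    ("Q3 pricing discussion with the distributor", 1),
    ("Company picnic signup sheet", 0),
    ("Draft supply agreement amendments", 1),
    ("Cafeteria menu for next week", 0),
]
texts, labels = zip(*seed_set)

vectorizer = TfidfVectorizer()            # turn document text into term-weight vectors
features = vectorizer.fit_transform(texts)

model = LogisticRegression()              # simple classifier learned from the attorney decisions
model.fit(features, labels)

# Score the rest of the collection; high-scoring documents get prioritized for review.
unreviewed = ["Updated pricing terms attached", "Holiday party photos"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in zip(unreviewed, scores):
    print(f"{score:.2f}  {doc}")
```

The point of the sketch is simply the workflow: human coding decisions train a model that then ranks or classifies the remaining documents, and it is the defensibility of that workflow that courts are being asked to evaluate.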

Personally, I know that proponents will continue to push this technology, and rightly so, but concepts such as proportionality, accessibility, and fairness still override; due to cost or some other factor, predictive coding may not be the best solution for a given matter.  A $50,000 matter is still only a $50,000 matter, and extensive discovery costs will rarely be warranted in such a matter regardless of how effective a technology is.  Likewise, a $100 million matter with millions of documents comprised largely of spreadsheet-type data is not a good use case for predictive coding despite the value and data volume, because at this point in time the technology does not work well on that data type.  There are still many variables to consider on a case-by-case basis when deciding whether to use predictive coding in a given instance; evaluate all of your options, including, but not limited to, predictive coding technologies.

Importantly, as this technology develops, companies need to start looking for experts in predictive coding technology: its use, its limits, and when and how to use it efficiently and effectively.  Such experts may or may not exist at this time, but one thing the past development of eDiscovery-related technology has taught us is that today’s experts and today’s top-performing tools may be outdated and archaic next year.  The eDiscovery field, and particularly the applications that support it, is evolving at a pace far greater than many other areas of technology, and certainly much faster than virtually all other legal-related technologies.  This makes it difficult for individuals and corporations whose sole focus is not eDiscovery to stay on the cutting edge and ensure they are meeting their needs (whether that means the best of the best technology or something that at a minimum adequately gets the job done, even if it is not the best tool).  For such individuals and corporations, their eDiscovery and technology experts and vendors will be key drivers of their success (or lack thereof) and their readiness to adopt the best technology.

My advice to you: be aware of and understand the predictive coding concept, so that you can ensure your vendor has the requisite knowledge, is actively participating in the predictive coding discussion, and is on the cutting edge of this trend, vetting and finding the best solutions for you and your case(s).

Friday, July 27, 2012

eDiscovery 2012 – Where We Have Been and Where We are Going – A Look Back At the First Half of the Year and Predictions For the Last Half of the Year


2012 has been an active and interesting year on the eDiscovery front thus far.  What follows are a few trends from the first part of the year and some predictions for the remainder of 2012 and beyond.

Where the eDiscovery Industry Has Been Over the First Part of 2012

1.  Predictive Coding – It is all the rage and this year’s hot topic in eDiscovery.  Will it revolutionize the industry, and document review in particular?  Possibly.  Is it going away anytime soon?  Nope.  As an eDiscovery practitioner, do you need to know about it?  You bet you do.  The first part of 2012 has witnessed all of the major platform providers rushing to integrate this technology into their products, and some will ultimately have better products and be more successful than others.  Remember, not all so-called predictive coding tools and technologies are created equal.  So, while the trend is to offer predictive coding, time and customer satisfaction will sort out who offers the best product for the right price.  Regardless of which company or companies win this battle, quality predictive coding products are starting to be, and in the future certainly will be, major players in the field for years to come.

2. Da Silva Moore – Any discussion of predictive coding in 2012 would not be complete without a mention of the Da Silva Moore case.  The most highly discussed and scrutinized eDiscovery case in years, Da Silva Moore once focused on judicial approval of predictive coding but quickly devolved into a motion battle focused on Judge Peck’s actions rather than the merits and proper use of predictive coding.  Nevertheless, the case has brought tons of publicity to predictive coding, and may yet have a larger impact on the technology, as the case, and all the acrimony, churn slowly on without a definitive resolution of the predictive coding issues.

3.  Spoliation and Proportionality – These topics have played second fiddle to predictive coding this year, but case law indicates that courts are considering these principles more and more and are holding litigants to tighter standards.  No longer can clients or their attorneys get away with claims of being unaware or ignorant when it comes to spoliation.  Likewise, litigants are becoming bolder in challenging requests for large amounts of data, and judges are agreeing to limit requests with greater frequency.  Furthermore, a proportionality argument lies at the heart of predictive coding’s value and reason for use; given the ever-expanding amount of data in the world, it is no longer proportional to review every document without the aid of technology and technology-assisted review, such as predictive coding.

4.  Consolidation – The software products used in the eDiscovery field and the companies that create them are in an arms race to see who can add the most functionality to their product across the EDRM spectrum.  This creates one-stop-shop products, but it may also drive niche, single-function products out of the market and raise prices.  Additionally, although these products may do it all, they may not do it all well.  Similarly, law firms are challenging eDiscovery vendors by creating their own eDiscovery practice groups and bringing the latest technology in house in an effort to bring those billable hours back into the firm, but at what cost to clients?

5. Model Orders, State Rules, and Pilot Programs, Oh My – Since late 2011, there has been a plethora of eDiscovery-related standards, rules, model orders, and programs unveiled by different entities around the country, including the U.S. Court of Appeals for the Federal Circuit, the U.S. District Court for the Southern District of New York, the U.S. District Court for the Eastern District of Texas, the U.S. District Court for the District of Delaware, the State of Pennsylvania, and the State of Florida.  Additionally, the Seventh Circuit recently concluded phase two of its Pilot Program on eDiscovery.  These various efforts are driven by a desire to standardize procedures and practices in order to contain eDiscovery costs and avoid unnecessary delays and disagreements.  Some will have greater longevity than others, but they are all evidence of a growing judicial and administrative recognition of the impact eDiscovery is having on our legal system and of the need to do something to improve the situation.  Likewise, the diversity of solutions offered is evidence of eDiscovery’s complexity and the lack of consensus regarding how to approach and manage it.

Where the eDiscovery Industry Will Go Over the Next Six Months and Beyond

1.  Da Silva Moore – The Da Silva Moore case will continue to dominate the eDiscovery headlines, both as theater and eventually as precedent (even if unofficial).  This is by far the highest-profile predictive coding case in existence, and everyone in the eDiscovery industry is waiting to see how it turns out.  Given its high profile, there will undoubtedly be much analysis and commentary on the outcome of the predictive coding battle and the case itself.  Hopefully, the scrutiny will shed some light on the cost, accuracy, and efficiency of predictive coding in a real case using real data.  If that does in fact occur, it will be the lasting legacy of Da Silva Moore on the eDiscovery world, one much nobler and of higher value than the soap opera the case is currently perceived as.

2.  The Cream Will Rise to the Top – Certifications, conferences, and eDiscovery education providers will continue to vie for prestige, patronage, and, above all, your long-term support.  Over the past few years, we have seen numerous eDiscovery organizations and conferences spring up, including, among others, ACEDS and its annual conference, the Carmel Valley eDiscovery Retreat, and the Electronic Discovery Institute’s EDI Leadership Summit.  At times, these events have competed directly with each other and with the organizations and conferences that already exist.  At the same time, longstanding original players like Sedona and EDRM are reexamining their purpose and goals and deciding what they should focus their energy on in the future to remain relevant and influential.  The eDiscovery conference market has reached a point of saturation, with people in the industry only willing to attend so many events a year and recognizing that there are only so many relevant panel topics.  From a participant’s perspective, why would you spend thousands of dollars to attend a conference that has four to five panels on the same topic (a topic that, by the way, is also discussed at every other industry conference)?  From a vendor’s perspective, why would you spend thousands of dollars for an exhibit at a conference that is primarily attended by other vendors?  These competing organizations and conferences must find ways to differentiate themselves and provide a unique value proposition, or the market may force them out.

3.  Smart Phones, Tablets, and Social Media are Game Changers – More and more, I am hearing that e-mail will soon be replaced as a communication medium by methods such as texting and tweeting, among others.  While I am not ready to declare e-mail dead (or even dying), there is no doubt that data created on non-traditional devices and in non-traditional sources (such as smart phones, tablets, and social media sites) will continue to proliferate, both in volume and in potential collection sources.  New niche industries and players (X1 is an example) will develop to preserve and collect this data in an accurate and useable format, and practitioners will need to adapt and fit this data and this new technology into their processes and workflows.  Individual social media sites and companies may disappear, as may technology brands and models, but the mobile social media lifestyle itself, and the challenges it poses for eDiscovery, will not disappear.  The eDiscovery industry needs to catch up as quickly as it can.

What exactly the next big thing or big case in eDiscovery will be is difficult to predict, but regardless, the eDiscovery industry has been, is, and will continue to be an interesting, evolving, fast-paced industry, and one to keep an eye on.

Tuesday, June 12, 2012

Gartner Releases 2012 “Magic Quadrant for E-Discovery Software”


Gartner recently released its now-annual report, “Magic Quadrant for E-Discovery Software.”  The report analyzes the biggest names in the eDiscovery software field and categorizes them into one of four groups: Leaders, Challengers, Visionaries, or Niche Players.  The report focuses heavily on consolidation within the industry as well as the EDRM lifecycle, placing a high value on companies and software that service the entire EDRM lifecycle.

The writers designated six companies as leaders:

- AccessData
- Autonomy
- Guidance Software
- Recommind
- Symantec (includes Clearwell)
- ZyLAB

To be a leader, a company had to offer functionality that covers the complete EDRM lifecycle.  Additionally, offering predictive coding technology was an important positive factor in the analysis.

Some changes from the 2011 report include the exclusion of Epiq and IPRO, because they no longer met at least one criterion for inclusion in the Magic Quadrant; the inclusion of KPMG and UBIC in the Magic Quadrant; and the change in status for FTI and kCura from leaders to challengers.

kCura and FTI were no longer considered leaders because both focus on the right-hand side of the EDRM only, rather than on the complete model.  This emphasizes how much weight the Gartner writers placed on servicing the entire EDRM lifecycle.  To be clear, the report noted that kCura’s Relativity product is still a best-in-class product.  It also spoke very highly of FTI, noting “[t]he company performs well all over the world, whereas others in its class do not necessarily have the presence or ‘bench strength’ to cover the globe, which is what many corporations need.”  Nevertheless, it likewise noted that many vendors are responding to the market with “broader end-to-end” functionality.

I agree with the report that the industry is moving toward greater consolidation and products that do it all, and I have written about that movement on this blog (http://ediscoverynewssource.blogspot.com/2012/04/consolidation-of-services-and.html).  However, I believe that Gartner placed too much emphasis on this factor by making it a requirement for being a leader in the Magic Quadrant.  Certainly, one-stop products and companies that do it all offer convenience, and perhaps cost savings, and can absolutely be the best choice for you and your company.  Likewise, I continue to think that more and more products will move in that direction.  However, at this point in time, choosing a product that does it all means sacrificing quality and functionality for convenience; products and companies that service the entire EDRM lifecycle may be competent in each area, but they are not going to be the best in each area.  Depending on your situation, choosing multiple products that are each the best available for their task may be a better option.  Ask yourself: do you want one product that does everything, but only one of those things really well, or do you want three or four products that are each the best at what they do?  There is no one answer, but it is something to consider, and it will remain a choice you have to make until there is one product that is the best at everything, which could take a while.

Although the Gartner report is subjective and by no means does it analyze every product or company in the industry, overall, the creators did a good job and the report provides some interesting information and analysis.  The report concludes that the eDiscovery software industry will remain relevant while becoming more competitive, and that consolidation and the proliferation of one-stop shops and products will continue.  This prediction is spot on.

Sunday, May 20, 2012

Contract Attorneys – The Latest Addition to the Endangered Species List

Last week I read an article on law.com titled “Does Predictive Coding Spell Doom for Entry-Level Associates?”  The article was prompted in part by the attention predictive coding is currently receiving as the eDiscovery topic du jour and by the starring role it has played in the increasingly soap-opera-like Da Silva Moore case.  The article concluded that entry-level associates are still necessary and vital assets, even with the rise of predictive coding.

I agree with the article’s conclusion, and am happy for the associates, but what about their less well-placed colleagues, contract attorneys?  The threat to survival that contract attorneys face comes not just from predictive coding but from law schools that spill new graduates like a broken faucet, as well as from employers that take advantage of the situation by offering unscrupulously low wages, knowing that for every position they have, there are several applicants willing to fill it at almost any rate.  So, is there still a place for contract attorneys?  Will predictive coding and the deluge of law school graduates wipe out their positions, or depress their value to the point where the attorneys would make more money working at McDonald’s?  I hope the answer is no, and the answer should be no if the legal community takes a moment to realize that contract attorneys need to be treated like the non-fungible assets they can be, rather than as pariahs undeserving of earning even $20 an hour.
Despite their persona non grata reputation, a quality contract attorney is worth their weight in gold, and the legal industry should do everything it can to ensure they do not go the way of the dodo, whether because of technology, wages, or anything else.  Contract attorneys’ hands-on expertise and knowledge of review platforms and software can add great efficiency and effectiveness to a project.  Their in-depth familiarity with the documents and details of a case can be illuminating, and their understanding of the eDiscovery process can be a difference maker.  The truly good contract attorneys are knowledgeable experts that can be leveraged to your advantage and provide valuable input and consultation to your case and how you prepare for it.  More than hired mercenaries whose goal it is to plow through data as quickly as possible, contract attorneys can be your eyes and ears in the data.
At the end of the day, you get what you pay for, and nowhere is that more true than with contract attorneys.  You may be able to fill positions offering wages as low as $15 an hour, but that will not get you much more than a warm body.  At such a low rate of pay, a contract attorney will have every incentive to look anywhere and everywhere for a different job.  They will lack quality, consistency, motivation, and loyalty, resulting in a poor-quality review, even if a cheap one.
Alternatively, as with most positions in life, the more faith and responsibility you show contract attorneys (along with paying them a decent wage for someone with a law degree), the more you will obtain from them and the more value they will add to your case.  I urge you to look beyond the mere numerical efficiencies technology such as predictive coding can provide, to look beyond the hourly rate you are paying, and to focus instead on the intangible value added to your overall case.  That is where you find the true value and worth of your contract attorneys, and where you will find that, if utilized properly, the good ones are invaluable and indispensable.  Do not get me wrong, I am not suggesting that you should forgo the use of technology or that you should be offering your contract attorneys partner-level compensation.  I am simply saying that technology should be used to supplement and enhance your contract attorneys’ value and capabilities, not to replace them.
Despite advances in technology, the human element of eDiscovery remains more vital and important than ever.  A key component of this human element is the contract attorney.  Even with the advance of predictive coding and like technologies, skilled contract attorneys should continue to be valuable commodities undeserving of a place on any endangered list.

Friday, May 11, 2012

Native Redactions – An Emerging Trend

It is a commonly accepted practice within the eDiscovery industry to image documents for production.  Likewise, it is now a commonly accepted, and indeed even preferred, practice to exempt spreadsheets (and some other file types) from that requirement and instead produce those documents natively.  The idea is that parties would rather obtain native spreadsheets, allowing them to work with and view the content in a meaningful manner, than receive spreadsheet images that can be useless, cumbersome, or exceedingly difficult to use and understand accurately.  There is a nascent trend of not only producing spreadsheets in native format, but redacting them in native format as well (the concept has existed for years but has become an increasing point of emphasis of late).

The inherent nature of a spreadsheet means that it often contains complex data located in multiple rows, columns, and tabs. The data often includes or involves the use of formulas, sorting, or filtering amongst other features.  Macros, pivot tables, and hidden content add to the complexity.  If printed, the data often falls across multiple pages in a less than complete and less than orderly manner resulting in a confusing mess that is difficult to cobble together, let alone read and use.  The fact of the matter is that images simply are unable to capture the complexities many spreadsheets contain, so if the document and its content are to be useful and meaningful, you must produce them natively.  Most litigants now recognize this and are comfortable with, and often require, the native production of spreadsheets.  Yet, traditionally they have been less than enthusiastic about redacting spreadsheets in native format. 
Given that it is an accepted practice to produce spreadsheets natively, because that is how they will be most useful, why should redactions change that?  The answer is that they should not, and more and more practitioners are beginning to realize this.  Redacting changes the data in a spreadsheet, but it does not change the nature of the spreadsheet, its functionality, or how one uses it.  If a spreadsheet needs to be produced natively to be useful in its non-redacted original state, then logically it should be produced natively to be useful in a redacted state.
Anecdotally speaking, as time goes on, I am seeing much more acceptance and understanding of the native redaction practice across the industry.  My colleagues are telling me that they are seeing the same thing.   I am confident that it is only a matter of time before redacting spreadsheets in native format is the norm and an accepted standard and practice by courts and litigants alike; native redactions simply make the most sense for spreadsheets.
One of the hang-ups for those who are unfamiliar with native redactions lies in the subconscious or gut feeling associated with making redactions to a native document.  Redacting (i.e., deleting) content from native-format documents that you are producing somehow feels inherently wrong, as if there is a difference between covering up the data in an image redaction and deleting it in a native redaction.  In reality, and despite this feeling, if done properly there is no meaningful difference between image and native redactions, or between covering up and deleting.  With each method, you are hiding data in an attempt to ensure the opposing party does not see it.  Whether the data is hidden beneath a box or blacked-out area on an image, or deleted from a native document, the goal and the result (hopefully) are the same: the data is not visible or searchable.  As long as you redact properly, and are open and honest with the opposing party about what type of redactions you are making, why, and how, there should be very little issue with redacting spreadsheets natively rather than via image.
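For those trying to picture what a native redaction mechanically involves, here is a minimal, purely illustrative sketch, assuming a Python workflow with the openpyxl library and a hypothetical list of privileged terms supplied by counsel; real redaction tools and protocols vary, and formulas, metadata, and hidden content all require additional handling:

```python
# Minimal illustrative sketch of a native spreadsheet redaction (not any
# particular vendor's tool). Assumes Python with the openpyxl library and a
# hypothetical list of privileged terms supplied by counsel.
from openpyxl import load_workbook

PRIVILEGED_TERMS = ["project falcon", "outside counsel memo"]  # hypothetical examples

def redact_workbook(source_path: str, redacted_path: str) -> int:
    """Replace cell contents containing privileged terms and save a new copy."""
    wb = load_workbook(source_path)
    redaction_count = 0
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                value = cell.value
                if isinstance(value, str) and any(
                    term in value.lower() for term in PRIVILEGED_TERMS
                ):
                    # The underlying text is removed, not merely covered;
                    # note that clearing a cell other formulas reference
                    # can break dependencies (see the risks discussed below).
                    cell.value = "REDACTED"
                    redaction_count += 1
    wb.save(redacted_path)  # never overwrite the original collected file
    return redaction_count

if __name__ == "__main__":
    count = redact_workbook("collected_copy.xlsx", "PROD0001_redacted.xlsx")
    print(f"Redacted {count} cells")
```

The key point of the sketch is that the privileged text is actually deleted from the produced copy rather than hidden under a box, which is exactly the equivalence between covering up and deleting discussed above.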
Of course there are risks with native redactions, and native productions in general, including the loss of metadata, loss of formulas, changing dependencies (e.g. cell values based on formulas or the values in other cells) and the risk of manipulation by the opposing party to name a few.  However, there are methods and mechanisms for addressing these risks, and you can, and should, discuss them with your eDiscovery experts and the opposing party, before taking action.
However, from a strictly results-oriented perspective, if done properly there is no reason why the native redaction of spreadsheets should not be acceptable.  This argument carries even more weight if the parties are already producing non-redacted spreadsheets natively; in that instance the parties identified value in producing non-redacted spreadsheets natively, and that same value exists for redacted spreadsheets.  Driven by this logic and the comfort that will come as litigants gain familiarity with native redactions, more and more parties will turn to native redactions for documents like spreadsheets.  In the not-so-distant future, natively redacting spreadsheets will be a commonly accepted practice and standard in the eDiscovery industry.

Wednesday, April 25, 2012

Da Silva Moore, Global Aerospace, and Kleen Products – Hyped Triumvirate, But Dispositive Opinion Is Yet To Come

Three recent cases have taken the spotlight in the eDiscovery world, lauded as groundbreaking for their approval of predictive coding.  This blog is no exception, having contributed to the commotion, particularly that surrounding Monique da Silva Moore, et al. v. Publicis Group SA, et al.

In Da Silva Moore, the parties initially agreed to use predictive coding (although they never agreed to all of the details) and Magistrate Judge Peck allowed its use.  Plaintiffs have since attacked Judge Peck and most recently formally sought his recusal from the matter.  That request is currently pending.
Global Aerospace Inc., et al. v. Landow Aviation, L.P. dba Dulles, is the most recent case to address predictive coding, and it goes a step further than Da Silva Moore.  In Global Aerospace, the defendants wanted to use predictive coding themselves, but the plaintiffs objected.  Virginia Circuit Court Judge James H. Chamblin ordered that the defendants could use predictive coding to review documents.  As in Da Silva Moore, the court did not impose the use of predictive coding; rather, it allowed a party to use it upon request.
Kleen Prods., LLC v. Packaging Corp. of Am. goes the furthest, and is perhaps the most interesting of the three predictive coding cases, because it differs from Da Silva Moore and Global Aerospace in one very important way: the plaintiffs in Kleen are asking the court to force the defendants to use predictive coding when the defendants review their own material.  The court has yet to rule on the issue.
These three cases are in the spotlight because the use of predictive coding is seemingly at issue, and yet, in some ways, predictive coding is only marginally at issue.  Yes, in one sense the courts are ruling on the technology itself and whether it is viable; if a court allows it to be used, that is implicit recognition that the technology works, at least enough to try it out and see how it goes.  However, these cases are really about who gets to choose the technology and method utilized.  These cases and disputes could exist with fact patterns where the parties are arguing over keyword searches or linear review, and the analysis would be much the same as it is now with predictive coding.  Can the parties pick and agree to a review method and technology?  In Da Silva Moore, Judge Peck said yes.  Can one party pick how it performs its own review?  The Virginia court in Global Aerospace said yes.  Can one party force another party to use certain technologies and methods to perform its review?  The Kleen court has yet to rule on the issue.
These questions are not new or novel, and so far, neither are the answers.  Yes, the courts have allowed the parties to use predictive coding, but as with other technologies, the courts have taken a wait-and-see approach.  If the predictive coding technologies and/or processes used are unsuccessful in meeting obligations and needs, the courts appear more than willing to make adjustments, and perhaps embrace different technologies at that time; they are willing to give predictive coding a shot, but they are not betting the house on it either.
It is understandable why proponents of predictive coding are happy and view these cases as victories.  After all, these are the first opinions approving the technology’s use, even if in a somewhat implicit manner.  However, the industry and the legal community must wait before drawing final conclusions.  Only after a party has successfully used predictive coding in a case and survived a challenge to the results and end product (not just a challenge to its use), and that outcome is captured in a written opinion or order, will a true victory be won by predictive coding proponents.  Until then, predictive coding is still the equivalent of a highly rated draft pick: there is a lot of potential, and most people, myself included, think it will succeed, but it still needs to prove itself in the trenches.  The predictive coding industry is bullish about its potential for success, and it may only be a matter of time until it is proved right, but only time will tell.

Monday, April 23, 2012

Plan on Planning – Help Your eDiscovery Personnel Help You

I had lunch with an eDiscovery colleague last week, and he related a recent case he had worked on. A few weeks ago, his client informed him that they had agreed with the opposing party to make a production in four days. The client did not have a production population determined, and they had no idea how long it would take to create and run a production before they agreed to the deadline with opposing counsel; they picked a date in no way related to the reality of their data set. None of this affected their expectations for the viability of the project, of course. The result? A rush project, extra people working extra hours to get the job done, tension, and having to renegotiate a new deadline with the opposing party because the original date was simply unrealistic given the amount of data eventually involved. Ideal? No. Fun? No. Avoidable? Yes.

The above story exemplifies (although perhaps somewhat to the extreme) the experience eDiscovery personnel (whether in-house, outside counsel, or vendor) have with far too many clients in far too many cases. eDiscovery personnel are often left out of the decision-making process and have to scramble to meet artificially created deadlines that have little or no bearing on the work. We all have deadlines beyond our control, so eDiscovery personnel are no different from most in that regard. What can be exasperating, however, is that in the case of eDiscovery, the deadlines need not be so tight or so far outside our control, or at least our knowledge.

To avoid such rush projects, unattainable deadlines, and wasted time and money, counsel should plan ahead for eDiscovery and include their eDiscovery personnel in that process, as well as in the negotiation of deadlines, to the extent possible (even if just as a point of reference and knowledge). Some easy things you can do to help your eDiscovery personnel better meet your needs include:

• Create an eDiscovery Plan ASAP – Ideally, create this before the case begins or soon thereafter. Be sure to include your eDiscovery personnel in this planning so that they can assist with properly setting eDiscovery-related deadlines and expectations.

• Leverage Your eDiscovery Personnel’s Expertise – A classic example is engaging them for search term analysis before agreeing to terms with the opposing party and before you make any productions. Provide the terms to your eDiscovery personnel for testing and sampling, leveraging their ability to write searches and manipulate review platforms. Through such exercises, they can sample documents to test for precision and recall (a simple sketch of that arithmetic appears after this list), with the ultimate goal of creating a data set that is defensible and proportionate to the value of the case.

• Do Not Agree to eDiscovery Deadlines Before You Know What the Job Will Entail and Without Input from Your eDiscovery Personnel About How and If It Can Be Done – I-Med Pharma, Inc. v. Biomatrix, Civ. No. 03-3677 (DRD) (D.N.J. 2011) is a great example of why you need to know what a task entails before agreeing to it. The plaintiffs in the matter agreed to search terms without testing them and without the advice of their eDiscovery personnel. The terms generated over 64 million hits and 95 million pages, unreal (and expensive) numbers.

• Build in Extra Time and Do Not Wait Until the Last Minute – The only thing worse than trying to complete a complex and important project precisely and accurately is doing so with little notice and no time for mistakes. By engaging your eDiscovery personnel early in a matter, you not only put them on notice, but you also help them help you obtain the knowledge you need to negotiate reasonable deadlines and tasks with plenty of time.
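As noted above, here is a minimal, purely illustrative sketch of the precision and recall arithmetic that search term testing and sampling rely on, using made-up sample data; real validation protocols are more involved and should be designed with your eDiscovery personnel:

```python
# Illustrative sketch of the sampling math behind search term testing
# (precision and recall), using made-up numbers; not a substitute for a
# real, statistically designed validation protocol.

# Hypothetical random sample reviewed by attorneys:
# each entry is (hit_by_search_terms, coded_relevant)
sample = [
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (True, False), (False, False), (True, True),
]

hit_outcomes = [relevant for hit, relevant in sample if hit]
relevant_outcomes = [hit for hit, relevant in sample if relevant]

precision = sum(hit_outcomes) / len(hit_outcomes)            # share of term hits that are actually relevant
recall = sum(relevant_outcomes) / len(relevant_outcomes)     # share of relevant docs the terms captured

print(f"Estimated precision: {precision:.0%}")
print(f"Estimated recall:    {recall:.0%}")
```

With these made-up numbers, the terms look precise enough but miss a quarter of the relevant documents in the sample, which is exactly the kind of finding you want before, not after, you commit to search terms with the opposing party.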

You may be asking: why should I do all this? After all, are not my eDiscovery personnel paid to work for me? The answer is that, aside from making your eDiscovery personnel happier and more motivated, it will also improve your case; you will have more time to do a better job and implement quality control measures, the court and the opposing party will appreciate that you can deliver on what you promise, and by planning ahead, you can create cost-saving efficiencies and avoid increased fees for rushed projects.