Archive for the ‘eDiscovery’ Category

Stop comparing Information Governance to Records Management – Take 2!

A debate is a blogger’s ultimate reward

Judging by the sheer number of retweets, favorites and comments it received, I seem to have hit a raw nerve with my last post on the relationship between Information Governance (IG) and Records Management (RM). Feedback is a great source of knowledge for me, and debate is always good for our industry.

Laurence Hart (@piewords to his friends) was kind enough to comment specifically on my article in his blog. I have a lot of respect for Laurence’s opinion and always enjoy reading his views, even if we don’t always agree. As it turns out, in this instance, we agree more than we disagree.

There are a few things in my original article that I’d like to clarify, though, just to avoid ambiguity, and in the process address some of the points that Laurence makes:

 

“IG is a discipline, not a tool”, I wrote…

A few people took exception to that. Nobody disputed the fact, but they assumed that I somehow implied that RM is not a discipline, only a tool: something I never said! I take it for granted that everyone, at least everyone reading these discussions, knows that RM is a discipline too. The point I wanted (and obviously failed) to make was very different: the term Information Governance has been hijacked by a large number of vendors (ECM, eDiscovery, Storage, Security, Big Data, etc.) to peddle their wares. I have seen an inordinate number of marketing atrocities perpetrated in the name of Information Governance. My point is that tools alone will not sort out the IG problem; it requires a different way of thinking. With hindsight, I can see why people misread what I wrote.

 

Divorce Information Governance from the discussion of how it is going to be done

This seems to be Laurence’s main contention with my views. Interestingly, I don’t think I said that anywhere in my article either, but it must have been implied somehow. Laurence is right: the WHY and the HOW of IG cannot be divorced, of course, otherwise IG will always remain an academic exercise. The point I was making is that IG needs to have a coherent, consistent and complete overview of the principles behind all information management within the organisation. It is the decision-making hub. Underneath that hub sit a number of spoke mechanisms that manage different aspects: RM is just one of them; eDiscovery, Classification, Legal Holds, Privacy & Security, Archiving, Application Decommissioning, Storage Tiering, Location Management, etc., are others. These should all be driven from a single, unified, coherent and authoritative decision-making framework, which is what I see as the role of IG.

 

Of governments and armies…

Laurence, inadvertently perhaps, came up with a much better analogy for the distance between IG and RM. I created a metaphor likening them to Government and school governors, but Laurence compared them to Government and the Military. A much better analogy! The Military has a very specific and defined jurisdiction for enforcing Government policy and law. It has ultimate planning and execution responsibility for military personnel, but it cannot enforce law on civilians (at least not in most democracies, anyhow…). The Government has responsibility for every law in the country, regardless of whether it applies to civilians or the military. Just as IG has responsibility for all decision making in Information Management, RM has responsibility for enforcing some of the functions on some of the overall information estate.

 

“That which we call a rose, by any other name would smell as sweet”

I am not interested in the semantics of where IG definitions overlap with IM or RM, or the delineation between the policies (WHY), the practices (HOW) or the tools (WHAT WITH). My point is that IG and RM are two different, if overlapping, disciplines, and that the functions I defined in my 8 points in the earlier article must be addressed by a coherent information governance framework, which, historically, has not been an area where traditional RM excels. If you prefer to call that evolving business function “Holistic RM”, “RM Continuum” or “Super RM” or whatever else, I’m not worried about the nomenclature, as long as we agree that it needs doing, and that it needs doing properly.

 

How the other half live…

There was something very paradoxical about the RM community’s comments on my original article: inevitably, the experienced, established and professional Records Managers object to my simplified definition of RM. They know how much bigger the problem is, and most of them have extended their reach and responsibilities to address some of the IG issues within their organisations. Kudos and respect to them. But they see the RM world through rose-tinted glasses, because it is the world they have created and influenced.

I, however, am not a Records Manager. I have seen and talked to a lot of organizations where the RM program either does not exist, or is extremely narrow, or very badly implemented, or lives in a folder on a shelf, or a PDF file on the intranet, or manages a spreadsheet fileplan mapped to a folder structure on a shared drive. Most of these organizations have an even bigger IG problem: no information disposition program, no unified classification, no automation of anything, no association between security policy and security reality, no mechanisms to address Data Protection, an unmanaged email archive that grows exponentially, scores of network drives with debris and “just in case” copies of data, and many, many other issues. These organizations do not have the luxury of a well-established Holistic RM program, or the time to implement one. They have a very real IG itch that needs scratching… and a lot of vendors are quick to exploit that.

 

In my view, RM will always be a subset of IG. If you understand the bigger scope of IG and are already addressing it under an RM moniker, or any other name, then pat yourself on the back. But if you are a CIO looking at IG issues, do not assume that it is RM’s problem to sort out. And if you are a records manager, don’t assume for a minute that your RM world will not go through a radical transformation if you take on the IG requirements on top of RM.


A clouded view of Records and Auto-Classification

When you see Laurence Hart (@piewords), Christian Walker (@chris_p_walker) and Cheryl McKinnon (@CherylMcKinnon) involved in a debate on Records Management, you know it’s time to pay attention! 🙂

This morning, I was reading Laurence’s blog post titled “Does Records Management Give Content Management a Bad Name?”, which picks up on one of the points in Cheryl’s article “It’s a Digital-First World: Five Trends Reshaping Records Management As You Know It”, with some very insightful comments added by Christian. I started leaving a comment under Laurence’s blog (which I will still do, pointing back to this) but there were too many points I wanted to add to the debate and it was becoming too long…

So, here is my take:

First of all, I want to move away from the myth that RM is a single requirement. Organisations look to RM tools as the digital equivalent of a Swiss Army knife, addressing multiple requirements:

  • Classification – Often, the RM repository is the only definitive Information Management taxonomy managed by the organisation. Ironically, it mostly reflects the taxonomy needed by retention management, not by the operational side of the business. Trying to design a taxonomy that serves both masters leads to the huge granularity issues that Laurence refers to.
  • Declaration – A conscious decision to determine what is a business record and what is not. This is where both workflow integration and auto-classification have a role to play, and where, in an ideal world, we should try to remove the onus of that decision from the hands of the end-user. More on that point later…
  • Retention management – This is the information governance side of the house: the need to preserve records for the duration that they must legally be retained, move them to the most cost-effective storage medium based on their business value, and actively dispose of them when there is no regulatory or legal reason to retain them any longer.
  • Security & auditability – RM systems are expected to be a “safe pair of hands”. In the old world of paper records management, once you entrusted your important and valuable documents to the records department, you knew that they were safe. They would be preserved and looked after until you asked for them. Digital RM is no different: it needs to provide a safe haven for important information, guaranteeing its integrity, security, authenticity and availability, supported by a full audit trail that can withstand legal scrutiny.

Auto-categorisation, or auto-classification, relates to both the first and the second of these requirements: Classification (using linguistic, lexical and semantic analysis to identify what type of document it is, and where it should fit into the taxonomy) and Declaration (deciding if this is a business document worthy of declaration as a record). Auto-classification is not new; it’s been available both as a standalone product and integrated within email and records capture systems for several years. But its adoption has been slow, not for technological reasons, but because, culturally, both compliance and legal departments are reluctant to accept that a machine can be good enough to be allowed to make this type of decision. And even though numerous studies have shown that machine-based classification can be far more accurate and consistent than a room full of paralegals reading each document, it will take a while before the cultural barriers are lifted. Ironically, much of the recent resurgence and acceptance of auto-classification is coming from the legal field itself, where the “assisted review” or “predictive coding” (just a form of auto-classification to you and me) wars between eDiscovery vendors have brought the technology to the fore, with judges finally endorsing its credibility [Magistrate Judge Peck in Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182 (S.D.N.Y. 2012), approving the use of predictive coding in a case involving over 3 million e-mails].
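For the technically curious, here is a minimal sketch of what statistical auto-classification looks like under the hood, using scikit-learn. The taxonomy labels and training snippets are invented purely for illustration; commercial products use far richer linguistic and lexical analysis than this:

```python
# Minimal auto-classification sketch (assumes scikit-learn is installed).
# Labels and training texts are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set: document text -> taxonomy node
training_docs = [
    ("Please find attached our quotation for the licence renewal", "quotation"),
    ("This agreement is made between the parties hereto", "contract"),
    ("Invoice number 1042, payment due within 30 days", "invoice"),
    ("Minutes of the quarterly board meeting", "minutes"),
]
texts, labels = zip(*training_docs)

# TF-IDF features feeding a Naive Bayes classifier
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(texts, labels)

# Classify an incoming document into the taxonomy
new_doc = "Attached is the revised quotation for your order"
print(classifier.predict([new_doc])[0])  # expected: 'quotation'
```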

The point that Christian Walker makes in his comments, however, is very important: auto-classification can help, but it is not the only, or even the primary, mechanism available for auto-declaration. They are not the same thing. Taking the records declaration process away from the end-user requires more than understanding the type of document and its place in a hierarchical taxonomy. It needs the business context around the document, and that comes from the process. A simple example to illustrate this would be a document with a pricing quotation. Auto-classification can identify what it is, but not whether it has been sent to a client or formed part of a contract negotiation. It’s that latter contextual fact that makes it a business record. Auto-declaration from within a line-of-business application or a process management system is easy: you already know what the document is (whether it has been received externally or created as part of the process), you know who it relates to (client id, case, process) and you know what stage in its lifecycle it is at (draft, approved, negotiated, signed, etc.). These give enough definitive context to accurately identify and declare a record, without the need to involve the users or resort to auto-classification or any other heuristic decision. That’s assuming, of course, that there is an integration between the LoB/process system and the RM system, to allow that declaration to take place automatically.
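To illustrate that reasoning, here is a hypothetical sketch of auto-declaration driven purely by process context. The field names, lifecycle states and the hand-off to the RM system are all invented for illustration, not taken from any particular product:

```python
# Hypothetical sketch: declaring a record from line-of-business context alone.
from dataclasses import dataclass

# Lifecycle stages at which a quotation becomes a business record (illustrative)
RECORD_STATES = {"sent_to_client", "negotiated", "signed"}

@dataclass
class ProcessDocument:
    doc_type: str   # known from the process itself - no content analysis needed
    client_id: str  # business context: who the document relates to
    state: str      # lifecycle stage: draft, sent_to_client, signed, ...

def should_declare(doc: ProcessDocument) -> bool:
    """A draft quotation is not a record; one sent to a client is."""
    return doc.doc_type == "quotation" and doc.state in RECORD_STATES

doc = ProcessDocument(doc_type="quotation", client_id="C-1042", state="sent_to_client")
if should_declare(doc):
    # In a real integration this call would go to the RM system's API
    print(f"Declaring record for client {doc.client_id}")
```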

The next point I want to pick up is the issue of Cloud. I think cloud is a red herring in this conversation. Cloud should be an architecture/infrastructure and procurement/licensing decision, not a functional one. Most large ECM/RM vendors can offer similar functionality hosted on- and off-premises, and offer SaaS payment terms rather than perpetual licensing. The cloud conversation around RM, however, gets into its own sticky mess when you start looking at guaranteeing location-specific storage (a critical issue for a lot of European data protection and privacy regulation) and at the integration between on-premises and off-premises systems (as in the auto-declaration examples above). I don’t believe that auto-classification is a significant factor in the cloud decision-making process.

Finally, I wanted to bring another element to this discussion. There is another disruptive RM trend that is not explicit in Cheryl’s article (but it fits under point #1) and it addresses the third RM requirement above: “in-place” retention management. If you extract the retention schedule management from the RM tool and architect it at a higher logical level, then retention and disposition can be orchestrated across multiple RM repositories, applications, collaboration environments and even file systems, without the need to relocate the content into a dedicated, traditional RM environment. It’s early days (and probably a step too far, culturally, for most RM practitioners) but the huge volumes of currently unmanaged information are becoming a key driver for this approach. We had some interesting discussions at the IRMS conference this year (triggered partly by IBM’s recent acquisition of StoredIQ into its Information Lifecycle Governance portfolio) and James Lappin (@JamesLappin) covered the concept in his recent blog here: The Mechanics on Manage-In-Place Records Management Tools. Well worth a read…
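To make the manage-in-place idea a little more concrete, here is a rough sketch of a retention policy orchestrated across several repositories without relocating any content. The connector interface and retention values are hypothetical; real products add auditing, legal hold checks and much more:

```python
# Rough sketch of manage-in-place retention: one policy engine, many repositories.
from datetime import date, timedelta

class Connector:
    """Adapter for one content source (ECM system, SharePoint, file share...)."""
    def list_items(self):        # yields (item_id, record_class, created) tuples
        raise NotImplementedError
    def dispose(self, item_id):  # delete in place, leaving an audit entry
        raise NotImplementedError

# Retention schedule, expressed once at the governance layer (illustrative values)
RETENTION_YEARS = {"invoice": 7, "quotation": 3, "minutes": 10}

def apply_policy(connectors, today=None):
    today = today or date.today()
    for source in connectors:
        for item_id, record_class, created in source.list_items():
            years = RETENTION_YEARS.get(record_class)
            if years and created + timedelta(days=365 * years) < today:
                source.dispose(item_id)  # the content never left its repository
```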

So, to summarise my points: RM is a composite requirement; auto-categorisation is useful and starting to gain legitimacy, but even though it can participate, it should not be confused with auto-declaration of records; “Cloud” is not a functional decision, it’s an architectural and commercial one.

Looking for Mr. Right – Revisited

I was reading a recent article by Chris Dale, where he gave an overview of Debra Logan‘s “Why Information Governance fails and how to make it succeed” keynote speech. It’s difficult to disagree with most points made in the session, but one point in particular caught my attention. Chris transcribes Debra’s thoughts as:

“…we are at the birth of a new profession, with hybrid players who have multiple strands of skills and experience. You need people with domain expertise, not just about apps and servers but data and information. The usual approach is to take people who already have jobs and give them something else to do on top or instead. You need to find people who understand the subject and teach them to attach metadata to their material, to understand document retention, perhaps even send them to law school to turn them into a legal/IT/subject matter expert hybrid.”

In parallel, I have also had several conversations recently relating to AIIM‘s new “Certified Information Professional” accreditation (which I am proud to possess, having passed their stringent exam). It is a valiant attempt to recognise individuals who have enough breadth of skills in Information Management to cover most of the requirements of Debra’s “new profession“.

These two – relatively unrelated – events prompted me to go and re-discover an article that I wrote for AIIM’s eDoc online magazine, published sometime around June 2005. Unfortunately the article is no longer online, so apologies for embedding it here, in its entirety:

Looking for Mr. Right

Why advances in ECM technology have generated a serious skills gap in the market.

ECM technologies have advanced significantly in the last ten years. The convergence of Document/Content Management, Workflow, Searching, web technologies, records management, email capture, imaging and intelligent forms processing, has created a new information management environment that is much more aware of the value of information assets.

Most analysts agree that we are entering a new phase in ECM, where medium and large size organizations are looking to invest in ECM as a strategic enterprise deployment in order to leverage their investment in multiple business areas – especially where improving operational efficiencies and compliance are the key drivers, as these tend to have a more horizontal appeal across the organization.

But as ECM technologies are starting to become pervasive, there is a lot of confusion about the operational management of these systems. Technically, the IT department is responsible for ensuring the systems are up and running as optimally as the technology permits. But whose responsibility is it to make sure that these systems are configured appropriately and that the information held within them is managed correctly as a valuable asset?

Think about your own company: Who decides how information is managed across your organization? With ECM, you are generating a virtual library of information that should be used and leveraged consistently across departments, geographical boundaries, organizational structures and individual responsibility areas. And if you include Business Process Management in the picture, you are also looking for common, accountable and integrated business practices across the same boundaries. Does this responsibility sit within the business community, the IT department or as a separate internal service function? And what skills would be required to support this?

There is a new role requirement emerging, which is not very well defined or understood at the moment. There is a need for an individual or a group, depending on the size of the organization, who can combine the following capabilities:

  • identify what information should be managed and how, based on its intrinsic value and legal status
  • implement mechanisms for filtering and purging redundant information
  • design and maintain information structures
  • define metadata and classification schemes and policies
  • design folder structures and record management file plans
  • define indexing topologies, thesauri and search strategies
  • implement policies and timelines for content lifecycle management
  • devise and implement record retention and disposition strategies
  • define security models, access controls and auditing requirements
  • devise schemes for the most efficient location of information across distributed architectures
  • devise content and media refresh strategies for long-term archiving
  • consolidate information management practices across multiple communication channels: e.g. email, web, fax, instant messaging, SMS, VoIP
  • consolidate taxonomies, indexing schemes and policies across organizational structures
  • etc.

And all of this, for different business environments and different vertical needs, with a good understanding of both business requirements and the capabilities offered by the technology – someone who can comfortably bridge the gap between the business requirements and the IT department.

People who can effectively combine the skills of librarian, administrator, business analyst, strategist and enterprise architect are extremely rare to find. If you can find one, hire them today!

The closest title one can use for this role today is “Information Architect”, although job descriptions with that title differ significantly. More importantly, people with this collective skill set are very difficult to find today, and even more difficult to train, since a lot of “best practices” in this area are not established or documented.

This is a wake-up call for universities, training agencies, consultants and people wanting to re-skill: while the ECM technology itself is being commoditised, more and more application areas are opening up which will require these specialist skills. Companies need more people with these capabilities and they need them today. Without them, successful ECM deployments will remain difficult and expensive to achieve.

The more pervasive ECM becomes as an infrastructure discipline, the bigger the skill gap will become, unless we start addressing this today.

Apart from feeling slightly proud that I highlighted in June 2005 something that Gartner is raising as an issue today, this doesn’t reassure me at all: 7 years have passed and Debra Logan is (and organisations are…) still looking for Mr. Right!

I am happy that Information Governance has finally come to the forefront as an issue, and that AIIM’s CIP certification is making some strides in helping the match-making process.

But I really hoped we would have come a bit further by now…

Content Obesity – Part 2: Treatment

(…continued from Content Obesity – Part 1: Diagnosis)

You can’t, and don’t want to, stop data growth.

The growth of digital volume has been instrumental in driving major operational and cultural change in today’s business. Better, more personalised customer interaction; insight from Big Data business analytics; social media and collaboration; effective training and multimedia marketing: all rely on the flow of much higher volumes of information through the organisation. Not taking advantage of this would make your organisation less competitive.

So, if reducing the volume of data being consumed is not an option, how else can you manage Content Obesity? There are two approaches to this:

Managing the symptoms

There are some key technologies that help alleviate some of the symptoms of content obesity. These, in our human analogy, are the equivalent of liposuction and nip-and-tuck.

  • De-duplication – Can identify and remove multiple copies of identical documents (a minimal sketch follows this list). It is only effective if you can apply it across all your document stores (ECM systems, records management, shared file drives, personal file drives, SharePoint, email servers, etc.). This rarely happens, and when it does, it is usually restricted to one or two of these sources and focuses only on files, not structured data.
  • Archiving and tiered storage – Being able to select the most appropriate storage type for archived data can have a positive impact on reducing storage costs. Not everything needs to be stored on expensive high-availability devices. A lot of the organisation’s data can sit on lower-cost equipment that can be restored from backups in hours, or days, rather than instantly. But how do you decide which information goes where? Most organisations will use expensive high-availability storage for core systems, regardless of the age or significance of the data stored by those systems, as there is no easy way to apply policies at a granular level. There is certainly no way to map those logical “shared” network drives, where the majority of documents are stored, to tiered storage.
  • Compression – There are storage systems that use very sophisticated algorithms to reduce the physical space required, by compressing the data when stored and de-compressing it when it needs to be used. These are also expensive, and require additional computing power to maintain reasonable speeds in the compression and de-compression process.
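For illustration, the de-duplication in the first bullet above usually boils down to content hashing: two files with the same digest are identical copies. A minimal sketch, with hypothetical paths, ignoring the near-duplicate and structured-data cases that commercial products also handle:

```python
# Minimal de-duplication sketch: identical files are found by hashing their content.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(roots):
    """Group files across multiple stores by the SHA-256 of their content."""
    by_hash = defaultdict(list)
    for root in roots:
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                by_hash[digest].append(path)
    # Any digest with more than one path is a set of identical copies
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}

# Hypothetical stores; a real tool would also cover email and ECM repositories
for digest, copies in find_duplicates(["/shared", "/archive"]).items():
    print(f"{len(copies)} identical copies: {copies}")
```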

All of these techniques offer some relief, but the relief is marginal if it’s not driven by a unified policy, and they do not address the fundamental issue: whilst they temporarily reduce the impact of storage cost, they do not curb the information growth rate.

They also do not address any of the compliance or legal risks associated with content obesity: the same logical volume of data needs to be preserved, analysed and delivered for litigation, and the same effort is required to manually manage the multiple retention policies and respond to regulatory challenges.

Treating the disease

In order to properly resolve content obesity, we need to consider the organisation’s metabolism: How quickly information is digested, which nutrients (value) can be extracted from content and how the organisation disposes of the waste.

The key question to ask is: “How much of this content do organisations actually need to keep?” Discussions with our customers indicate that, on average, 70% of all retained data is obsolete (the actual number will vary somewhat by organisation, but I’ll use the 70%/30% split for the purposes of this article). This represents information that is duplicated, outdated, irrelevant or of no business value, or content that can be readily obtained or reproduced from other sources.

The problem, however, is that nobody within the organisation knows which 70% of the data is obsolete, so nobody has the knowledge, or the authority, to allow that content to be deleted. The criteria for identifying which information that 70% represents are virtually impossible to determine systemically.

A more drastic, and more realistic, approach is required to provide a permanent solution to the problem.

The concept behind treating Content Obesity is simple: if, and only if, the organisation were able to identify the 30% of information it needs to keep, then, by definition, any information falling outside that could be legitimately deleted.

If this level of content metabolism could be controlled automatically, regularly and effectively, it would free up critical IT storage resources and the corresponding budget, which could be invested in growth projects instead.

What organisations need is the equivalent of a thyroid gland: a centralised Information Lifecycle Governance mechanism that monitors all the different retention requirements, regulates the content metabolism and drives a digestive system that extracts the value from the content and disposes of all the waste. Most organisations do not have such a regulating organ, or function, at all.

Sounds simple enough, but how can you create a centralised policy that determines precisely which 30% of the content needs to be preserved?

Studies conducted by the CGOC (Compliance, Governance and Oversight Council) have shown that there are only three key reasons why companies need to preserve data for any length of time:

  • Regulatory obligation – controlled by Records Managers
  • Litigation – controlled by the Legal department
  • Business Utility – controlled by each business function or department.

These are the three groups in the organisation that are responsible for the metabolic rate of content. Yet these groups rarely connect with each other, do not use the same terminology and, certainly, have never had common policies and control mechanisms that they can communicate to IT. The legal group issues data preservation orders (legal holds) to custodians. Records Managers define taxonomies, fileplans and retention schedules, and task the business with abiding by them. Business functions have more important things to do (like… keeping the business running) and, frankly, don’t have much appetite for understanding, let alone complying with, either legal hold orders or retention schedules. Business functions need the correct information to be available to them, at the right time, to make decisions and to service their customers.

And who has the responsibility to physically protect, or to destroy, digital information? The IT group, which is not usually part of any of the conversations above.

At the heart of an Information Lifecycle Governance function is a unified policy engine: a common logical repository where Records Managers can document, manage and communicate their multiple retention schedules and produce consolidated fileplans; where the Legal group can manage its ongoing legal matters, issue legal hold and preservation orders and communicate with custodians and the other parts of the business; and where IT and the business functions can identify and document which information is stored in each device and each application, and the business requirements for information preservation. A place where all of these disparate groups can determine the value that each information asset brings to the business – for both structured and unstructured information.
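Reduced to data terms, the policy engine asks one question of every information asset: does any of the three groups still have a reason to keep it? A simplified sketch of that decision, with illustrative field names, might look like this:

```python
# Simplified sketch of the unified policy decision behind defensible disposal.
# Field names are illustrative; real catalogues track far richer metadata.
from dataclasses import dataclass
from datetime import date

@dataclass
class InfoAsset:
    record_class: str            # maps to the records manager's fileplan
    retention_expiry: date       # regulatory obligation (Records Managers)
    on_legal_hold: bool          # litigation (Legal department)
    business_value_until: date   # business utility (owning function)

def may_dispose(asset: InfoAsset, today: date) -> bool:
    """An asset may be disposed of only when all three reasons have lapsed."""
    return (
        asset.retention_expiry < today            # no regulatory reason to keep it
        and not asset.on_legal_hold               # no preservation order in force
        and asset.business_value_until < today    # no remaining business utility
    )

asset = InfoAsset("invoice", date(2019, 1, 1), False, date(2020, 6, 30))
print(may_dispose(asset, date(2025, 1, 1)))  # True: all three reasons have lapsed
```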

Once this thyroid function is established to control the content metabolism, it is key to connect it to the mechanisms that physically manage information – the “organs”. Connecting this policy engine to the document collection tools and repositories, records management systems, structured data archives, eDiscovery tools, tiered storage archives, etc., provides the instrumentation needed to monitor data growth, execute the policies and provide the auditability and defensibility required to justify regular content purging.

Conclusion

There is no quick fix for Content Obesity and, like medical obesity, it requires a fundamental change in behaviour. But it is achievable. Organisations need to design a governance model that transparently joins the dots: the business describes the information entities, based on their value and utility, mapping them to the asset, system and application descriptions that IT understands. Legal can then manage its legal holds and eDiscovery based on knowing what information exists, what part of the business it relates to and where it lives, not only who its custodians are. Compliance groups can then consolidate their records management directives and apply a unified taxonomy and disposition schedule, relevant to the territory and business function. When all of these policies are systematically connected to the data sources, IT can accurately identify what information should be preserved and, by definition, what information can be justifiably disposed of. (IBM calls this process Defensible Disposal.)

Content Obesity – Part 1: Diagnosis

Obesity: a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health, leading to reduced life expectancy and/or increased health problems

Content Obesity: An organisational condition in which excess redundant information has accumulated to the extent that it may have an adverse effect on business efficiency, leading to depleted budgets, reduced business agility and/or increased legal and compliance risks.

First of all, let me apologise to all the people who are currently suffering from obesity, or who are supporting friends and family that do. I have no intention of making fun of obese people and I have great sympathy and respect for the pain they are going through. I lost my best friend to a heart attack. He was obese.

In a recent conversation with a colleague about Information Lifecycle Governance and Defensible Disposal, I made a casual remark about an organisation suffering from Content Obesity. I have to admit that it was an off-the-cuff remark, but it conveyed very succinctly the picture I was trying to paint. Since then, the more I think about this analogy, the more sense it makes.

People are not born obese, they become obese. And they don’t become obese overnight; it’s a slow, steady process. Unless it’s addressed early, the problem grows in very predictable stages: gaining weight, being overweight, being obese, being morbidly obese, dying. Most people, however, do not want to acknowledge the problem. They live in denial, they make excuses, they make jokes, until it is often too late to reverse the process.

Organisations consume and generate content at an incredible rate: IDC’s Digital Universe study (2011) predicts an information growth factor of 50x between 2010 and 2020. Just to give that figure some context: if an average grown-up grew at the same rate, they would weigh 3.5 tons by 2020! Studies we conducted with our own customers put the annual growth rate at a slightly more conservative 35-40% per year, which is still significant.

We love our digital content these days, we can’t get enough!

We all create office files and our presentations are growing larger, our email rate is not slowing down (we have several accounts each), we communicate with our customers electronically more than ever before, we collaborate inside and outside the firewall, we engage in social media, we text, we document life with our mobile phones’ cameras and we use YouTube videos extensively for marketing and education. We collect and analyse blogs and conferences and Twitter streams. We analyse historical transactional data and we create new predictive data. And if collecting our own streams is not enough, we also collect those of our competitors so that we can analyse them too. Our electricity meter collects data, our car collects data, our traffic sensors collect data, our mobile phones collect data, our supermarkets collect data. We have an average of two game consoles per family (all of which connect to the internet), we watch high-definition TV from every fixed or portable device that has a screen, our kids have mobile phones and PSPs and DSs and laptops. We have our home computer, our work laptop, our BYOD tablet and our smartphones. Our average holiday yields over 500 pictures, all of them 12 megapixels. And the kids take another 500 with their camera… In fact we generate so much digital data that we now have special ways of handling it, with Big Machines that manage Big Data to give Big Insights. And that is all wonderful, and it all exploded in the last five years.

I’ll say it again: We love digital content.

Going back to my health analogy, you could say that we gorge on content. The problem is, we are now overweight with content, since most of it has been accumulated without any particular thought for organisation or governance. So today we can’t lose weight, we can’t clean it up, because IT doesn’t know what it is, where it is, who owns it or if it’s of any use to anyone. And, frankly, because it’s far too much hassle and we have better things to do. It’s all digital so… “storage is cheap, we’ll just buy some more storage”: a staggering 78% of respondents to another recent study stated that their strategy for dealing with data growth was to “buy more storage”!

Newsflash: storage is not cheap! By the time you create your high-availability, tier-1 storage with 3 generations of backup tapes, put it in a data centre, pay for electricity and air-conditioning, and pay people to manage it, it’s no longer cheap. And even if storage prices go down by 20% per year, data growing at 40% still leaves your bill roughly 12% higher each year (1.40 × 0.80 = 1.12). Simple maths!
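A quick back-of-the-envelope calculation shows the compounding (the starting bill is purely illustrative):

```python
# Back-of-the-envelope: 40% annual data growth vs. a 20% annual price decline.
cost = 100.0  # illustrative storage bill in year 0
for year in range(1, 6):
    cost *= 1.40 * 0.80  # more data, cheaper per-unit storage
    print(f"Year {year}: {cost:.0f}")  # the bill still grows ~12% a year
```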

Most organisations are still in denial about the problem. The usual answer to the question “How much storage do you currently have and how much does it grow each year?” is “We don’t really know, we never measured it that way”. Well, I would argue that whoever is writing the cheque to the storage vendors every year ought to know.

Fortunately, for large multinational organisations (banks, pharmaceuticals, energy, etc.), the penny has finally dropped. A growth rate of 40% on a storage estate of 20 petabytes translates to tens of millions in additional storage costs per year. In an economy where IT budgets are shrinking, this is not a pleasant conversation to have with your CFO. These organisations are now self-diagnosed as Content Obese, and are desperately looking for ways to curb the growth before they become Morbidly Obese.

And, similarly to the human disease, Content Obesity has side effects. Even if you could somehow overcome (or overlook, or sweep under the carpet…) the cost implications, it creates huge health risks for the organisation.

Firstly, it creates risks for the business. Unruly, high volumes of content clog up processes, the arteries of the business. Content that is lost in the bulk, uncategorised and not readily available to support decision making, slows down the flow of information across the organisation. Content that is obsolete or outdated can create confusion and lead to incorrect decisions. Unmanaged content volumes do not lend themselves to fast-changing business models, marketing innovation, shared services or better customer support. And by consuming huge amounts of IT capital, they also stifle investment and innovation in new business services.

Secondly, it creates a huge legal risk. All electronic content in the organisation is potentially discoverable. The legal group has a duty to preserve information that is relevant to litigation. When information is abundant and not governed, the only method the legal group has to identify and preserve it is to notify all the people who may have access to it – custodians – asking them to protect it. This approach is inaccurate, expensive and time consuming. And when it comes to delivering that information to opposing parties or the courts, the organisation has to sift through these huge volumes of content to identify what is actually relevant, often incurring huge legal fees in the process. (Unashamed plug: if you are interested in finding out more about the role of Information Governance in UK civil litigation, I recommend this excellent IBM paper authored by Chris Dale, respected author of the eDisclosure Information Project.)

Finally, Content Obesity creates a huge compliance risk. Different regulations dictate that records are kept for defined periods of time. Privacy and data protection regulations dictate that certain types of content are disposed of after defined periods of time. Records Managers often have to comply with multiple (and often conflicting) regulations, from multiple jurisdictions, affecting hundreds of systems and millions of records. An ever-growing volume of unclassified content means that records cannot be correctly identified, disposition schedules cannot be executed consistently and policies remain in a binder on the shelf (or in a PDF file somewhere on the intranet). Regulatory audits become impossible, wasting valuable resources and often leading to significant fines (as the regulator put it in one of many examples: “These failings were made worse by their inability to determine the areas in which the breakdown in its record keeping systems had occurred“).

So, how much of that content do organisations actually need to keep? And who has the responsibility and the right to get rid of it?

Next: Content Obesity – Part 2: Treatment

Lawyers are from Mars, Technology is from Venus

September 16, 2011

I spent two excellent days last week at Legal Week’s Corporate Counsel Forum, where I met several new and interesting people and learned an awful lot of things I didn’t know.

But I left the conference very frustrated.

The forum audience comprised primarily senior lawyers: General Counsel and Heads of Legal departments. The topics covered were as wide-ranging as crisis management, the ‘moral’ compass, employment, the Bribery Act, ‘Tesco’ law, cross-border teams, intellectual property, competition, etc., etc. Fascinating subjects, some of which, admittedly, I knew nothing about and learned a lot from. It gave me a small insight into “a day in the life of a General Counsel” and the sheer diversity of things they have to be knowledgeable about, deal with and protect themselves (and their company) from.

And in 8 out of 10 conference sessions I wanted to shout: “There is a solution that can help here!”.

It amazes me (and frustrates me!) how much of the technology that other parts of the organisation take for granted seems to be absent from the legal department. As if they are the poor relatives in the organisation. I am not talking about highly specialised legal technologies such as eDiscovery, Content Analytics or even Information Risk & Compliance Governance (although these too are available and seem to be missing from many legal officers’ armoury, but that’s another conversation…). I am talking about basic capabilities that make the daily office operation significantly more efficient:

  • Digitising paper – avoiding the costs and delays of shifting piles of paper around, and the risk of losing them by accident or in a crisis
  • Electronic document repositories – managing security and access controls, reducing duplication, managing versions, allowing online access from anywhere and simple searching
  • Case management – allowing lawyers to organise their work, negotiate with third parties, monitor progress, apply rules and generate reports automatically instead of using spreadsheets
  • Email management – capturing, filtering, organising and routing emails, ensuring compliance
  • Collaboration software – communicating amongst large teams, dispersed in different geographies and timezones

The list goes on… This isn’t trailblazing, these are automation tools and capabilities that have proven their value and have been helping organisations remove basic inefficiencies, for the last 10-20 years.

I am not advocating that technology is the answer to everything. Some business problems can be improved with some common sense and a bit of reorganising. Others are far too complex to be tackled by technology alone. But there is certainly enough basic technology to make a General Counsel’s life much simpler.

One of the key messages coming out of the conference was the resource constraints that legal departments are facing. Too much to do, too little time, too few people, too much information to process, too much knowledge to upkeep, too many risks to avoid, too many departments to coordinate, too many regulations to adhere to and too many stakeholders to appease.

So why waste time on menial tasks that can be simplified, automated or eliminated by simple tools, instead of using that time effectively to add value to the elements of the process where technology can’t help?

Whenever I ask that question, the answer is typically “We don’t control the budget” or “We have other priorities” or “We don’t have the time to look at new tools”, etc.

Excuses! The question here is not “Have I got time to worry about technology?”. The question is “Can I afford the luxury of NOT using it?”. If these technologies can improve productivity and reduce costs in the operations department, the marketing department, the sales department and the procurement department, why not use them to improve the efficiency of the legal department too?

(I would love to hear your views on this, especially if you are an in-house lawyer or work in a legal department.)

Law – The fire within…

Writing is a form of therapy (Jan Timmons)

(Photo by kind permission of Jan Timmons)

I must confess: I am not a legal expert and my closest encounter with a courtroom is through the safety of a television screen.

I realised recently however, that inside my brain I have multiple and conflicting views of “The law”.

I grew up in Athens and, even though my grandfather was a lawyer (or maybe echoing his cynicism), I have grown up with an inherent mistrust of all things ‘legal’: legalese language that seeks to confuse and befuddle the average mortal; vulture lawyers who procrastinate in order to maximise their hourly fees; legal cases that run for years and years because scheduled court hearings get postponed on technicalities; the list goes on…

In another compartment of my brain lives the virtuous, almost glamorous, world of TV courtroom drama, with a very diverse portrayal of reality ranging from Rumpole of the Bailey and Kavanagh QC to Ally McBeal and Law and Order. Where young and old conscientious lawyers burn the midnight oil over endless stacks of case law books, looking for the one nugget that will exonerate their ill-accused client, and where honour, ethics and the omnipotent wisdom of the presiding judge prevail to save the day.

Many, many years ago, I was involved in the delivery of early, bespoke Document Management systems to large law firms, such as Clifford Chance, Linklaters, Cameron Markby Hewitt (as it was then…) and others, which gave me yet another perspective: one where law firm partners are considered akin to deities, hordes of hopeful legal students and young lawyers work through endless hours of menial tasks to establish themselves on a career ladder, information is king but information systems are a foe, and laborious, manual processes represent the status quo. Admittedly that experience was over ten years ago, but it was a cut-throat business then and I doubt that much has changed since.

More recently, I have been marginally involved with the world of electronic discovery, reading about legal proceedings on both sides of the Atlantic, often through the excellent commentary of Chris Dale’s insightful blog. Through this, I have seen a more earthy view of litigation, where monetary considerations, negotiations, common sense (if such a thing exists…), judgments written in plain English, project management, geopolitical variances and the general admission that nobody, not even judges, is immune to the complexities of technological innovation, paint a picture of a legal environment that looks, well… almost business-like! Commercial reality (and the associated astronomical costs of litigation) often dictates that cases are assessed, negotiated and settled on the merits of cost and objectives, not just “fairness” and “justice”.

Which of the views in my brain is more realistic? I don’t know. I find all of them fascinating: I am intrigued, watching an industry which is thousands of years old, constantly evolving and seeking to learn new tricks, acknowledging its own shortcomings and fighting to keep up with technological innovation – just like the rest of us!
