
Posts Tagged ‘Content Analytics’

I buy, sell, market, service… When did ECM become a Monte Carlo celeb?

I am writing this at 40,000 feet, on a morning flight to Nice, final destination Monte Carlo, for what promises to be a very busy 4-day event. The European leg of IBM’s Smarter Commerce Global Summit runs from 17-20 June at the Grimaldi Forum in Monaco, and in a strange twist of fate I am neither a speaker nor an attendee. I am staff!

The whole event is structured around the four commerce pillars of IBM’s Smarter Commerce cycle: Buy, Sell, Market and Service. Each pillar represents a separate logical track at the event, covering the software, services and customer stories.

Enough with the corporate promo already, I hear you say: where does Enterprise Content Management come into this? Surely Smarter Commerce is all about retail, transactional systems, procurement, supply chain, CRM and marketing campaign tools?

Yes and no. It’s true that in the fast-moving, high-volume world of commercial transactions, these tools share the limelight. But behind every new promotion, there is a marketing campaign review; behind every supplier and distributor channel, there is a contract negotiation; behind every financial transaction, there is compliance; behind every customer complaint, there is a call centre; and behind every customer loyalty scheme, there is an application form. ECM underpins every aspect of commerce. From the first approach to a new supplier to the friendly resolution of a loyal customer’s problem, there is a trail of communication and interaction that needs to be controlled, managed, secured and preserved. Sometimes paper-based, but mostly electronic.

ECM participates in all the commerce cycles: Buy (think procurement contracts, purchase orders and supplier correspondence), Sell (invoices, catalogues, receipts, product packaging, etc.) and Market (collateral review & approval, promotion compliance, market analysis, etc.).

But the Service cycle is where ECM makes its strongest contribution, and its role goes well beyond providing a secure repository for archiving invoices and compliance documents: the quality, speed and efficiency of customer service rely on understanding your customer. They rely on knowing what communication you have previously had with your customer or supplier (regardless of the channel they chose), on understanding their sentiment about your products, and on anticipating and quickly resolving their requests and their problems.

As a long-standing ECM advocate, I have had the privilege of leading the Service track content at this year’s IBM Smarter Commerce Global Summit in Monaco. It was a roller-coaster two-month process, during which we assembled over 250 breakout sessions for the event, covering all topics related to the commerce cycles and, in particular, customer service: Advanced Case Management for handling complaints and fraud investigations; Content Analytics for sentiment analysis on social media; mobile interaction monitoring to optimise the user’s experience; a channel-independent, 360-degree view of customer interaction; digitising patient records to minimise hospital waiting times; paperless online billing; collaboration tools to maximise the responsiveness of support staff; and many more.

A global panel of speakers with a common goal: putting the customer at the very centre of the commercial process and offering the best possible experience with the most efficient tools.

More comments after the event…

Lawyers are from Mars, Technology is from Venus

September 16, 2011

I spent two excellent days last week at Legal Week’s Corporate Counsel Forum, where I met several new and interesting people and learned an awful lot of things I didn’t know.

But I left the conference very frustrated.

The forum audience comprised primarily senior lawyers: General Counsel and heads of legal departments. The topics covered ranged from crisis management, the ‘moral’ compass, employment, the Bribery Act, ‘Tesco’ law and cross-border teams, to intellectual property, competition and more. Fascinating subjects, some of which admittedly I knew nothing about and learned a lot from. The forum gave me a small insight into “a day in the life of a General Counsel” and the sheer diversity of issues that they have to be knowledgeable about, deal with, and protect themselves (and their company) from.

And in 8 out of 10 conference sessions I wanted to shout: “There is a solution that can help here!”.

It amazes me (and frustrates me!) how much of the technology that other parts of the organisation take for granted seems to be absent from the legal department, as if lawyers were the poor relations of the organisation. I am not talking about highly specialised legal technologies such as eDiscovery, Content Analytics or even Information Risk & Compliance Governance (although these too are available and seem to be missing from many legal officers’ armoury, but that’s another conversation…). I am talking about basic capabilities that make daily office operations significantly more efficient:

  • Digitising paper – avoiding the cost and delay of shifting piles of paper around, and the risk of losing them by accident or in a crisis
  • Electronic document repositories – managing security and access controls, reducing duplication, managing versions, allowing online access from anywhere and simple searching
  • Case management – allowing lawyers to organise their work, negotiate with third parties, monitor progress, apply rules and generate reports automatically instead of using spreadsheets
  • Email management – capturing, filtering, organising and routing emails, ensuring compliance
  • Collaboration software – communicating amongst large teams, dispersed in different geographies and timezones

The list goes on… This isn’t trailblazing: these are automation tools and capabilities that have proven their value and have been helping organisations remove basic inefficiencies for the last 10-20 years.

I am not claiming that technology is the answer to everything. Some business problems can be improved with common sense and a bit of reorganising. Others are far too complex to be tackled by technology alone. But there is certainly enough basic technology available to make a General Counsel’s life much simpler.

One of the key messages coming out of the conference was the resource constraints that legal departments are facing. Too much to do, too little time, too few people, too much information to process, too much knowledge to upkeep, too many risks to avoid, too many departments to coordinate, too many regulations to adhere to and too many stakeholders to appease.

So why waste time on menial tasks that can be simplified, automated or eliminated by the use of simple tools, instead of using that time effectively to add value to the parts of the process where technology can’t help?

Whenever I ask that question, the answer is typically “We don’t control the budget”, “We have other priorities” or “We don’t have the time to look at new tools”.

Excuses! The question here is not “Have I got time to worry about technology?”. The question is “Can I afford the luxury of NOT using it?”. If these technologies can improve productivity and reduce costs in the operations, marketing, sales and procurement departments, why not use them to improve the efficiency of the legal department too?

(I would love to hear your views on this, especially if you are an in-house lawyer or work in a legal department.)

“Hey, Watson! Is Santa real?” – Why IBM Watson is an innocent 6-year old…

I love the technology behind “IBM Watson”. I think it’s been a long time coming and I don’t doubt that in a matter of only a few years, we will see phenomenal applications for it.

Craig Rhinehart explored some of the possibilities of using Watson to analyse social media in his blog “Watson and the future of ECM”. He also set out a great comparison of “Humans vs. Watson”, in the context of a trivia quiz. However, I believe that there is a lot more to it…

Watson is a knowledgeable fool: a 6-year-old kid that can’t tell fact from fiction.

When Watson played Jeopardy!, it ranked its possible answers against each other, weighted by its confidence that it had understood the question correctly. But Watson did not for a moment question the trustworthiness of its knowledge domain.

Watson is excellent at analysing a finite, trusted knowledge base. But the internet and social media are neither finite, nor trusted.

What if Watson’s knowledge base is not factual?

Primary school children are taught to use Wikipedia for research, but not to trust it, as it’s not always right. They have to cross-reference multiple sources before they accept the most likely answer. Can Watson separate facts from opinions, hearsay and rumours? Can it detect irony and sarcasm? Can it distinguish factual news from political propaganda and tabloid hype?

If we want to make Watson’s intelligence as “human-like” and reliable as possible, and to use it to drive decisions based on internet or social media content, its “engine” requires at least one more dimension: source reliability ranking. It has to learn when to trust a source and when to discredit it. It needs a “learning” mechanism that re-evaluates the reliability of its sources, as well as its own decision-making process, based on the accuracy of its outcomes. And since its knowledge base will be constantly growing, it also needs to re-assess previous decisions in the light of new evidence (i.e. a “belief revision” system).
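To make the idea concrete, here is a minimal sketch of what such a source-reliability layer might look like. It illustrates the principle only, not how Watson actually works: the class, the scoring formula and the example sources are all invented for this post.

```python
# Minimal sketch of a source-reliability layer (all names hypothetical).
# Each source starts with a neutral trust score; every time an answer
# derived from that source is later verified or disproved, the score
# is updated, and candidate answers are re-ranked accordingly.

class SourceTrust:
    def __init__(self):
        self.stats = {}  # source -> [times verified, times disproved]

    def score(self, source):
        verified, disproved = self.stats.get(source, [1, 1])
        return verified / (verified + disproved)  # 0.0 .. 1.0

    def feedback(self, source, was_correct):
        stats = self.stats.setdefault(source, [1, 1])
        stats[0 if was_correct else 1] += 1

def rank_answers(candidates, trust):
    # candidates: list of (answer, engine_confidence, source).
    # Weight the engine's own confidence by how much we trust the source.
    return sorted(candidates,
                  key=lambda c: c[1] * trust.score(c[2]),
                  reverse=True)

trust = SourceTrust()
trust.feedback("tabloid.example.com", was_correct=False)
trust.feedback("encyclopedia.example.org", was_correct=True)

answers = [("Santa is real", 0.9, "tabloid.example.com"),
           ("Santa is a folk legend", 0.7, "encyclopedia.example.org")]
print(rank_answers(answers, trust)[0][0])  # the better-sourced answer wins
```

A real “belief revision” system would go one step further: whenever a source’s trust score changes, previously stored answers that relied on that source would be re-ranked against the new evidence.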

Today, Watson is a knowledge-regurgitating engine (albeit a very fast and sophisticated one). The full potential of Watson will only be realised when it becomes a learning engine. Only then can we start talking about real decision intelligence.

What IS the value of a document?

September 1, 2010

I just read an interesting blog post, “What is the cost of a Lost Document”, by Jeff Shuey.

The points he makes about the risk of not capturing information appropriately are of course valid, and often quoted in the world of document management. But it got me thinking about a more fundamental issue: how do you determine the real value of a document?

Clearly not all documents have the same value: losing a grocery receipt is fundamentally different to losing your driving licence or your passport. Misplacing an expenses claim receipt might be worth £20; misplacing a vital piece of evidence in a litigation case might be worth £20 million. Today’s Financial Times is vital for making business decisions tomorrow, but worthless recycling paper next week.

Which makes generic numbers, like the PriceWaterhouse ones quoted in Jeff’s blog, seem about as relevant as ordering clothes for 2.75 children.

So how can we measure the value of a document? What is it worth? None of the Content Management systems that I’m aware of today has provisions for assigning an individual value to stored content, let alone managing its lifecycle differently based on that value.

Is it even possible to determine the value of a document? (And for the purposes of this discussion “a document” could be anything from a 140-character tweet message, to a 300,000-page drug application…) Where does the value come from?

  • The cost and effort of preparing or acquiring it?
  • The cost of storing it and managing it?
  • The context in which it has been used in the past, or may be used in the future?
  • Its rarity or brevity or accuracy?
  • Its relevance now? Its potential relevance in the future?
  • How often it has been accessed and referenced and by whom?
  • Who it is relevant to?
  • The length of time that it retains its value?
  • At what point does its value peak and when does it wane?
  • The risk it carries, by its existence or by its absence?
  • etc., etc. …

The list goes on! And this is before we even start thinking about assigning metrics or actual monetary value to any of the above.

Common sense says it’s probably some combination of all of the above. But do we measure any of this today? Should we?

Imagine the potential scenarios if every document in a Content Management system carried a continually adjusted “Relative Content Value” property (You’ve heard it here first: a document’s RCV! 🙂 ). We could easily foresee…

  • A system that automatically discards a document, because it’s readily and securely available online, storing a reference instead
  • A system that automatically archives and protects an email that has been used in contract negotiations
  • A system that automatically hides a document that contains personal or confidential information
  • A system that automatically discards or hides documents that have repeatedly appeared in search results but nobody chooses to read
  • A system that automatically relocates content to storage media with different risk profiles, based on its value
  • A system that automatically calculates the premium for insuring against the loss of its content, based on the total value of that content to the organisation
  • A system that can determine the likely life expectancy of a document, based on the history of how similar documents have been accessed in the past.
  • A system that would notify you, the author, when facts in your original research sources have been disputed or have changed, rendering your document misleading.
  • etc., etc. …

Actually, we have the technology today to implement most of these things, if only we knew what that “relative content value” was. What we are missing is a coherent way of calculating and storing that value on an ongoing basis.
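For the sake of argument, here is a back-of-the-envelope sketch of how such an RCV might be composed from a handful of the factors listed above, and how a repository could act on it. Every factor, weight and threshold below is invented purely for illustration; a real model would need far more rigour.

```python
# Hypothetical "Relative Content Value" calculation; the factors,
# weights and disposition thresholds are all made up for illustration.

from dataclasses import dataclass

@dataclass
class Document:
    acquisition_cost: float   # effort/cost to produce or acquire (0..1)
    access_frequency: float   # normalised recent access count (0..1)
    risk: float               # risk carried by its existence/absence (0..1)
    age_years: float

WEIGHTS = {"acquisition_cost": 0.2, "access_frequency": 0.4, "risk": 0.4}

def rcv(doc: Document) -> float:
    base = (WEIGHTS["acquisition_cost"] * doc.acquisition_cost
            + WEIGHTS["access_frequency"] * doc.access_frequency
            + WEIGHTS["risk"] * doc.risk)
    return base * (0.9 ** doc.age_years)  # value wanes as the document ages

def disposition(doc: Document) -> str:
    value = rcv(doc)
    if value < 0.1:
        return "discard, keep a reference"
    if value < 0.5:
        return "move to cheap archive storage"
    return "keep online, protected"

grocery_receipt = Document(0.01, 0.0, 0.0, 2)
contract_email = Document(0.30, 0.6, 0.9, 1)
print(disposition(grocery_receipt))  # discard, keep a reference
print(disposition(contract_email))   # keep online, protected
```

The interesting part is not the arithmetic but the plumbing around it: the inputs (access counts, risk flags, age) would have to be captured and refreshed continually, which is exactly the “ongoing basis” that today’s repositories lack.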

Which brings me back to my original conundrum: Is it ever possible to determine what IS the value of a document? How?

George

OMG! ECM is OCD for LOB!

We are obsessed! It dawned on me the other day, when I was trying to write up a requirements questionnaire for a client who is implementing an archiving system.

When I say “we”, I mean the ECM professionals. You need to have a good deal of OCD (Obsessive Compulsive Disorder) to be in the ECM business, whether we are records managers, archivists, consultants, document managers or process designers.

We love things being neat. We love organising information. We obsess about making sure that everything is captured and has a place to go. We love our folders and hierarchies and fileplans. We put labels on everything: We tag and categorise, and add metadata. And then we make lists, and lists of lists, to be able to find stuff. We need rules to abide by, and ideally we like to make the rules ourselves. And we like things that repeat and work the same way every time. We want to know who is who and we are paranoid about security, in case someone sees something they shouldn’t. We need things to be predictable and under control and we don’t like exceptions.

Doesn’t that sound like OCD to you? Come on, admit it. I dare you to try and convince me otherwise…

Now, is that a bad thing? No, not necessarily. The business, and to a certain degree the law, needs this kind of rigour and precision. Vast amounts of information would be forever lost at the bottom of the sock drawer if we didn’t organise things properly. Decisions would take a lot longer, and any kind of auditability and transparency would be questionable. The get-on-your-bike-and-see-where-it-takes-you approach does not work in business. Correct? Well, maybe…

ECM is on a collision course. The world of tight controls and neat labels fundamentally contradicts the free Enterprise 2.0 spirit of collaboration and social media. Blogs, wikis, Twitter and Google Wave are there to allow everyone to jump in and do their bit. In real time. There are very few imposed rules. The blending of personal opinion and work interaction is encouraged. Traditional barriers and organisational structures (from the department to the whole corporation, or even across industries) are torn down in favour of exchanging ideas and learning from each other. We don’t have to preserve everything. It’s OK for information to end up in a heap, where analytics can find insights that traditional ECM discipline couldn’t. It’s OK for large communities of common interest – very much like Open Source software – to contribute, correct, expand and share knowledge for the benefit of the common good. It’s OK to have ad-hoc processes that define themselves reactively, based on contextual priorities instead of a prescribed recipe.

All of this seemingly anarchic chaos is revolutionising information management and knowledge sharing. But it has also created a lot of anxiety for those of us OCD types who still think in terms of folders and hierarchies, metadata and labels and disposition dates. Will there be a new generation of “free-style” ECM to cater for this? Will we end up with two information management disciplines, “tightly managed” and “freeflow”? Will the legal and regulatory systems move with the times, or shut their eyes and pretend the change is not happening? Only time will tell…

But next time you are thinking of architecting an ECM environment, don’t assume that your neat little boxes and clearly labelled compartments will be there forever. They will not!

Are Content Analytics turning the grubby ECM worm into a butterfly?

Colleagues who have known me for a while have all heard me bemoaning the use of the term “unstructured” to describe text-based content. Without boring you to tears again, my main issue is that the ECM industry has largely treated content files as amorphous “unstructured” blobs, ignoring the rich value that is locked inside these content objects.

For the last twenty years or so, ECM systems have provided a cocoon where documents and media files have been stored, preserved, secured, archived and generally left to their own devices. But we have been focusing on protecting the whole container, the box, based on the label it carries on the outside, and only looking inside the boxes one at a time.

There is change afoot! 2010 looks set to be the year of Content Analytics, which promises to finally unlock the value trapped inside our gigantic, festering ECM repositories. And if the early signs of success of IBM’s new Content Analytics software are anything to go by, we are starting to witness a fundamental transformation in the way content is leveraged in large organisations.

Much in the same way that Data Warehousing and Business Intelligence transformed the bland data storage provided by databases in the mid-90s, Content Analytics is today bringing natural language processing, trend analysis, contextual discovery and predictive analytics to the “unstructured” world.

Purists will argue that these algorithms are not new and, to a certain extent, that is true. However, this is the first time we are seeing these technologies applied easily (i.e. with off-the-shelf products, without the need for a PhD statistician or linguist by your side…) in real commercial applications, to solve real business problems: car manufacturers avoiding recalls through early fault-trend analysis; pharmaceutical companies recognising equipment failure trends much earlier; large multinationals saving millions in litigation fees; and so on.
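To give a flavour of what “early fault-trend analysis” can mean in practice, here is a deliberately naive sketch: flagging components whose mentions in free-text complaints suddenly spike. The keyword matching is a crude stand-in for real natural language processing, and all of the data is invented.

```python
# Illustrative only: naive early-warning trend detection over free-text
# complaint records. Keyword matching stands in for real NLP.

from collections import Counter

COMPONENTS = ["brakes", "battery", "steering", "airbag"]

def component_counts(complaints):
    counts = Counter()
    for text in complaints:
        lowered = text.lower()
        for part in COMPONENTS:
            if part in lowered:
                counts[part] += 1
    return counts

def emerging_faults(last_month, this_month, ratio=2.0, min_count=5):
    # Flag components whose mentions have at least doubled month on month
    # and exceed a minimum volume, to filter out background noise.
    before = component_counts(last_month)
    now = component_counts(this_month)
    return [part for part in COMPONENTS
            if now[part] >= min_count and now[part] >= ratio * max(before[part], 1)]

january = ["brakes squeal a little", "battery died overnight"]
february = ["brakes failed on motorway", "grinding brakes", "brakes locked up",
            "soft brakes pedal", "brakes warning light", "battery flat again"]
print(emerging_faults(january, february))  # ['brakes']
```

The commercial systems obviously do far more (lemmatisation, synonyms, sentiment, statistical significance), but the shape of the problem is the same: counting signals in text over time, and surfacing the trends early enough to act on them.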

The ECM industry may still be thriving, but in terms of innovation it has reached a plateau that makes most of us uncomfortable (or complacent, depending on your point of view). Basic content management functionality is being commoditised, with CMIS, Open Source and SharePoint leading the charge. There’s nothing wrong with that; it’s the natural maturity curve for any 20-year-old technology sector. We’ve created a very big ECM cocoon and we’ve filled it to the brim with content worms. It’s time to innovate again!

Making no apologies for the crass analogy (it is March, after all, and allegedly spring is coming…), Content Analytics is finally starting to poke at the cocoon, letting the value of content slowly emerge, transformed from archived fodder into real business insight.
