Archive

Posts Tagged ‘IBM’

The mobile (R)evolution – A historical review

Unless you live in a cave, you will not have failed to notice that mobility has taken over our lives. As I write this, I’m sitting in a train full of commuters who, almost to a man, are holding a smartphone, a tablet or a laptop. The odd ones out are reading a book… on a Kindle.

There is no denying that mobility is an established phenomenon and it’s here to stay. The IT industry is actively embracing it as the new Amalthean horn (alongside that other nebulous revolution, the Cloud). With Mobile First (IBM), The Mobile Imperative (Gartner), Enterprise Mobility (Accenture), 3rd Platform (IDC), and so on, one by one every major vendor and analyst is releasing the “mobile” strategy that will drive growth in the next 3, 5 or 10 years. And undoubtedly, it will.

But is our current obsession with mobility really that revolutionary? Is the change in our culture and behaviour really so sudden and dramatic? Prompted by a very stimulating conversation at AIIM’s Executive Leadership Council (see the recent paper: The Mobile Reality), I decided to look at the historical milestones of computer mobility. Its heritage, if you like. The picture it paints is very interesting.

Mobile Evolution

Let’s look at the impact of mobility on a decade-by-decade basis.

1960

The starting point. Computer access was restricted to a single physical location, determined by the location of the computer machines themselves. Access was granted to a few selected, highly trained computer boffins, who were responsible for allocating the computing resource on a time-share basis and delivering the results to the outside world. There was zero mobility involved at this stage.

1970

The ’70s introduced the first layer of mobility to the organisation, and it had a transformational impact. “Dumb” terminals could be distributed across the organisation, connected over RS-232 serial lines. Mobility was location-based: connectivity was hard-wired, and employees had to go physically to wherever the terminal was in order to access it. Systems became multi-user, giving selected, trained, specialist users simultaneous access to computing power on demand. Suddenly, computing power and business applications were no longer constrained by the physical location of the computer, but were distributed to core departments across the organisation.

1980

The ’80s saw the introduction of PCs. A hub-and-spoke revolution, where autonomous business machines could execute tasks locally, wherever they were located, and could communicate transparently with each other and with centralised servers. More “intelligent” connectivity through network cables ushered in the client-server and email era. Mobility moved outside the constraints of the physical building. With the advent of “a PC on every desk”, users could work anywhere within the organisation and communicate with each other, from building to building and from town to town. Or copy their work onto a floppy disk and continue on their PC at home.

1990

In the ’90s mobility went through another revolutionary phase. PCs gave way to laptops, work could be taken anywhere, and modems allowed dial-up connectivity back to the office. Location, for users who had been issued with a company laptop and modem access, was no longer constrained to the confines of the organisation. They could easily work connected from home, or from a customer site anywhere in the world. Mobile phones became a corporate tool, eventually obliterating phonecards and phoneboxes, and wireless handsets brought telephone mobility within the home. All that mobility created its own cultural revolution, bringing faster on-site customer support, home-working and flexible hours. At the same time, the internet and world-wide web broke out of the military and academic domains, and the first commercial internet applications started appearing.

2000

With the millennium Y2K scare out of the way, mobility re-invented itself again. Website access and intranets meant that every employee could access the corporate environment regardless of the physical machine they were using: a corporate notebook, home PC, internet café or hotel lobby would be equally useful for checking emails, writing the odd MS-Office document, or finishing the latest marketing presentation. Virtually every employee had remote access to the organisation, and was actively encouraged to use it to reduce travelling and office space. Internet commerce became universally accepted, transforming the retail market. Computer form factors started shrinking: lighter notebooks and PDAs with styluses, touch screens and handwriting recognition (remember Palm and Psion?) became the first truly portable devices. Mobile phones penetrated the personal consumer market, while email and text messaging (SMS) started replacing phone calls as the preferred media for short conversations. ADSL networks brought affordable broadband connectivity to the home, and the first 3G networks and devices allowed internet connection “on the go”.

2010

Which brings us to today: enter the iPhone and iPad generation, where the preferred device form factor is smaller (smartphones), more portable (tablets, phablets) and more universal (smart TVs, WiFi cameras, etc.). Mobile connectivity became a bit more reliable and a bit faster, over faster 3G and 4G networks on the street. Fibre-optic broadband at home, and WiFi in fast-food restaurants and coffee chains, brought faster downloads and HD streaming. Consumers are moving to apps (rather than websites) as the preferred interface; the internet has become accessible to everyone, and is now the preferred customer interaction medium for many businesses. The delineation between personal computing and work computing has more or less disappeared, and the internet (as well as the office) can be accessed almost anywhere and by everyone. SMS text messaging is still prevalent (but virtually instant and virtually free), while asynchronous email communication has declined in favour of synchronous social network access, instant messaging (Skype, Twitter, FB Messaging, WhatsApp) and video chats (Skype, Lync, FaceTime, Hangouts).

Ubiquity

But we’re not quite there yet! The much-heralded “ubiquitous” access to information, or “24×7” connectivity, is still a myth for a lot of us. While I constantly have to decide whether my phone should connect via 3G or WiFi (a decision driven by cost and availability); while I can have internet access on a transatlantic flight, but not in a commuter train; while my broadband at home drops the line every 20 minutes because it’s too far from the telephone exchange; while my WiFi router signal at one end of the house does not reach the dining room at the opposite end; and while I need a 3G signal booster at home (in a town of 450,000 people) because none of the mobile networks around me have a strong enough signal, mobile connectivity is not “ubiquitous”, it’s laboured.

Having lived and worked through 30 years of mobility transformation, I would argue that today’s “mobile revolution” is more evolutionary than revolutionary. What we are experiencing today is just another step in the right direction. Mobility will continue to have a transformational effect on businesses, consumers and popular culture, just as computer terminals transformed the typical desktop environment in the ’70s and ’80s, and as modems enabled home-working and flexible hours in the ’90s and ’00s. I expect that in the next 5 years we will see true “permanently on” connectivity and even more internet-enabled devices communicating with each other. I also expect that businesses will become a lot more clever and creative in leveraging mobility.

Nevertheless, I don’t expect a mobile revolution.

A clouded view of Records and Auto-Classification

When you see Lawrence Hart (@piewords), Christian Walker (@chris_p_walker) and Cheryl McKinnon (@CherylMcKinnon) involved in a debate on Records Management, you know it’s time to pay attention! 🙂

This morning, I was reading Lawrence’s blog titled “Does Records Management Give Content Management a Bad Name?”, which picks up on one of the points in Cheryl’s article “It’s a Digital-First World: Five Trends Reshaping Records Management As You Know It”, with some very insightful comments added by Christian. I started leaving a comment under Lawrence’s blog (which I will still do, pointing back to this) but there were too many points I wanted to add to the debate and it was becoming too long…

So, here is my take:

First of all, I want to move away from the myth that RM is a single requirement. Organisations look to RM tools as the digital equivalent of a Swiss Army Knife, to address multiple requirements:

  • Classification – Often, the RM repository is the only definitive Information Management taxonomy managed by the organisation. Ironically, it mostly reflects the taxonomy needed by retention management, not by the operational side of the business. Trying to design a taxonomy that serves both masters leads to the huge granularity issues that Lawrence refers to.
  • Declaration – A conscious decision to determine what is a business record and what is not. This is where both the workflow integration and the auto-classification have a role to play, and where in an ideal world we should try to remove the onus of that decision from the hands of the end-user. More on that point later…
  • Retention management – This is the information governance side of the house. The need to preserve the records for the duration that they must legally be retained, move them to the most cost-effective storage medium based on their business value, and actively dispose of them when there is no regulatory or legal reason to retain them any longer.
  • Security & auditability – RM systems are expected to be a “safe pair of hands”. In the old world of paper records management, once you entrusted your important and valuable documents to the records department, you knew that they were safe. They would be preserved and looked after until you asked for them. Digital RM is no different: it needs to provide a safe haven for important information, guaranteeing its integrity, security, authenticity and availability, supported by a full audit trail that can withstand legal scrutiny.
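As a toy illustration of the retention requirement in the list above, a disposition date can be derived from a record’s class and declaration date. The schedule below is a minimal sketch: the record classes and retention periods are invented for illustration, and real schedules are driven by regulation and jurisdiction.

```python
from datetime import date

# Hypothetical retention schedule: record class -> retention period in years.
# Real schedules come from legal/regulatory requirements, not a code constant.
RETENTION_YEARS = {"invoice": 7, "contract": 10, "hr_file": 6}

def disposition_date(record_class: str, declared: date) -> date:
    """Return the earliest date the record may legitimately be disposed of."""
    years = RETENTION_YEARS[record_class]
    try:
        return declared.replace(year=declared.year + years)
    except ValueError:
        # Declared on 29 Feb and the target year is not a leap year.
        return declared.replace(year=declared.year + years, day=28)

print(disposition_date("invoice", date(2013, 6, 1)))  # 2020-06-01
```

In practice the disposition event would also be gated by legal holds and by a check that no regulation extends the period, but the date calculation is the mechanical core.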

Auto-categorisation, or auto-classification, relates to the first two of these requirements: Classification (using linguistic, lexical and semantic analysis to identify what type of document it is and where it should fit into the taxonomy) and Declaration (deciding if this is a business document worthy of declaration as a record). Auto-classification is not new; it has been available both as a standalone product and integrated within email and records capture systems for several years. But its adoption has been slow, not for technological reasons, but because, culturally, both compliance and legal departments are reluctant to accept that a machine can be good enough to be allowed to make this type of decision. And even though numerous studies have shown that machine-based classification can be far more accurate and consistent than a room full of paralegals reading each document, it will take a while before the cultural barriers are lifted. Ironically, much of the recent resurgence and acceptance of auto-classification is coming from the legal field itself, where the “assisted review” or “predictive coding” (just a form of auto-classification to you and me) wars between eDiscovery vendors have brought the technology to the fore, with judges finally endorsing its credibility [Magistrate Judge Peck in Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182 (S.D.N.Y. 2012), approving the use of predictive coding in a case involving over 3 million emails].
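A crude way to picture what an auto-classifier does is to score each document against a per-class vocabulary. Real engines use trained statistical and linguistic models rather than keyword lists, so everything here, from the class names to the keywords, is an illustrative assumption.

```python
import re

# Toy taxonomy: class -> characteristic keywords (illustrative only).
TAXONOMY = {
    "invoice":   {"invoice", "amount", "due", "vat", "payment"},
    "contract":  {"agreement", "party", "clause", "term", "signature"},
    "complaint": {"complaint", "dissatisfied", "refund", "apology"},
}

def classify(text):
    """Return (best class, confidence) from keyword overlap with each class."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {cls: len(words & kws) / len(kws) for cls, kws in TAXONOMY.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

doc = "Please find attached the invoice; payment of the amount is due in 30 days"
print(classify(doc))  # ('invoice', 0.8)
```

Even this toy exposes the real debate: the machine applies the same rule to every document with perfect consistency, which is exactly the property the studies cited above credit it with, and exactly what a room full of tired humans cannot guarantee.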

The point that Christian Walker is making in his comments, however, is very important: auto-classification can help, but it is not the only, or even the primary, mechanism available for auto-declaration. They are not the same thing. Taking the records declaration process away from the end-user requires more than understanding the type of document and its place in a hierarchical taxonomy. It needs the business context around the document, and that comes from the process. A simple example to illustrate this would be a document containing a pricing quotation. Auto-classification can identify what it is, but not whether it has been sent to a client or formed part of a contract negotiation. It’s that latter contextual fact that makes it a business record. Auto-declaration from within a line-of-business application or a process management system is easy: you already know what the document is (whether it has been received externally or created as part of the process), you know who it relates to (client id, case, process) and you know what stage of its lifecycle it is at (draft, approved, negotiated, signed, etc.). These give enough definitive context to accurately identify and declare a record, without the need to involve the users or resort to auto-classification or any other heuristic decision. That’s assuming, of course, that there is an integration between the LoB/process system and the RM system, to allow that declaration to take place automatically.
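The contrast can be sketched as a simple rule: declaration keys off process context fields, not document content. The field names (`client_id`, `lifecycle_stage`) and stage values below are hypothetical, standing in for whatever the LoB system actually records.

```python
# Lifecycle stages that represent a business event worth declaring as a record.
# The stage names are illustrative assumptions, not a standard vocabulary.
DECLARABLE_STAGES = {"sent_to_client", "signed", "approved"}

def should_declare(context):
    """Decide declaration purely from line-of-business context, not content."""
    return (
        context.get("client_id") is not None
        and context.get("lifecycle_stage") in DECLARABLE_STAGES
    )

draft = {"doc_type": "quotation", "client_id": "C-1042", "lifecycle_stage": "draft"}
sent  = {"doc_type": "quotation", "client_id": "C-1042", "lifecycle_stage": "sent_to_client"}
print(should_declare(draft), should_declare(sent))  # False True
```

Note that the two quotations are textually identical: no classifier could separate them, but the process context does so trivially, which is the heart of Christian’s point.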

The next point I want to pick up is the issue of cloud. I think cloud is a red herring in this conversation. Cloud should be an architecture/infrastructure and procurement/licensing decision, not a functional one. Most large ECM/RM vendors can offer similar functionality hosted on- and off-premises, and offer SaaS payment terms rather than perpetual licensing. The cloud conversation around RM, however, becomes its own sticky mess when you start looking at guaranteeing location-specific storage (a critical issue for a lot of European data protection and privacy regulation) and at the integration between on-premises and off-premises systems (as in the examples of auto-declaration above). I don’t believe that auto-classification is a significant factor in the cloud decision-making process.

Finally, I wanted to bring another element to this discussion. There is another disruptive RM trend that is not explicit in Cheryl’s article (but it fits under point #1) and it addresses the third RM requirement above: “in-place” retention management. If you extract the retention schedule management from the RM tool and architect it at a higher logical level, then retention and disposition can be orchestrated across multiple RM repositories, applications, collaboration environments and even file systems, without the need to relocate the content into a dedicated traditional RM environment. It’s early days (and probably a step too far, culturally, for most RM practitioners), but the huge volumes of currently unmanaged information are becoming a key driver for this approach. We had some interesting discussions at the IRMS conference this year (triggered partly by IBM’s recent acquisition of StoredIQ into their Information Lifecycle Governance portfolio) and James Lappin (@JamesLappin) covered the concept in his recent blog: The Mechanics of Manage-In-Place Records Management Tools. Well worth a read…

So, to summarise my points: RM is a composite requirement; auto-categorisation is useful and is starting to become legitimate, but even though it can participate, it should not be confused with auto-declaration of records; and “cloud” is not a functional decision, it’s an architectural and commercial one.

I buy, sell, market, service… When did ECM become a Monte Carlo celeb?

I am writing this at 40,000 feet, on a morning flight to Nice, final destination Monte Carlo, for what promises to be a very busy 4-day event. The European leg of IBM’s Smarter Commerce Global Summit runs from 17-20 June at the Grimaldi Forum in Monaco, and in a strange twist of fate I am neither a speaker nor an attendee. I am staff!

The whole event is structured around the four commerce pillars of IBM’s Smarter Commerce cycle: Buy, Sell, Market and Service. Each pillar represents a separate logical track at the event, covering the software, services and customer stories.

Enough with the corporate promo already, I hear you say, where does Enterprise Content Management come into this? Surely, Smarter Commerce is all about retail, transactional systems, procurement, supply chain, CRM and marketing campaign tools?

Yes and no. It’s true that in the fast-moving, high-volume commercial transaction world, these tools share the limelight. But behind every new promotion, there is a marketing campaign review; behind every supplier and distributor channel, there is a contract negotiation; behind every financial transaction there is compliance; behind every customer complaint there is a call centre; and behind every customer loyalty scheme, there is an application form: ECM underpins every aspect of Commerce. From the first approach to a new supplier to the friendly resolution of a loyal customer’s problem, there is a trail of communication and interaction that needs to be controlled, managed, secured and preserved. Sometimes paper-based, but mostly electronic.

ECM participates in all commerce cycles: Buy (think procurement contracts and supplier purchase orders and correspondence), Sell (invoices, catalogues, receipts, product packaging, etc.), Market (collateral review & approval, promotion compliance, market analysis, etc.).

But the Service cycle is where ECM makes its strongest contribution, and its role goes well beyond providing a secure repository for archiving invoices and compliance documents: the quality, speed and efficiency of customer service rely on understanding your customer. They rely on knowing what communication you have previously had with your customer or supplier (regardless of the channel they chose), on understanding their sentiment about your products, and on anticipating and quickly resolving their requests and their problems.

As a long-standing ECM advocate, I have had the privilege of leading the Service track content at this year’s IBM Smarter Commerce Global Summit in Monaco. A roller-coaster two-month process, during which we assembled over 250 breakout sessions for the event, covering all topics related to the commerce cycles, and customer service in particular: Advanced Case Management for handling complaints and fraud investigations; Content Analytics for sentiment analysis on social media; mobile interaction monitoring, to optimise the user’s experience; a channel-independent 360-degree view of customer interaction; digitising patient records to minimise hospital waiting times; paperless, online billing; collaboration tools to maximise the responsiveness of support staff; and many more.

A global panel of speakers, with a common goal: putting the customer at the very centre of the commercial process and offering the best possible experience with the most efficient tools.

More comments after the event…

“Hey, Watson! Is Santa real?” – Why IBM Watson is an innocent 6-year old…

I love the technology behind “IBM Watson”. I think it’s been a long time coming and I don’t doubt that in a matter of only a few years, we will see phenomenal applications for it.

Craig Rhinehart explored some of the possibilities of using Watson to analyse social media in his blog “Watson and the future of ECM”. He also set out a great comparison of “Humans vs. Watson”, in the context of a trivia quiz. However, I believe that there is a lot more to it…

Watson is a knowledgeable fool. A 6-year-old kid who can’t tell fact from fiction.

When Watson played Jeopardy!, it ranked its possible answers against each other and against its confidence that it had understood the questions correctly. Watson did not for a moment question the trustworthiness of its knowledge domain.

Watson is excellent at analysing a finite, trusted knowledge base. But the internet and social media are neither finite, nor trusted.

What if Watson’s knowledge base is not factual?

Primary school children are taught to use Wikipedia for research, but not to trust it, as it’s not always right. They have to cross-reference multiple research sources before they accept the most likely answer. Can Watson separate facts from opinions, hearsay and rumours? Can it detect irony and sarcasm? Can it distinguish factual news from political propaganda and tabloid hype?

If we want to make Watson’s intelligence as “human-like” and reliable as possible, and to use it to drive decisions based on internet or social media content, its “engine” requires at least one more dimension: source reliability ranking. It has to learn when to trust a source and when to discredit it. It has to have a “learning” mechanism that re-evaluates the reliability of its sources, as well as its own decision-making process, based on the accuracy of its outcomes. And since its knowledge base will be constantly growing, it also needs to re-assess previous decisions in the light of new evidence (i.e. a “belief revision” system).
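One minimal way to sketch that extra dimension: weight each candidate answer’s raw confidence by a per-source trust score, and nudge the trust up or down as outcomes are verified. The source names, scores and learning rate below are purely illustrative assumptions, not anything from Watson’s actual architecture.

```python
# Illustrative per-source trust scores in [0, 1].
trust = {"encyclopedia": 0.9, "tabloid": 0.3, "forum": 0.5}

def rank(candidates):
    """candidates: list of (answer, raw_confidence, source).
    Order by confidence weighted by how much we trust the source."""
    return sorted(candidates, key=lambda c: c[1] * trust[c[2]], reverse=True)

def feedback(source, was_correct, rate=0.1):
    """Nudge a source's trust toward 1.0 or 0.0 based on a verified outcome."""
    target = 1.0 if was_correct else 0.0
    trust[source] += rate * (target - trust[source])

best = rank([("A", 0.8, "tabloid"), ("B", 0.6, "encyclopedia")])[0]
print(best[0])  # B  (0.6 * 0.9 = 0.54 beats 0.8 * 0.3 = 0.24)
```

The point of the toy: a confidently asserted answer from an untrusted source loses to a weaker answer from a trusted one, and the trust itself is learned from outcomes rather than fixed, which is the beginning of the belief-revision behaviour argued for above.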

Today, Watson is a knowledge regurgitating engine (albeit a very fast and sophisticated one). The full potential of Watson, will only be explored when it becomes a learning engine. Only then can we start talking about real decision intelligence.
