
Archive for the ‘innovation’ Category

My first DMS kiss…

September 22, 2011

A recent tweet exchange with @pmonks and @pelujan (legends amongst the ECM Twitterati…) prompted me to dig deep into my past to find my first flirting with Document Management, a relationship that has now lasted the best part of three decades.

The year: 1984

The venue: London, offices of a Greek shipping company

The actor: An impoverished first-year BSc student

The platform: Perkin-Elmer (later Concurrent) super-minis, 32-bit architecture

The language: COBOL, with a proprietary RDBMS and transaction processing

The screen: Green on Black

The medium: X.25 network, over a private leased London-to-Athens line

The gig: Long-distance telephone calls between London and Athens offices were costing the company a fortune. Also, the timezone difference reduced the effective daily communication window by 4 hours. The company was looking for a way to leverage their existing technology platform, to exchange messages between offices synchronously or asynchronously, without incurring additional telephone costs.

The solution: A database system written in COBOL, which allowed terminal users at either end to pick a recipient from a list of registered users, leave them a message and receive one back. Since it showed the history of messages exchanged between the two parties, if both were on-line you could hold a real-time, line-by-line dialogue. If not, the other party would pick up the message when they next logged in and respond. All of this used a temporary database table. If either party wanted a permanent record of the conversation, they would “archive it” to a separate table, holding metadata like start time, end time, from, to, a subject description, location, etc. And since I wanted to be able to exchange messages about code with other programmers in the head office, it also had a primitive system of referencing external files on shared disks.
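For the curious, a rough modern re-imagining of that scheme might look like the sketch below, in Python with SQLite. All the table, column and function names are my own invention for illustration; the original, of course, was COBOL against a proprietary RDBMS:

```python
import sqlite3

# A "live" message table (the temporary conversation store) plus a
# permanent archive table carrying the conversation metadata.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE messages (          -- temporary "live" conversation table
    id        INTEGER PRIMARY KEY,
    sender    TEXT NOT NULL,
    recipient TEXT NOT NULL,
    sent_at   TEXT NOT NULL,
    body      TEXT NOT NULL,
    file_ref  TEXT               -- optional pointer to a file on shared disk
);
CREATE TABLE archive (           -- permanent record, with metadata
    id         INTEGER PRIMARY KEY,
    sender     TEXT, recipient  TEXT,
    started_at TEXT, ended_at   TEXT,
    subject    TEXT, location   TEXT,
    transcript TEXT              -- the full exchanged history
);
""")

def send(sender, recipient, sent_at, body, file_ref=None):
    conn.execute(
        "INSERT INTO messages (sender, recipient, sent_at, body, file_ref) "
        "VALUES (?, ?, ?, ?, ?)",
        (sender, recipient, sent_at, body, file_ref))

def history(a, b):
    # Both directions, oldest first -- what each terminal would display.
    return conn.execute(
        "SELECT sender, body FROM messages "
        "WHERE (sender=? AND recipient=?) OR (sender=? AND recipient=?) "
        "ORDER BY id", (a, b, b, a)).fetchall()

def archive_conversation(a, b, subject, location, started_at, ended_at):
    # Copy the exchange into the permanent table, then clear the live one.
    transcript = "\n".join(f"{s}: {body}" for s, body in history(a, b))
    conn.execute(
        "INSERT INTO archive (sender, recipient, started_at, ended_at, "
        "subject, location, transcript) VALUES (?, ?, ?, ?, ?, ?, ?)",
        (a, b, started_at, ended_at, subject, location, transcript))
    conn.execute(
        "DELETE FROM messages "
        "WHERE (sender=? AND recipient=?) OR (sender=? AND recipient=?)",
        (a, b, b, a))
```

The whole trick, then as now, was in the second table: the moment you archive a transcript with a subject, parties, timestamps and a location, you have declared a *record*, and you are doing document management.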

In today’s terminology, this was email, Instant Messaging, micro-blogging and a Document Management system rolled into one: an early form of social collaboration. I designed and built it in about two weeks, and it was used daily. Simple and crude, but effective.

[A side note for the pedants: I know email systems were already around by then in the Unix community, but they were not commonplace, and they certainly were not available on a business platform like the Perkin-Elmer. Remember, this was 1984: no TCP/IP, no Internet, no Windows, no PCs, no files]

Since then, I’ve worked on many more weird DMS implementations, before the Document Management market was even identified as such: a hand-crafted invoice processing system written in VB, with Kofax cards and massive Cornerstone monitors on OS/2 machines; a bespoke DMS for commercial property agents, with distributed desktop scanning (property images) attached to workflow (rental review) cases; a bespoke DMS for lawyers, based on Uniplex and Informix 4GL; and a fully fledged DMS with version control and content searching on NeXT machines, using C, Informix and BRS-Search (a free-text database), later ported to a disastrous Ingres implementation on Windows 3.11.

By then, Documentum had come on the scene, and I remember writing VB for a very early implementation of version 1 (effectively just a set of APIs) for a pharmaceutical company. FileNet was already around with the first notion of Imaging + Workflow as a single integrated platform, but our paths were not to cross until a decade later.

Now, there is a point to this inane drivel, beyond self-indulgence…

In today’s confused ECM market, none of these early bespoke implementations would qualify as proper “Document Management”. Yet at the time they were all innovative, trailblazing, and large companies would pay good money to implement them. They created the legitimate (if schizophrenic) ECM market space that we live in and love today.

When I launched “Document Management Avenue” in 1995 – the first independent online community forum for DMS, for those old enough to remember – we were tracking over 300 products in this space. I still have the list somewhere. Today, most of us can only point at a dozen or so major ECM / EDRMS vendors.

There you have it. My own short history of watching the birth of ECM – the bespoke became product, which became open-source, which became commodity. The rest, as they say, is history… And some of us are still arguing what to call the baby 🙂


“Hey, Watson! Is Santa real?” – Why IBM Watson is an innocent 6-year-old…

I love the technology behind “IBM Watson”. I think it’s been a long time coming and I don’t doubt that in a matter of only a few years, we will see phenomenal applications for it.

Craig Rhinehart explored some of the possibilities of using Watson to analyse social media in his blog “Watson and the future of ECM”. He also set out a great comparison of “Humans vs. Watson”, in the context of a trivia quiz. However, I believe that there is a lot more to it…

Watson is a knowledgeable fool: a 6-year-old kid that can’t tell fact from fiction.

When Watson played Jeopardy!, it ranked its candidate answers against each other, weighted by its confidence that it had understood the question correctly. Watson did not for a moment question the trustworthiness of its knowledge domain.

Watson is excellent at analysing a finite, trusted knowledge base. But the internet and social media are neither finite, nor trusted.

What if Watson’s knowledge base is not factual?

Primary school children are taught to use Wikipedia for research, but not to trust it, as it’s not always right: they have to cross-reference multiple sources before accepting the most likely answer. Can Watson distinguish facts from opinions, hearsay and rumours? Can it detect irony and sarcasm? Can it tell factual news from political propaganda and tabloid hype?

If we want to make Watson’s intelligence as “human-like” and reliable as possible, and to use it to drive decisions based on internet or social media content, its “engine” requires at least one more dimension: source reliability ranking. It has to learn when to trust a source and when to discredit it. It needs a “learning” mechanism that re-evaluates the reliability of its sources, as well as its own decision-making process, based on the accuracy of its outcomes. And since its knowledge base will be constantly growing, it also needs to re-assess previous decisions against new evidence (i.e. a “belief revision” system).
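To make the idea concrete, here is a toy sketch of mine (emphatically not IBM’s design – Watson’s actual architecture is far more sophisticated) of what a source-reliability tracker with a crude form of belief revision could look like in Python:

```python
from collections import defaultdict

class SourceTrust:
    """Toy source-reliability tracker: every source starts out neutral,
    and its trust score is re-estimated from how often it proved correct."""

    def __init__(self):
        self.correct = defaultdict(int)   # confirmed-correct claims per source
        self.total = defaultdict(int)     # total evaluated claims per source

    def record_outcome(self, source, was_correct):
        # The "learning" step: feed back whether a source's claim held up.
        self.total[source] += 1
        if was_correct:
            self.correct[source] += 1

    def reliability(self, source):
        # Laplace-smoothed accuracy: an unknown source sits at 0.5,
        # neither blindly trusted nor dismissed outright.
        return (self.correct[source] + 1) / (self.total[source] + 2)

    def best_answer(self, claims):
        # claims: list of (source, answer) pairs. Each answer is weighted
        # by the summed reliability of the sources asserting it.
        scores = defaultdict(float)
        for source, answer in claims:
            scores[answer] += self.reliability(source)
        return max(scores, key=scores.get)
```

The crude “belief revision” step is simply re-running `best_answer` over old claims whenever `record_outcome` shifts a source’s score: an answer once accepted on the word of a source later shown to be unreliable can quietly lose its crown.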

Today, Watson is a knowledge-regurgitating engine (albeit a very fast and sophisticated one). Its full potential will only be realised when it becomes a learning engine. Only then can we start talking about real decision intelligence.

The Art of the bleeding obvious…

(I’d better caveat this: Any resemblance to actual organisations or events is entirely coincidental! These are my own opinions and not those of my employers…)

Let me tell you a little story, entirely hypothetical of course:

Analyst: We predict… (great fanfare and drum-roll), that by 2015, 85% of business operations will be done on mobile devices.

Vendor 1: The Analyst says we’re going mobile. We’d better jump the competition. We’re buying a small unknown apps company and making a big song and dance about it.

Vendor 2: The Analyst is predicting and our competitors are buying. While they are sorting out their integration issues we’ll write our own apps which will be better, and we’ll jump the market. Let it be known and let it be so!

Vendor 3: Oops – our competition has a jump on us. Let’s build a cut-down version very quickly, with limited functionality, and launch to the market before the others have a chance. While they are fighting for big deals to recover their investment, we’ll gain market share.

Analyst: Look, we predicted this would be a hot market and now three vendors are already competing for that space. We’d better write a review / scope / quadrant / wave for that market. Let’s see: we predict that Vendor 1 was the innovator in the market, Vendor 2 has the strongest offering, and Vendor 3 will appeal to the mid-market.

All together: (gasp of wonder) All hail the Analyst, for they have powers to analyse the market and predict the future so accurately. What is your next epiphany, oh mighty one?

Ok, ok, so it’s a generalisation, it’s irreverent, and I’ve probably insulted every analyst and IT vendor in the process. Is the scenario so far-fetched though?

A while back, when I was working as a consultant, we used to be the butt of many jokes: “The definition of a consultant is someone who asks you for your watch before they tell you what time it is.”

The next time you read a great new “innovative” press release from a vendor, or an analyst for that matter, just look at it more critically: is it reflecting a business need, or is it creating a new one? After 30 years of constant innovation in the software space, why are we still trying to solve the same problems: reducing operating costs, improving performance, protecting and sharing information? I can count on one hand the innovations that have fundamentally changed business models: straight-through processing; supply chain automation; eCommerce; satellite communications; and precious few others. Most other “innovation” is just giving us better, sharper tools to do the same job. We’re still building the same cabinet, only we’re using electric routers instead of chisels and planes.

There is nothing wrong with better, faster, cheaper tools of course. That’s progress. But whenever you come across “The next BIG thing” just take a step back and think: is it really that big a leap? Or is it the bleeding obvious next logical step forward, creating a self-fulfilling prophecy?

George
