
“Your baby is ugly!” – The schizophrenic world of overlapping software portfolios

No mother will ever say “My baby is the ugliest”. And no product manager will allow their brainchild to commit hara-kiri following a software company acquisition.

I had the dubious pleasure of living and breathing this paranoid madness for over a decade, and I can tell you it’s neither pretty nor dignified.

Just look at our little ECM corner of the world:

  • FileNet ECM vs. IBM Content Manager vs. Content Manager on Demand
  • FileNet BPM vs. WebSphere vs. Lombardi
  • IBM Records Manager (Tarian) vs. IBM Enterprise Records (FileNet)
  • Metastorm vs. Global 360 vs. Cordys
  • TIBCO vs. Staffware
  • LiveLink vs. Hummingbird vs. Documentum
  • Vignette vs. RedDot
  • Etc. vs. Etc. vs. Etc.

Look carefully at the most acquisitive companies in the sector: It’s always a bloodbath.

Today the latest victims entered the fray: OnBase vs. Perceptive vs. Saperion or OnBase VNA vs. Acuo VNA, etc.

Naturally, the acquiring CEO – usually shoulder to shoulder with the incoming comrade – will issue reassuring PR statements to appease the acquired user base: “Welcome to our happy family, we love you too! It’s going to be great!” (except in the case of FileNet and Lombardi, where IBM’s message was targeted more at the existing user base: “Yes, we bought a prettier child, but we will never stop loving you”). Today’s example by Hyland is no exception…

And with the pleasantries completed, the gruesome reality starts to creep in: innovators and thought-leader executives are either leaving in droves, or patiently waiting for their gardening leave or golden-handcuff terms to expire. Marketing will talk about “coexistence” and “interoperation” and “unifying functionality” and “rationalising capabilities” and “ecosystems” across the portfolio. In the back room, skeletal engineering teams will be tearing each other’s hair out, scrounging for scraps of headcount to keep up with just the most basic bug-fixes on totally incompatible architectures, creating the QA matrices from hell. Meanwhile, salesmen in the field will try to pinch each other’s deals and upsell incompatible “extensibility” features, creating Frankenstein implementations that will never see the light of production, or another version update. Ever!

You think I’m exaggerating? Just ask any pre-sales support engineer who has had to live through these acquisitions… Pure madness!

The legend of ancient Spartans throwing their disabled and diseased children off a cliff, in order to maintain a superior warrior race, may have been disputed by archaeologists, but the software industry could take some lessons and apply some of the same rationale: less emotional attachment to the lesser products, and a more honest – if harsh – reality check for the customers: “Sorry, we cannot afford to maintain your ugly investment forever. Let’s come to an arrangement on how you move to our single, best, invested, up-to-date product portfolio, before you start running off to our competitors in despair”.

I sincerely hope that I’m proven wrong in my cruel cynical assessments, and I wish my Hyland and Perceptive colleagues a long and happy marriage, once the honeymoon period is over…

(VERY IMPORTANT: These are my personal opinions and do not necessarily represent the opinions of my current, previous or future employers. Phew, that was close…)


Cloud and SaaS for dummies…

I had to explain Cloud and SaaS to a (non-IT) friend recently. It had to be quick and simple…

On-premise/Licensed: You buy a car and you drive it to work whenever you want. You pay for insurance, servicing, MOT, tyres and petrol. You can tweak it or add “go faster” stripes if you like. If it breaks down, you pay to have it fixed.

Cloud: The government buys a train and pays for its maintenance. You hop on it when you need it, and buy a ticket. If you are going to use it regularly, you buy an annual pass. If the train breaks down, the operator sends another one to pick you up and refunds your ticket.

Hybrid: You drive your own car to the station and then take a train to work.

Simple enough?

2020 and beyond… The mother of all Information Management predictions

January 30, 2014

I’ve been wanting to write this article for a while, but I thought it would be best to wait for the deluge of 2014 New Year predictions to settle down before trying to look a little further over the horizon.

The six predictions I discuss here are personal, do not have a specific timescale, and are certainly not based on any scientific method. What they are based on is a strong gut feel and thirty years of observing change in the Information Management industry.

Some of these predictions are more fundamental than others. Some will have immediate impact (1-3 years), some will have longer term repercussions (10+ years). In the past, I have been very good at predicting what is going to happen, but really bad at estimating when it’s going to happen. I tend to overestimate the speed at which our market moves. So here goes…

Behaviour is the new currency

Forget what you’ve heard about “information being the new currency”; that is old hat. We have been trading in information, in its raw form, for years. Extracting meaningful value from this information, however, has always been hard, repetitive, expensive and most often a hit-or-miss operation. I predict that with the advance of analytics capabilities (see Watson Cognitive), raw information will have little trading value. Information will be traded already analysed, and nowhere more so than in the area of customer behaviour. Understanding of lifestyle models, spending patterns and decision-making behaviour will become the new currency exchanged between suppliers. Not the basic, high-level, over-simplified demographic segmentation that we use today, but a deep behavioural understanding of individual consumers that will allow real-time, predictive and personal targeting. Most of the information is already being captured today, so it’s a question of refining the psychological, sociological and commercial models around it. Think of it this way: how come Google and Amazon know (instantly!) more about my online interactions with a particular retailer than the retailer’s own customer service call centre does? Does the frequency of logging into online banking indicate that I am very diligent in managing my finances, or that I am in financial trouble? Does my Facebook status reflect my frustration with my job, or my euphoric pride in my daughter’s achievement? How will that determine whether I decide to buy that other lens I have been looking at for my camera, or not? Scary as the prospect may be from a personal privacy perspective, most of that information is in the public domain already. What is the digested form of that information worth to a retailer?
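To make that concrete, here is a toy Python sketch of the difference between raw information and digested behaviour. Every number, threshold and “signal” label in it is invented purely for illustration:

```python
# Toy illustration only: the raw login events are near-worthless on their
# own; the derived behavioural signal is the tradable, "digested" asset.
# All numbers and thresholds below are invented for the example.
from datetime import datetime

logins = [datetime(2014, 1, d, h) for d, h in
          [(2, 9), (2, 13), (2, 22), (3, 1), (3, 7), (3, 19), (4, 2)]]

span_days = max((max(logins) - min(logins)).days, 1)
per_day = len(logins) / span_days
night_share = sum(1 for t in logins if t.hour < 6 or t.hour >= 22) / len(logins)

# Frequency alone can't distinguish diligence from distress; combining
# frequency with *when* it happens starts to look like behaviour.
if per_day > 3 and night_share > 0.25:
    print("signal: anxious account-watching (possible financial stress)")
else:
    print("signal: routine, diligent finance management")
```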

Security models will turn inside out

Today most security systems, algorithms and analysis are focused on the device and its environment. Be it the network, the laptop, the smartphone or the ECM system, security models are there to protect the container, not the content. This has not only become a cat-and-mouse game between fraudsters and security vendors, but it is also becoming virtually impossible to enforce at enterprise IT level. With BYOD, a proliferation of passwords and authentication systems, cloud file-sharing, and social media, users are opening up security holes faster than the IT department can close them. Information leakage is an inevitable consequence. I can foresee the whole information security model turning on its head: if the appropriate security becomes deeply embedded inside the information (down to the file, paragraph or even individual word level), we will start seeing self-describing and self-protecting granular information that will only be accessible to an authenticated individual, regardless of whether that information is in a repository, on a file-system, on the cloud, at rest or in transit. Security protection will become device-agnostic and infrastructure-agnostic. It will become a negotiating handshake between the information itself and the individual accessing it, at a particular point in time.
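To illustrate what such a handshake might look like, here is a minimal Python sketch, assuming the third-party cryptography package. It is a thought experiment rather than a design: a production system would hold the key in a key service and release it only after authentication, not store it alongside the ciphertext.

```python
# A minimal sketch of "self-protecting" information: each fragment carries
# its own encryption and its own access list, independent of the container.
# Assumes the third-party "cryptography" package (pip install cryptography).
# The key is kept inside the object here purely for brevity.
from cryptography.fernet import Fernet


class SelfProtectingFragment:
    def __init__(self, plaintext: str, allowed_users: set):
        self._key = Fernet.generate_key()              # one key per fragment
        self.ciphertext = Fernet(self._key).encrypt(plaintext.encode())
        self.allowed_users = allowed_users             # policy travels with it

    def read(self, user: str) -> str:
        # The "negotiating handshake": the fragment itself decides whether
        # to decrypt, regardless of device, repository or transport.
        if user not in self.allowed_users:
            raise PermissionError(f"{user} is not authorised for this fragment")
        return Fernet(self._key).decrypt(self.ciphertext).decode()


paragraph = SelfProtectingFragment("Q3 salary review figures...", {"alice"})
print(paragraph.read("alice"))   # decrypts for an authorised individual
# paragraph.read("bob")          # raises PermissionError, wherever the file is
```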

Oh, and while we are assigning security at this granular self-contained level, we might as well transfer retention and classification to the same level as well.

The File is dead

In a way, this prediction follows on from the previous one, and it’s also a prerequisite for it. It is also a topic I have discussed before [Is it a record, who cares?]. Information Management, and in particular Content Management, has long been constrained by the notion of the digital file. The file has always been the singular granular entity at which security, classification, version control, transportation, retention and all other governance stops. Even relational databases ultimately live in files, because that’s what operating systems have to manage. However, information granularity does not stop at the file level. There is structure within files, and a lot of information lives outside the realm of files (particularly in social media and streams). If Information Management is a living organism (and I believe it is), then files are its organs. But each organ has cells, each cell has molecules, and there are atoms within those molecules. I believe that innovation in Information Management will grow exponentially the moment we stop looking at managing files and start looking at elementary information entities or segments at a much more granular level. That will allow security to be embedded at a logical information level; value to grow exponentially through intelligent re-use; storage costs to be reduced dramatically through entity-level de-duplication; and analytics to explode through much faster and more intelligent classification. The file is an arbitrary container that creates bottlenecks, unnecessary restrictions and a very coarse level of granularity. Death to the file!
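As a rough illustration of what entity-level management could look like, here is a short Python sketch in which the unit of storage is the paragraph rather than the file. The splitting rule and the store are deliberately naïve:

```python
# Illustrative sketch of sub-file granularity: paragraphs, not files, become
# the unit of storage, so identical "cells" shared by thousands of documents
# are stored (and governed) exactly once.
import hashlib

store = {}   # content-addressed entity store: hash -> paragraph text


def ingest(document: str) -> list:
    """Split a document into paragraph-level entities; the 'file' is now
    nothing more than an ordered list of references to those entities."""
    refs = []
    for para in filter(None, (p.strip() for p in document.split("\n\n"))):
        digest = hashlib.sha256(para.encode()).hexdigest()
        store.setdefault(digest, para)   # entity-level de-duplication
        refs.append(digest)
    return refs


doc_a = ingest("Standard disclaimer text.\n\nUnique analysis for client A.")
doc_b = ingest("Standard disclaimer text.\n\nUnique analysis for client B.")
print(len(store))   # 3 entities stored, not 4: the disclaimer exists once
```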

BYOD is just a temporary aberration

BYOD is just a transitional phase we’re going through today. The notion of bringing ANY device to work is already becoming outdated. “Bring Work to Your Device” would have been a more appropriate phrase, but then BWYD is a really terrible acronym. Today, I can access most of the information I need for my work through mobile apps and web browsers. That means I can potentially use smartphones, tablets, the browser on my smart television, the Wii console at home, or my son’s PSP game device to access work information. As soon as I buy a new camera with Android on it, I will also be able to access work on my camera. Or my car’s GPS screen. Or my fridge. Are IT organisations going to provide BYOD policies for all these devices, where I will have to commit, for example, that “if I am using that device for work I shall not allow any other person, including family members, to access that device”? I don’t think so. The notion of BYOD is already becoming irrelevant. It is time to accept that work is no longer tied to ANY device, and that work could potentially be accessed on EVERY device. And that is another reason why information security and governance should be applied to the information, not to the device. The form of the device is irrelevant, and there will never be a 1:1 relationship between work and devices again.

It’s not your cloud, it’s everyone’s cloud

Cloud storage is a reality, but sharing cloud-level resources is yet to come. All we have achieved is to move information storage outside the data centre. Think of this very simple example: let’s say I subscribe to Gartner, or AIIM, and I have just downloaded a new report or white paper to read. I find it interesting and I share it with some colleagues, and (if I have the right to) with some customers, through email. There is every probability that I have created a dozen instances of that report, most of which will end up being stored or backed up in a cloud service somewhere. Quite likely on the same infrastructure from which I downloaded the original paper. And so will many others who have downloaded the same paper. This is madness! Yes, it’s true that I should have been sending everyone the link to that paper, but frankly that would force everyone to create accounts, etc. etc., and it’s so much easier to attach it to an email, and I’m too busy. Now, turn this scenario on its head: what if the cloud infrastructure itself could recognise that the original of that white paper is already available on the cloud, and transparently maintain the referential integrity, security and audit trail of a link to the original? This is effectively cloud-level, internet-wide de-duplication. Resource sharing. Combine this with the information granularity mentioned above, and you have massive storage reduction, cloud capacity increase, simpler big-data analytics and an enormous amount of statistical audit-trail material available to analyse user behaviour and information value.
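Here is a hypothetical sketch of that idea, with all class and method names invented: a content-addressed store in which uploading already-known content and “sharing” it both resolve to the same single original, leaving only an audit-trail entry behind.

```python
# Hypothetical sketch of cloud-level de-duplication: "sharing" a paper
# creates an audited reference to the single original, never another copy.
import hashlib
from datetime import datetime, timezone


class ContentAddressedCloud:
    def __init__(self):
        self.blobs = {}    # one stored copy per unique piece of content
        self.audit = []    # who uploaded or shared what, and when

    def upload(self, data: bytes, user: str) -> str:
        ref = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(ref, data)   # store only if genuinely new
        self.audit.append((user, "upload", ref, datetime.now(timezone.utc)))
        return ref                         # everyone holds the same reference

    def share(self, ref: str, sender: str, recipient: str) -> str:
        # No bytes are copied: referential integrity plus an audit trail.
        self.audit.append((sender, f"share -> {recipient}", ref,
                           datetime.now(timezone.utc)))
        return ref


cloud = ContentAddressedCloud()
ref = cloud.upload(b"<analyst white paper>", "me")
cloud.share(ref, "me", "colleague")   # a dozen shares, still one stored copy
print(len(cloud.blobs))               # 1
```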

The IT organisation becomes irrelevant

The IT organisation as we know it today is arguably the most critical function, and the single largest investment drain, in most organisations. You don’t have to go far to see examples of the criticality of the IT function and the dependency of an organisation on IT service levels. Just look at the recent impact that simple IT malfunctions have had on banking operations in the UK [Lloyds Group apologises for IT glitch]. My prediction, however, is that this mega-critical organisation called IT will collapse in the next few years. A large IT group – as a function, whether it’s outsourced or not – is becoming an irrelevant anachronism, and here’s why:

1) IT no longer controls the end-user infrastructure; that battle is already lost to BYOD. The procurement, deployment and disposition of user assets are no longer an IT function; they have moved to the individual users, who have become a lot more tech-savvy and self-reliant than they were 10 or 20 years ago.

2) IT no longer controls the server infrastructure: with the move to cloud and SaaS (or its many variants: IaaS, PaaS, etc.), keeping the lights on, the servers cool, the backups running and the cables networked will soon cease to be a function of the IT organisation too.

3) IT no longer controls the application infrastructure: business functions are buying capabilities directly at the solution level, often as apps, and these departments are maintaining their own relationships with IT vendors. CMOs, CHROs, CSOs, etc. are the new IT buyers.

So, what’s left for the traditional IT organisation to do? Very little. I can foresee IT becoming an ancillary coordinating function and a governance body. Its role will be to advise the business and define policy, and maybe manage some of the vendor relationships – very much like the role that the Compliance or Procurement department has today, and certainly not wielding the power and the budget that it currently holds. That is actually good news for Information Management! Not because IT is an inhibitor today, but because the responsibility for Information Management will finally move to the business, where it always belonged. That move, in turn, will fuel new IT innovation that is driven directly by business need, without the interim “filter” that IT groups inevitably create today. It will also have a significant impact on the operational side of the business, since business groups will have more immediate and agile access to new IT capabilities, enabling them to service new business models much faster than they can today.

Personally, I would like all of these predictions to come true today. I don’t have a magic wand, and therefore they won’t. But I do believe that some, if not all, of these are inevitable, and it’s only a question of time and priority before the landscape of Information Management as we know it today is fundamentally transformed. And I believe that this inevitable transformation will help to accelerate both innovation and value.

I’m curious to know your views on this. Do you think these predictions are reasonable, or not? Or, perhaps they are a lot of wishful thinking. If you agree with me, how soon do you think they can become a reality? What would stop them? And, what other fundamental changes could be triggered, as a result of these?

I’m looking forward to the debate!

ECM is dead. Long live ECM…

December 2, 2013

It’s Autumn. The trees are losing their leaves, the nights are getting longer, it’s getting cold and grey and generally miserable. It’s also the time for the annual lament of the Enterprise Content Management industry and ECM… the name that refuses to die!

At least once a year, ECM industry pundits go all depressed and introspect and predict, once again, that our industry is too wide, too narrow, too complex, too simplified, too diverse or too boring; dying, or not dying, or dead and buried. Once again this year, Laurence Hart (aka Pie), Marko Sillanpää, Daniel Antion, John Mancini and, undoubtedly, several other esteemed colleagues, with a collective experience of several hundred years of ECM on their backs, will try (and fail) to reconcile and rationalise the semantics of one of the most diverse sectors in the software industry.

You will find many interesting points and universal truths about ECM if you follow the links to these articles above. Some I agree with wholeheartedly, some I would take with a pinch of salt.

But let me assure you, concerned reader, that the ECM industry is not going anywhere, the name will not change and we will again be lamenting its demise, next Autumn!

There is a fundamental reason why this industry is so robust and so perplexing: this is not a single industry, or even a single coherent portfolio of products. It’s a complex amalgamation of technologies that co-exist and complement each other, with the only common denominator being an affinity for managing “stuff” that does not fit in a traditional relational database. And every time one of these technologies falls out of favour, another new discipline joins the fold: documents and emails and archives and repositories and processes and cases and records and images and retention and search and analytics and ETL and media and social and collaboration and folksonomies and cloud, and, and, and… The list, and its history, is long. The reason this whole hotchpotch will continue to be called Enterprise Content Management is that we don’t have a better collective noun that even vaguely begins to describe what these functions do for the business. And finally, more and more of the market (you know, the real people out there, not us ECM petrolheads…) is starting to recognise the term, however vague, inappropriate and irrational it may be to the purists among us.

And there is one more reason: Content Management is not a technology, it’s an operational discipline. Organisations will manage content with or without ECM products. It’s just faster, cheaper and more consistent if they use tools.

As I said, if you have an academic interest in this ECM industry, the articles above are definitely worth reading. For my part, I would like to add one more thought into that mix:

The word “Enterprise” in “ECM” has been the source of much debate. And whilst I agree with Laurence that some of the vendors originally attempted to promote the idea of a single, centralised ECM repository for the whole enterprise, that approach was quickly abandoned in the early ’00s as generally a bad one. Anyone who has tried to deploy it in a real-world environment can give you a dozen reasons why it is a really, really naïve idea.

Nevertheless, Content Management has always been, and will always be, “Enterprise”, in the sense that it very rarely works as a simple departmental solution. There is very little value in doing that, especially when you combine it with process management, which adds the most value when crossing inter-departmental boundaries. It is also “Enterprise” in the sense that, as a platform, it can support both vertical and horizontal applications across most parts of an organisation. Finally, there are certain applications of ECM that can only be deployed as “Enterprise” tools: it would be madness to design Records Management, email archiving, eDiscovery or social collaboration solutions on a department-by-department basis. There is no point!

That’s why, in my opinion at least, the term ECM will live for a long time yet… Long Live ECM!

Is it Art, is it Science or is it the Art of Science?

I don’t often cross-post between my ECM and my Photography blogs, but this is definitely a post that is relevant to both…

I was having one of these left-brain vs. right-brain discussions with a friend of mine who works in IT and also happens to be a keen photographer, as I am. He asked me: “Do you consider yourself primarily a technologist or an artist?”

I could not answer the question. The obvious answer is “both”, but the more I think about it, the less sense the question makes. Is there really a distinction between these two? I don’t believe so. They are certainly not mutually exclusive.

Let’s look at the example of a software developer and a painter, a photographer or a writer: they all have to start with a vision, they all have to innovate, and they all have to be problem solvers. Just imagine yourself in an artist’s studio, a photographer’s studio or your IDE, and look at each process:

In painting, you choose your canvas, depending on the final purpose of the painting. In photography, you choose your format and your output medium, based on the audience. In software, you choose the operating system and the market your solution is intended for.

Then you choose your primary crafting tool: your paintbrushes or your pencils, your cameras and lenses, or your coding language. And you start the creative process. Your lines of code are your brushstrokes; the same lights and shadows and colours make up your composition.

In art you use a palette of colours and you combine them to create new ones. In photography you have exposure techniques and filters and in coding you use code libraries.

You step back, you look at your masterpiece or test your code, and then you use turpentine, an eraser, debugging tools or Photoshop to correct minor mistakes.

I believe that not only software development, but most scientific undertakings are a form of art. If you are experimenting in a chemistry lab, or you are designing a marketing campaign, or designing a new electronic device, you will have to use tools and imagination to create something new. You will use subjective judgements to determine if it’s bad or if it’s good. And once you deliver it you will be critiqued by other people.

So, as a solutions architect, I use artistic processes to bring my visions to life. As a photographer, I use both technology and science to create new art. Can I ever de-couple art from science? No. If I did I would end up being bad at both.

My computer is having a spaz attack – discuss!

January 10, 2012

I was working from home yesterday. My daughter stormed into my office:

  • “Dad, I need to use your computer”
  • “Why?”
  • “I need to print some pictures”
  • “Why don’t you use your laptop?”
  • “It’s not working…”
  • “What do you mean ‘it’s not working’? What is not working?”
  • “I don’t know, it’s having a spaz attack”

I have long given up any pretence of understanding the etymology of teenage language. In a language where “fit” means handsome and “sick” means nice, I have no hope of tracing the origins of “spaz”. I can only guess that it stems from “spastic” or “spasmodic”. I have learned that “spaz” is an abbreviation of “spasticated” (as in “it’s gone all spasticated…”), which leaves me none the wiser. I digress… Whatever the origin of the term, in my thirty years of troubleshooting computers, I’m pretty sure that epileptic seizures were never on the symptom list…

  • “Why didn’t you bring it down so that I can sort it out?”
  • “I don’t have time for that. All I need is to print three pictures for my artwork.”

I yielded, even though every fibre in my body was screaming for answers and detail. There is no such thing as “Not working”.

I saw a huge IT generation gap issue here: when I started working with computers, somewhere in the prehistoric early ‘80s, you needed to understand computers to use them. I won’t bore you with stories of bootstrapping from paper tapes, and disk drives that needed to be shut down in a certain way because the heads would physically crash on the disk; but to a whole generation of us, “not working” immediately triggers a root cause analysis mechanism in our brain: power, motherboard, fans, memory, disks, peripherals, operating system, drivers, software, network connection, telephone line, etc. By process of elimination, ONE of them is not working, but usually not the whole. And part of “using” the computer was understanding which part was not working and how to get it to work. Because, frankly, if we didn’t figure it out there wasn’t anyone else around who could.

My daughter is a typical business user: to her, the computer is a means to an end. Her laptop is a tool. She has no interest in finding out which part isn’t working or in making any attempt to fix it. If she can’t go to Google and print the three pictures she needs, when she needs them, the tool is useless. “The computer is not working” does not mean the physical machine is broken; it means “my tool doesn’t do what I need it to do”. Why and how are irrelevant.

A couple of weeks ago, when her laptop refused to start altogether (a hard disk index corruption problem, a simple CHKDSK fix), her only concern was whether she would lose the book she has been writing for the last six months – cue the usual backup lecture from dad… She doesn’t want to learn how to do backups, she wants her book to be there.

Interestingly, the same gap exists between most business user communities and IT. IT will worry about which part of the infrastructure is failing, which vendor to contact, which component needs tweaking, which performance bottleneck requires more resources at peak time. For the users it’s black & white: “The system” is working, or it’s not.

Coincidentally, I saw another example of the same gap earlier in the day, yesterday, when I went to my dentist. After I paid for my check-up, and after several failed attempts, the receptionist informed me that they could not give me a receipt because the printer was broken and they were waiting for the engineer to arrive. I could see the aforementioned printer from where I was standing: it was flashing a red light with a message that it had a paper jam. There were four people at reception, and various dentist assistants who paraded through. Not one of them had either the skills or the inclination to clear the jam.

I felt envy towards the engineer – Money for nothing!

I also resisted the temptation to say “Can I try and sort it for you?”, but it was hard! I agreed to have my receipts posted in the mail, instead.


Data Governance is not about Data

December 1, 2011

Those that have been reading my blogs for a while know that I have great objections to the term “unstructured” and the way it has been used to describe all information that is text-based, image-based or in any other format that does not fit directly into the rows and columns of a relational database. None of that “unstructured” content exists without structure inside and around it, and databases have long moved on from storing just “rows and columns”.

A conversation last night with IDC analyst @AlysWoodward (at the excellent IDC EMEA Software Summit in London) prompted me to think about another problem that distinction has created:

Calling that content “unstructured” is a convention invented by analysts and vendors to distinguish between the set of tools required to manage that content and those that service the world of databases and BI. The technologies used to manage text-based content and digital media need to be different, as they have a lot of different issues to address.

It has also been a great way of alerting business users that while they are painstakingly taking care of their precious transactional data (which represents only about 20% of their IT estate), all this other “stuff” keeps accumulating, uncontrolled and unmanaged, on servers, C: drives, email servers, etc.

These artificial distinctions, however, are only relevant when you consider HOW you manage that information: the tools and the technologies. They are not relevant when you are trying to understand WHAT business information you hold and need as an organisation, WHY you are holding it and what policies need to be applied to it, or WHO is responsible for it. The scanned image of an invoice is subject to the same retention requirements as the row-level data extracted from it; the Data Protection Act does not set out different privacy rules for emails and for client records kept in your CRM system; a regulatory audit scrutinising executive decisions will not care whether the decisions are backed by a policy document or a BI query; and you can’t have one group of people deciding security policies for confidential information in your ERP system and another group for the product manufacturing instructions held in a document library.

“Data Governance” (or “Information Governance”, or “Content Governance”, I’ve seen all of these terms used) is not an IT discipline, it’s a business requirement. It does not apply only to the data held in databases and data warehouses; it applies to all information you manage as an organisation, regardless of location, format, origin or medium. As a business, you need to understand what information you hold about your customers, your suppliers, your products and your employees. You need to understand where that information lives and where you would go to find it. You need to understand who is responsible for managing it and keeping it secure, and who has the right to decide that you can get rid of it. Regardless of whether that information lives in a “structured” or “unstructured” medium, and regardless of the tools or technologies needed to implement these governance policies.
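If it helps to see that point in code, here is a small illustrative sketch (the names and values are mine, not any real framework’s) of a single policy definition attached to assets in different mediums:

```python
# A sketch of the argument in code: ONE governance policy definition,
# applied identically to a database row and a scanned image. Different
# tools will enforce it, but the rule itself is medium-agnostic.
from dataclasses import dataclass


@dataclass
class GovernancePolicy:
    name: str
    retention_years: int
    steward: str          # the accountable business owner, not IT
    confidential: bool


@dataclass
class InformationAsset:
    identifier: str
    medium: str           # "database-row", "scanned-image", "email", ...
    policy: GovernancePolicy


invoice_policy = GovernancePolicy("Invoices", 7, "Finance", False)

assets = [
    InformationAsset("INV-0042 (row in the ERP)", "database-row", invoice_policy),
    InformationAsset("INV-0042.tif (scanned image)", "scanned-image", invoice_policy),
]

for asset in assets:
    print(f"{asset.identifier}: retain {asset.policy.retention_years} years, "
          f"steward {asset.policy.steward}")
```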

The Data Governance Council has developed an excellent maturity model for understanding how far your organisation has moved in understanding and implementing Data Governance. It covers areas such as “Stewardship”, “Policy”, “Data Risk Management”, “Value Creation”, “Information Lifecycle Management”, “Security”, “Metadata”, etc. All of these disciplines are just as relevant in taking control of the data in your databases as they are for managing the files on your shared drives, your content repositories and the emails on your servers.

I seriously believe that by propagating this artificial divide between “data” and “content”, we are creating policy silos that not only minimise the opportunity for getting value out of our information, but also introduce further risks through gaps and inconsistencies. We may have to use different tools to implement these governance controls on different mediums, but the business should have ONE consistent governance scheme for all its information.

Open to your thoughts and suggestions, as always!
