
It’s Knowledge Management, Jim, but not as you know it

March 19, 2015

A recent conversation with a colleague sent me searching through my archives for a conference presentation I gave nearly 16 years ago. The subject of the conference was the impact of Document Management as an enabler for knowledge sharing in the enterprise.

Driven at the time by three different technology sectors (Document Management, Search and Portals), Knowledge Management was all the rage back then. No good deed goes unpunished, however, and after several massive project failures, and even more non-starter projects, Knowledge Management lost its shine and became a dirty phrase that no self-respecting consultant wanted to be associated with.

Why did Knowledge Management fail in the ‘90s?

They say 20:20 hindsight is a wonderful thing… Reading back through my slides and notes made me realise how different this market has become since the late ’90s. A number of factors at the time ensured that Knowledge Management never took off as a viable approach but, in my view, two were the most dominant:

The first one was the much-used phrase “Knowledge is power”. Leaving aside the fact that knowledge in and of itself very rarely has intrinsic value (it’s the application of knowledge that creates the power), the phrase was quickly misconstrued by users to mean: “I have knowledge, therefore I have power”. Guess what? Who wants to dilute their power by selflessly sharing their knowledge? Not many users felt altruistic enough to share their prized knowledge possessions, their crown jewels, for the greater good of the organisation. “As long as I hold on to the knowledge, I hold on to the power, and therefore I am important, valuable and irreplaceable”. Nobody said so, of course, but everyone was thinking it.

The second one was the incessant focus on the information itself as the knowledge asset. Technology was focused almost exclusively on extracting tacit knowledge from individuals, encapsulating it in explicit documents, categorising it, classifying it, archiving it and making it available to anyone who could possibly need it. There were two problems with this approach: the moment tacit knowledge became explicit, it lost its owner and curator, and it immediately started aging and becoming obsolete. Quite often it lost its context too, making it not only irrelevant but often dangerous.

Why are we talking again about Knowledge Management in 2015?

The last decade has brought a silent cultural revolution in knowledge sharing. We have all learned to actively share! Not only have we become a lot less paranoid about sharing our “crown jewels”, but we all actively enjoy doing so, inside and outside the work environment: Wikipedia, blogs, Twitter, self-publishing, Facebook, Pinterest, LinkedIn, SlideShare, open source, crowdsourcing, etc., all technologies that the millennium (and the millennials) have brought to the fore. All of these are platforms for sharing information and knowledge. The stigma and the paranoia of “Knowledge is Power” have transformed into “Sharing is Power”. The more we share, the more we are valued by our networks, and the bigger the network grows, the more power we wield as individuals. And, surprise-surprise, it’s reciprocal! The bigger the network we create, the bigger the pool of knowledge we can draw upon.

What couldn’t have been envisioned in the late ‘90s, or early ‘00s, is that by 2015 the knowledge power would be contained in the relationships and the connections, not in the information assets. Not just connections between knowledge gurus inside an enterprise, but amongst individuals in a social environment, between companies and consumers and amongst professional organisations.

Social Media and Collaboration environments have proven to us that the value of sharing knowledge is significantly higher than the value of holding on to it. We may or may not see the term “Knowledge Management” resurrected as an IT concept, but the reality is that knowledge sharing has now become an integral part of our daily life, professional and personal, and it’s not likely to change any time soon.


CMaaS – Content Management as a Service

I haven’t written much about cloud because, frankly, I don’t think it’s as revolutionary as people think, and because the demand for it has been largely vendor-induced. Whatever you think about cloud, however, it is here, it is a driving force, and it will continue to be a conversation topic for a while.

I wrote in a previous article (Cloud and SaaS for dummies) that cloud is like a train: someone else has to maintain it and make sure it is there on time; all you have to do is buy a ticket and hop on when you need it. At least that’s the oversimplified theory… For Content Management, however, the reality is a bit different: when you get on the train, you don’t carry your bookcase, your briefcase and your children’s photo albums with you, and you certainly don’t leave them there expecting them to be available and intact next time you hop on the train. You take the train to go from A to B, and you keep your personal belongings with you.

The train analogy works well for Software as a Service (SaaS) cloud models, but not for Content.

The financial argument for SaaS is compelling: buying software capabilities on demand moves the financial needle from CapEx to OpEx; total cost of ownership falls, as support costs and administration skills become the provider’s burden; technology refresh is continuous and access is ubiquitous; and economies of scale dramatically reduce infrastructure costs.

Microsoft, Google, Apple, Box, Dropbox and every other ECM and Collaboration vendor are offering content storage in the cloud – often free – to entice you to move your content off your premises, or off your personal laptop, to a happier, more abundant and more resilient place, which is all good and worthwhile. What isn’t good is the assumption that providing storage in the cloud (or, as I’ve seen it incorrectly labelled recently, “CaaS – Content as a Service”) is the same as providing Content Management in the cloud. It is not!

We (the ECM industry) have fought for years to establish the idea that managing content goes a lot further than just storing documents in a file system. It requires control: security, versions, asynchronous editing, metadata, taxonomies, retention, integration, immutable flags, workflow, etc. Unfortunately, the new fad of EFSS (Enterprise File Sync and Share) systems is turning the clock back: standalone EFSS environments are just another way for users to bypass IT and security controls (Chris Walker articulates this very well in his article You’re out of your mind).

Now, before you jump down my throat and tell me that EFSS came about exactly because of the straitjacket that compliance, governance and ECM have put organisations in, let me say, “I know!”. I’ve lived and breathed this industry since it was born, so I understand the issues. However, we (ECM and IG practitioners) risk throwing out the baby with the bathwater: ignoring EFSS and all other external file-sharing mechanisms is dangerous, at best. Blocking them is impractical and unenforceable. Institutionalising them (as Chris suggests) adds a layer of governance over them, but it does not resolve the conflict with the need for secure internal repositories and regulatory control.

So, what if you could have your cake and eat it too? Instead of accepting EFSS as an externally imposed inevitability, why not embrace EFSS within the ECM environment? Here’s a revolutionary idea: an ECM repository that provides the full ECM control environment we know and love, while also keeping content synchronised across all your mobile and desktop devices, so that you can work wherever you are, on whichever device you happen to have in front of you.

I try to stay impartial on my blog and refrain from plugging IBM products, but in this case I cannot avoid the inevitable: IBM Content Navigator offers this today (and I don’t doubt that other ECM vendors are, or soon will be, offering it too).

What we are starting to see is the evolution of proper “Content Management as a Service” (CMaaS): not only storing content in a cloud and retrieving it or sharing it, but the complete ECM capability, including sync and share, offered as a cloud-based, on-demand, scalable and secure service.

Why should organisations settle for either a heavyweight on-premises ECM platform or a lightweight, low-compliance, cloud-based sharing platform, when they can combine both?

George Parapadakis

2020 and beyond… The mother of all Information Management predictions

January 30, 2014

I’ve been wanting to write this article for a while, but I thought it would be best to wait for the deluge of 2014 New Year predictions to settle down before I try to look a little bit further along the horizon.

The six predictions I discuss here are personal, do not have a specific timescale, and are certainly not based on any scientific method. What they are based on is a strong gut feel and thirty years of observing change in the Information Management industry.

Some of these predictions are more fundamental than others. Some will have immediate impact (1-3 years), some will have longer-term repercussions (10+ years). In the past, I have been very good at predicting what is going to happen, but really bad at estimating when it’s going to happen; I tend to overestimate the speed at which our market moves. So here goes…

Behaviour is the new currency

Forget what you’ve heard about “information being the new currency”; that is old hat. We have been trading in information, in its raw form, for years. Extracting meaningful value from this information, however, has always been hard, repetitive, expensive and, most often, a hit-or-miss operation. I predict that with the advance of analytics capabilities (see Watson Cognitive), raw information will have little trading value. Information will be traded already analysed, and nowhere more so than in the area of customer behaviour. Understanding of lifestyle models, spending patterns and decision-making behaviour will become the new currency exchanged between suppliers. Not the basic, high-level, over-simplified demographic segmentation that we use today, but a deep behavioural understanding of individual consumers that will allow real-time, predictive and personal targeting. Most of the information is already being captured today, so it’s a question of refining the psychological, sociological and commercial models around it. Think of it this way: how come Google and Amazon know (instantly!) more about my online interactions with a particular retailer than the retailer’s own customer service call centre? Does the frequency of my logging into online banking indicate that I am very diligent in managing my finances, or that I am in financial trouble? Does my Facebook status reflect my frustration with my job, or my euphoric pride in my daughter’s achievement? How will that determine whether or not I decide to buy that other lens I have been looking at for my camera? Scary as the prospect may be from a personal privacy perspective, most of that information is in the public domain already. What is the digested form of that information worth to a retailer?

Security models will turn inside out

Today most security systems, algorithms and analysis are focused on the device and its environment. Be it the network, the laptop, the smartphone or the ECM system, security models are there to protect the container, not the content. This has not only become a cat-and-mouse game between fraudsters and security vendors, it is also becoming virtually impossible to enforce at enterprise IT level. With BYOD, a proliferation of passwords and authentication systems, cloud file-sharing, and social media, users are opening up security holes faster than the IT department can close them. Information leakage is an inevitable consequence. I can foresee the whole information security model turning on its head: if the appropriate security becomes deeply embedded inside the information (down to the file, paragraph or even individual word level), we will start seeing self-describing and self-protecting granular information that will only be accessible to an authenticated individual, regardless of whether that information is in a repository, on a file system, in the cloud, at rest or in transit. Security protection will become device-agnostic and infrastructure-agnostic. It will become a negotiating handshake between the information itself and the individual accessing that information, at a particular point in time.

Oh, and while we are assigning security at this granular self-contained level, we might as well transfer retention and classification to the same level as well.
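To make the idea concrete, here is a minimal sketch in Python of what such self-protecting information might look like. The SecureEnvelope class and its toy policy are hypothetical illustrations of mine, not anyone’s product; a real implementation would need proper key management, federated identity and revocation. The point is simply that the policy, the retention date and the encrypted payload travel together, and the content itself performs the access handshake.

```python
# A toy "self-protecting" content envelope: policy, retention and
# encrypted payload travel together as one self-describing object.
from dataclasses import dataclass, field

from cryptography.fernet import Fernet  # third-party: pip install cryptography


@dataclass
class SecureEnvelope:
    allowed_readers: set      # identities permitted by the embedded policy
    retention_until: str      # retention travels with the content itself
    _key: bytes = field(default_factory=Fernet.generate_key, repr=False)
    _ciphertext: bytes = b""

    def seal(self, plaintext: str) -> None:
        self._ciphertext = Fernet(self._key).encrypt(plaintext.encode())

    def open(self, authenticated_user: str) -> str:
        # The "negotiating handshake": the envelope itself decides whether
        # to decrypt, regardless of which device or cloud it is sitting on.
        if authenticated_user not in self.allowed_readers:
            raise PermissionError(f"{authenticated_user} is not authorised")
        return Fernet(self._key).decrypt(self._ciphertext).decode()


doc = SecureEnvelope(allowed_readers={"alice"}, retention_until="2025-12-31")
doc.seal("Q3 board minutes")
print(doc.open("alice"))    # decrypts: "Q3 board minutes"
# doc.open("mallory")       # raises PermissionError, wherever the file lives
```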

The File is dead

In a way, this prediction follows on from the previous one, and it’s also a prerequisite for it. It is also a topic I have discussed before [Is it a record, who cares?]. Information Management, and in particular Content Management, has long been constrained by the notion of the digital file. The file has always been the singular granular entity at which security, classification, version control, transportation, retention and all other governance stops. Even relational databases ultimately live in files, because that’s what operating systems have to manage. However, information granularity does not stop at the file level. There is structure within files, and there is a lot of information that lives outside the realm of files altogether (particularly in social media and streams). If Information Management is a living organism (and I believe it is), then files are its organs. But each organ has cells, each cell has molecules, and there are atoms within those molecules. I believe that innovation in Information Management will grow exponentially the moment we stop looking at managing files and start looking at elementary information entities or segments at a much more granular level. That will allow security to be embedded at a logical information level; value to grow exponentially through intelligent re-use; storage costs to be reduced dramatically through entity-level de-duplication; and analytics to explode through much faster and more intelligent classification. The file is an arbitrary container that creates bottlenecks, unnecessary restrictions and a very coarse level of granularity. Death to the file!
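As an illustration of what managing below the file level might mean, here is a minimal Python sketch; the Entity and Document names are hypothetical inventions of mine. Each paragraph-sized entity carries its own classification and retention, is identified by its content, and a “document” becomes a mere composition of shared entities.

```python
# Elementary information entities as the unit of governance, with
# "documents" reduced to compositions of shared, de-duplicated segments.
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Entity:
    text: str
    classification: str    # security embedded at the logical level
    retention_years: int   # retention, likewise, held per entity

    @property
    def entity_id(self) -> str:
        # Identity derived from content: identical segments share one id,
        # which is what makes entity-level de-duplication possible.
        return hashlib.sha256(self.text.encode()).hexdigest()[:12]


@dataclass
class Document:
    entity_ids: list       # an ordered composition, not a container


store = {}  # entity_id -> Entity: each unique segment stored exactly once


def add(entity: Entity) -> str:
    store.setdefault(entity.entity_id, entity)  # re-use, don't re-store
    return entity.entity_id


clause = add(Entity("Standard liability clause...", "public", 1))
findings = add(Entity("The audit identified three issues...", "confidential", 7))

report = Document([clause, findings])
contract = Document([clause])   # the same clause re-used, stored only once
print(len(store))               # 2 unique entities backing two documents
```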

BYOD is just a temporary aberration

BYOD is just a transitional phase we’re going through today. The notion of bringing ANY device to work is already becoming outdated. “Bring Work to Your Device” would have been a more appropriate phrase, but then BWYD is a really terrible acronym. Today, I can access most of the information I need for my work through mobile apps and web browsers. That means I can potentially use smartphones, tablets, the browser on my smart television, the Wii console at home or my son’s PSP to access work information. As soon as I buy a new camera with Android on it, I will also be able to access work on my camera. Or on my car’s GPS screen. Or on my fridge. Are IT organisations going to provide BYOD policies for all these devices, where I will have to commit, for example, that “if I am using that device for work I shall not allow any other person, including family members, to access that device”? I don’t think so. The notion of BYOD is already becoming irrelevant. It is time to accept that work is no longer tied to ANY device, and that work could potentially be accessed on EVERY device. And that is another reason why information security and governance should be applied to the information, not to the device. The form of the device is irrelevant, and there will never be a 1:1 relationship between work and devices again.

It’s not your cloud, it’s everyone’s cloud

Cloud storage is a reality, but sharing cloud-level resources is yet to come. All we have achieved so far is to move information storage outside the data centre. Think of this very simple example: let’s say I subscribe to Gartner or AIIM, and I have just downloaded a new report or white paper to read. I find it interesting and I share it with some colleagues and (if I have the right to) with some customers, through email. There is every probability that I have created a dozen instances of that report, most of which will end up being stored or backed up in a cloud service somewhere – quite likely on the same infrastructure from which I downloaded the original paper. And so will many others who have downloaded the same paper. This is madness! Yes, it’s true that I should have been sending everyone a link to the paper instead, but frankly that would force everyone to create accounts, etc., and it’s so much easier to attach it to an email, and I’m too busy. Now, turn this scenario on its head: what if the cloud infrastructure itself could recognise that the original of that white paper is already available on the cloud, and transparently maintain the referential integrity, security and audit trail of a link to the original? This is effectively cloud-level, internet-wide de-duplication. Resource sharing. Combine this with the information granularity mentioned above, and you have massive storage reduction, increased cloud capacity, simpler big-data analytics and an enormous amount of statistical audit-trail material available for analysing user behaviour and information value.
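Here is a minimal sketch of that idea, assuming a hypothetical content-addressed store (none of these names come from a real product): every subsequent upload of a byte-identical paper collapses into a reference to the one original, while the audit trail still records every access.

```python
# Cloud-level de-duplication: one physical copy per unique content,
# with every "copy" recorded as an audited reference to the original.
import hashlib

blobs = {}   # content hash -> bytes: one physical copy per unique content
audit = []   # (user, content hash): who touched which content


def upload(user: str, content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest not in blobs:
        blobs[digest] = content    # first copy: actually store it
    audit.append((user, digest))   # every later "copy" is just a logged link
    return digest                  # a reference, not a duplicate


paper = b"AIIM white paper, 2014 edition"
ref1 = upload("george", paper)     # stored once
ref2 = upload("colleague", paper)  # de-duplicated: same reference returned
assert ref1 == ref2 and len(blobs) == 1
print(f"{len(audit)} recorded accesses, {len(blobs)} physical copy")
```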

The IT organisation becomes irrelevant

The IT organisation as we know it today is arguably the most critical function, and the single largest investment drain, in most organisations. You don’t have to go far to see examples of the criticality of the IT function and the dependency of an organisation on IT service levels. Just look at the recent impact that simple IT malfunctions have had on banking operations in the UK [Lloyds Group apologises for IT glitch]. My prediction, however, is that this mega-critical organisation called IT will collapse in the next few years. A large IT group – as a function, whether it’s outsourced or not – is becoming an irrelevant anachronism, and here’s why: 1) IT no longer controls the end-user infrastructure; that battle is already lost to BYOD. The procurement, deployment and disposition of user assets is no longer an IT function; it has moved to the individual users, who have become a lot more tech-savvy and self-reliant than they were 10 or 20 years ago. 2) IT no longer controls the server infrastructure: with the move to cloud and SaaS (or its many variants: IaaS, PaaS, etc.), keeping the lights on, the servers cool, the backups running and the cables networked will soon cease to be a function of the IT organisation too. 3) IT no longer controls the application infrastructure: business functions are buying capabilities directly at the solution level, often as apps, and these departments are maintaining their own relationships with IT vendors. CMOs, CHROs, CSOs, etc. are the new IT buyers. So, what’s left for the traditional IT organisation to do? Very little else. I can foresee IT becoming an ancillary coordinating function and a governance body. Its role will be to advise the business, define policy and maybe manage some of the vendor relationships – very much like the role that the Compliance or Procurement department has today, and certainly not wielding the power and the budget that it currently holds. That is actually good news for Information Management! Not because IT is an inhibitor today, but because the responsibility for Information Management will finally move to the business, where it always belonged. That move, in turn, will fuel new IT innovation that is driven directly by business need, without the interim “filter” that IT groups inevitably create today. It will also have a significant impact on the operational side of the business, since groups will have more immediate and agile access to new IT capabilities, enabling them to service new business models much faster than they can today.

Personally, I would like all of these predictions to come true today. I don’t have a magic wand, and therefore they won’t. But I do believe that some, if not all, of them are inevitable, and that it’s only a question of time and priority before the landscape of Information Management as we know it today is fundamentally transformed. And I believe that this inevitable transformation will help accelerate both innovation and value.

I’m curious to know your views on this. Do you think these predictions are reasonable, or are they a lot of wishful thinking? If you agree with me, how soon do you think they can become a reality? What would stop them? And what other fundamental changes could be triggered as a result?

I’m looking forward to the debate!

The Great Big File Box in the sky – help me out here…

October 20, 2011

The internet is buzzing with the success stories of Dropbox.com and Box.net: how much they’ve grown, how much they are worth, who’s likely to buy whom, where iCloud/iPages comes into it, etc., etc.

Am I the only one who doesn’t quite get the point here? Yes, I can see how it makes file sharing easier and how it potentially reduces internal IT costs by outsourcing the management of large volumes of information.

How is this ever a good strategy?

We have spent the last 20 years trying to educate companies on the need to organise their information rather than just dumping it on shared file drives. Classification, version control, metadata, granular security, records management, etc. Anything to convince users to think a little further than just “File, Save As”, in order to minimise the junk stored on servers, maximise the chance of finding information when you need it, and maintain some sense of auditability in your operations.

So instead of moving forwards, we’re moving backwards! First SharePoint, and now these wonderful cloud services, allow us to shift our junk from our own file servers to The Great Big File Box in the sky. With no plan, no structure, no governance, no strategy, no security model, no version control, no audit trail.

How is this ever a good idea? I plead ignorance – please help me understand this…

Did anyone ever go to an “all you can eat” buffet restaurant and not come out feeling bloated?

Lawyers are from Mars, Technology is from Venus

September 16, 2011

I spent two excellent days last week at Legal Week’s Corporate Counsel Forum, where I met several new and interesting people and learned an awful lot of things I didn’t know.

But I left the conference very frustrated.

The forum audience comprised primarily senior lawyers: General Counsel and Heads of Legal departments. The topics covered were wide-ranging: crisis management, the ‘moral’ compass, employment, the Bribery Act, ‘Tesco’ law, cross-border teams, intellectual property, competition, etc., etc. Fascinating subjects, some of which I admittedly knew nothing about, and I learned a lot. It gave me a small insight into “a day in the life of a General Counsel” and the sheer diversity of issues they have to be knowledgeable about, deal with and protect themselves (and their company) from.

And in 8 out of 10 conference sessions I wanted to shout: “There is a solution that can help here!”.

It amazes me (and frustrates me!) how much of the technology that other parts of the organisation take for granted seems to be absent from the legal department, as if they are the poor relations of the organisation. I am not talking about highly specialised legal technologies such as eDiscovery, Content Analytics or even Information Risk & Compliance Governance (although these too are available, and seem to be missing from many legal officers’ armoury, but that’s another conversation…). I am talking about basic capabilities that make the daily office operation significantly more efficient:

  • Digitising paper – avoiding the cost and delay of shifting piles of paper around, and the risk of losing it by accident or in a crisis
  • Electronic document repositories – managing security and access controls, reducing duplication, managing versions, allowing online access from anywhere and simple searching
  • Case management – allowing lawyers to organise their work, negotiate with third parties, monitor progress, apply rules and generate reports automatically instead of using spreadsheets
  • Email management – capturing, filtering, organising and routing emails, ensuring compliance
  • Collaboration software – communicating amongst large teams, dispersed in different geographies and timezones

The list goes on… This isn’t trailblazing; these are automation tools and capabilities that have proven their value and have been helping organisations remove basic inefficiencies for the last 10-20 years.

I am not advocating that technology is the answer to everything. Some business problems can be improved with some common sense and a bit of reorganising. Others are far too complex to be tackled by technology alone. But there is certainly enough basic technology to make a General Counsel’s life much simpler.

One of the key messages coming out of the conference was the resource constraints that legal departments are facing. Too much to do, too little time, too few people, too much information to process, too much knowledge to upkeep, too many risks to avoid, too many departments to coordinate, too many regulations to adhere to and too many stakeholders to appease.

So why waste time on menial tasks that can be simplified, automated or eliminated by the use of simple tools, instead of using that time effectively to add value to the elements of the process where technology can’t help?

Whenever I asked that question, the answer was typically “We don’t control the budget”, or “We have other priorities”, or “We don’t have the time to look at new tools”, etc.

Excuses! The question here is not “have I got time to worry about technology?”. The question is “can I afford the luxury of NOT using it?”. If these technologies can improve productivity and reduce costs in the operations, marketing, sales and procurement departments, why not use them to improve the efficiency of the legal department too?

(I would love to hear your views on this, especially if you are an in-house lawyer or work in a legal department.)

Google Wave killed the ECM star…

November 26, 2009

I don’t get excited by technology much these days. I tend to have a rather cynical view of it: typically, it’s either been done before, or it’s a solution looking for a problem. But occasionally something comes along that makes me sit back and take notice. I’ve known about Google Wave for a while now. It’s been heralded as an “email killer”, a “wiki on steroids”, “collaboration on the fly” and various other profound marketing statements, so it’s been sitting in the “wait and see…” corner of my mind for a while. I’m particularly sceptical about products that are declared “game changing” before they are even released…

Yesterday, however, I indulged myself in watching the 1 hour 20 minute demo video of Google Wave. If you have not seen it, get yourself a cup of coffee and some biscuits, lock yourself in a room, stick the headphones on and be prepared to watch a good movie. You’ll laugh out loud too! This isn’t your typical PowerPoint presentation, or even a typical product demo. It’s good fun and it’s important.

However, beyond the functionality that you see demonstrated, pay attention to the personalities of the presenters, the people behind the product. You will begin to understand why Google Wave is significant.  It’s not the technology, it’s the attitude that’s different.

“What’s this got to do with ECM?”, you may ask… It has everything to do with ECM. If Google Wave succeeds as a corporate platform (and I see absolutely no reason why it wouldn’t), it will fundamentally change the ECM industry. Why? Because the ECM industry, and Document Management before it, was invented as a workaround to compensate for NOT being able to do what Google Wave does. Let me explain… let’s look at some of the fundamental capabilities of ECM and how they might change in a Google Wave world:

1. Check-in / Check-out: This was invented primarily to overcome the problem of multiple authors trying to edit the same document at the same time and then having to synchronise their edits. Google Wave’s real-time authoring synchronisation removes the need for asynchronous editing and document locking.

2. Version history: Each Wave contains a complete version history of its lifecycle. The “Playback” function allows users to go back in time and trace the lineage of any edit in the document: a much more detailed and granular approach (see the sketch of a playback log after this list).

3. Lifecycle workflow management: Author, review, comment, modify, authorise edits – all native to the authoring interface and contained within the Wave. With the development of agents/robots, I can imagine adding a “Review-y” (when you see the video you’ll understand what I mean…) to the wave, which will make sure that the right people are invited to, or uninvited from, the wave at various stages, based on the type of discussion or queries raised. All with complete audit trail information contained in the wave itself.

4. Security: Here we have another paradigm shift. Forget for a moment the traditional Access Control Lists (ACLs) that we are all familiar with in the ECM world. Although not explicitly demonstrated in the video, the fact that the protocol supports federation, and has the intelligence to allow or disallow the relevant people inside and outside the firewall to see parts of the wave, means that it supports contextual security. The Wave’s security model is (or appears to be…) contextually adaptive: it defines its access behaviour based on the context/domain in which it appears. So not only can you implement implicit access security, but it effectively comes with Rights Management already built in.

5. Search & retrieval: The search capabilities demonstrated in the video were impressive and, given that this is Google, I don’t think it will have scalability issues, somehow…

6. Publishing: Of course you can take a snapshot of the wave and create a traditional document or another wave, much like we do today. But here’s another shift… (If any of you have seen the Harry Potter movies, you will remember the newspapers with the moving video clips on the page.) Rather than publishing content to websites, blogs, etc., you “embed” the wave on the site, which means real-time, dynamic web publishing rather than static. In the same way that Google Wave obviates the check-in/check-out and review cycles of traditional ECM, it also eliminates the need for elaborate web content publishing cycles. If you need staged publishing (remember, the approval itself is embedded in the wave), it’s easy to have an embedding function that checks for approval and only presents the wave up to the previously approved point!

7. Process / BPM: Process engines can attach Waves as documents, so that’s not an issue. Forms are not an issue either, as they are included in the Wave. Where things are different is that a Wave is the ultimate “active content”: it is an entirely event-driven engine, which adapts its behaviour to external events. So, by introducing “agents” into the Wave audience, you are effectively embedding both rules processing and predefined behaviours into the wave. Think of this simple example: you add a third party to your wave, asking them to complete part of a form inside the wave. Since the wave is jointly monitored by both the internal and external wave environments, as soon as the information is completed, your process agent (which is a participant in the wave and therefore also monitoring it live) immediately recognises this, removes the third party from the wave and continues the process. Remember, the whole audit trail is “recorded” inside the Wave itself!

8. Email management: Redundant. The purpose of email management was to convert emails into document objects, in order to apply document management rules and controls. As the wave replaces email but is a document in its own right, you get both functions effectively rolled into one.

9. Imaging: Imagine embedding both a scanned image and its OCRed text rendition in a wave, creating overlaid documents – much like the Satellite/Map views of Google Maps. Remember, it’s the same people that invented both…
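As promised above, here is a minimal sketch of how a playback-style version history can work; the Wave class here is a hypothetical toy of mine, not Google’s implementation. Every edit is an immutable entry in an append-only log, so any historical state of the document, and the lineage of who changed what, can be reconstructed by replaying the log up to that point.

```python
# An append-only edit log with "Playback": replaying the log up to any
# step reconstructs the document exactly as it was at that moment.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Edit:
    author: str
    position: int
    insert: str


@dataclass
class Wave:
    log: list = field(default_factory=list)   # the complete audit trail

    def edit(self, author: str, position: int, insert: str) -> None:
        self.log.append(Edit(author, position, insert))

    def playback(self, upto=None) -> str:
        # Replay history, optionally stopping at an earlier point in time.
        text = ""
        for e in self.log[:upto]:
            text = text[:e.position] + e.insert + text[e.position:]
        return text


w = Wave()
w.edit("rob", 0, "Hello")
w.edit("lars", 5, " world")
w.edit("rob", 5, ",")
print(w.playback())        # "Hello, world" - the current state
print(w.playback(upto=2))  # "Hello world"  - the wave one edit ago
```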

I could go on and on… Yes, there are holes in all of the above, and there will be some niche scenarios where this will not work quite the same way. But for 90% of standard office communications and documents, I can see Google Wave turning the whole ECM industry on its head. My colleagues have heard me rant repeatedly about the need for a new ECM strategy that is no longer bound to the file/folder paradigm. From what I’ve seen so far, a Wave is exactly that: a self-contained, self-governing, multiple-content object, which includes both content and its associated behaviours.

As an ECM practitioner, am I scared? No, I’m excited – very excited! The ECM software, as we know it today, may change dramatically in the next 5 years because of Google Wave.  But the ECM practices and principles are still required. We just need to make sure that we adapt them to the 21st century. So ECM guys, roll your sleeves up, there is A LOT of work to be done here…
