August 26, 2014 / cognidox

New release of OfficeToPDF server-based PDF conversion tool

We’ve made a new release (1.4) of our OfficeToPDF open source project and pushed the code to its usual home on CodePlex (http://officetopdf.codeplex.com/releases/view/129209).

Apart from general improvements to stability and exception handling, the latest version now supports PDF conversion of additional file types:

  • Microsoft Project .mpp files (requires MS Project 2010 or later)
  • Microsoft Visio .vsdx and .vsdm files (requires MS Visio 2013 or later)
  • Comma-separated values (.csv) files
  • OpenDocument .odt, .odc and .odp files
  • Microsoft PowerPoint template .pot, .potm and .potx files

This release also adds new flags, such as /markup, which allows document markup to appear in the PDF when converting Word documents, and /pdfa, which creates PDF/A files in supported applications (PowerPoint, Word, Visio and Publisher).
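
As an illustration, here are a couple of hypothetical invocations (the file names are made up; check the project page for the full switch reference):

    REM Convert a Word document, keeping its markup visible in the PDF
    OfficeToPDF.exe /markup review-draft.docx review-draft.pdf

    REM Produce a PDF/A archival file from a PowerPoint deck
    OfficeToPDF.exe /pdfa q3-roadmap.pptx q3-roadmap.pdf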

If you measure popularity by downloads and user reviews, it’s true that OfficeToPDF isn’t the most popular of our various open source projects. But it addresses a very specific need, and user feedback gives us the impression that it’s considered very useful.

We started OfficeToPDF back when fewer applications could save as PDF, and we wanted a command-line utility that converted documents on the server rather than on individual desktop / laptop clients. At the time we had an integration with a leading third-party PDF conversion and assembly tool. The USP for that tool was that it could convert from a very large number of document types, and it could be linked to ECM products.

But the majority of the supported types were never actually encountered by our users, and we were primarily interested in integration with our own CogniDox tool. The vendor then seemed to go through a business model change, and the most obvious impact was a hike in prices: the cost per server now started in the $20K to $24K range, and the annual support fee was considered high as a result. Most of our users stopped using it and switched to OfficeToPDF instead.

Our house rule is that if any of our CogniDox technologies can stand alone and serve a purpose independent of the CogniDox application, then we open source it. As a conservative estimate, we’re ‘giving away’ at least $10K of value in this software. We still get the occasional request from people to buy a license, and they seem a little confused when we send them the download link and tell them it’s free.

The fact that other developers can freely integrate this code into their process tools and adapt the code to their needs is just as important as zero cost.

Maintaining open source projects when you are busy working “to keep the lights on” isn’t always easy, and it takes a well-funded project to build up a sizeable developer (as opposed to user) community that can help. It still feels good to do it, in the pure spirit of open source development.

August 20, 2014 / cognidox

Do You Trust Your Ex-Employees?

It’s one thing to ask whether companies truly trust their employees with company information, but I think most would agree that having to trust their ex-employees with it is definitely not desirable.

I was thinking about this while closing down the logins of a recent leaver on our various SaaS accounts. The internal systems were relatively straightforward – it’s all controlled via a directory service, so one inactivation command disabled all logins to our tools.
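
In an Active Directory environment, for example, that single step might look something like the sketch below (assuming the AD PowerShell module is available; the account name is made up):

    # Disable the leaver's directory account; anything that
    # authenticates against the directory is cut off at once
    Disable-ADAccount -Identity jbloggs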

But, like many companies out there, we’ve signed up to various ‘must have’ SaaS applications running on the public cloud. I’m talking about sales tracking tools, sites for desktop screen-sharing and, of course, social media sites. The social networking sites are arguably the worst, because they accept credentials from consumer-facing sites (e.g. Twitter, Google, Facebook, Hotmail) and therefore blur the distinction between your personal accounts and company / enterprise usage. Signing up to a work-related account using your personal email address can bring problems for you as an employee. With things like Microsoft accounts, where multiple email addresses can be associated with a single account, an employee who has joined a work address to a personal one risks their former employer using that work address to lock them out of their personal account.

Add to this the security problems caused when an ex-employee’s devices are hacked or stolen – along with the linked work accounts. An employee might alert the company to the problem, but would an ex-employee do the same?

Going back to my task in hand, there was no fear in this case of a ‘bad leaver’. It was just a chore trying to remember all the places we’d shared or granted access to accounts. We were quick to sign up whenever we found a good application, but we kept no records, because ever shutting these accounts down seemed a remote possibility.

Recent survey statistics suggest that many companies don’t even bother to try closing accounts. One survey found that 89% of ex-employees could still access confidential information using their ‘old’ logins, on sites such as Salesforce, Facebook, Google Apps, etc. It also found that 45% of these ex-employees did log in at least once. That’s close to another statistic I’ve seen, where 51% of companies found that ex-employees had tried to access company data.

IT departments would argue that part of the problem here is that nobody (apart from the users of course) knows these applications are in use. Staff create workspaces on the file sharing sites because it serves a pragmatic need during one busy period or another. The same solution is then re-used to store files that might be needed when access to the company network isn’t possible or convenient. That’s why a huge 68% admit to storing work information in their personal file-sharing cloud.

Another real possibility is that passwords for these applications are shared. There are various reasons for this, but chief among them is avoiding cost and maximizing simplicity. So, say five people have access and one leaves the company. The other four still need to carry on using the tool. Do they remember to change the password? Probably not.

It’s in the interests of SaaS vendors to make the sign-up process as easy as possible. But while I was struggling with the chore of closing down those accounts, my allegiance was definitely with those who warn us about the lack of security that this can bring.

Like many things, a little planning and record-keeping will help in the long run. Here are some suggestions to bear in mind:

  • Keep a list of the services you’re using – and help IT by sharing it with them
  • When signing up for a site, spend some time finding out how to manage accounts for the day you need to disable or remove one. Keep a record of the process somewhere central
  • As part of your social media policy, explain to employees that mixing personal email addresses with work accounts is not a good idea
  • As part of the employee exit process, include a task that encourages leavers to remove their former work email address from any personal accounts

July 9, 2014 / cognidox

Virus-infected Office Macro threats and self-signed SSL certificates

I saw an article today whose headline (“Remember macro viruses? Infected Word and Excel files? They’re back…”) drew my eye. It also got coverage on The Register in their usual style :-).

The gist is that virus-infected macros fell out of fashion after security changes in Office, but the target is now the user rather than Office itself. The aim is to persuade the user that the document is somehow more secure because the macro is present, and to get them to click to enable the content.

The article (and the comments that follow) is mostly about random documents sent to you from somewhere out on the Internet. Clicking to open those (let alone to enable macros) is rarely a good idea.

Inside an enterprise, macros are used more frequently than the article acknowledges. They’re used to add extra automation functionality to Word and Excel. In this case, the macro-enabled document is often from a known colleague, and the enterprise web domain it came from is a trusted zone.

Typically, a layered security model would be used inside an enterprise to defend against this threat.

The first perimeter layer should be mail scanning – do you really need macro-enabled documents coming in? If not, block them from inbound mail.
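
As one hypothetical way to do that at the mail gateway, a Postfix MIME header check can reject macro-enabled Office attachments by file extension (the pattern and file path are illustrative; your gateway will differ):

    # /etc/postfix/mime_header_checks (enable with
    # mime_header_checks = regexp:/etc/postfix/mime_header_checks in main.cf)
    /name=.*\.(docm|xlsm|pptm)/ REJECT Macro-enabled Office documents are not accepted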

The next layer should be ensuring that all client PCs are up to date with anti-virus signatures. Check that your enterprise anti-virus solution is scanning Office documents. This catches cases where a document has come in on a USB stick or via a file sharing service like Dropbox.

Application level filtering such as setting the macro security to “Disable all macros except digitally signed macros” provides a final layer, but it has the disadvantage that signing isn’t well understood.

A way to improve security (not mentioned in the article) for behind-the-firewall macro-enabled usage is to generate and use a self-signed SSL security certificate. These are not so suitable for public websites, but are useful for internal sites and applications such as code signing (to confirm the software author and guarantee the code has not been altered). This is especially true if the organisation is large and there’s a chance the colleague sending the file is not known to the recipient.

Self-signed certificates can be created for free using a tool such as the OpenSSL toolkit, which can generate an RSA private key and CSR (Certificate Signing Request) for Linux/Apache environments. In a Windows-based environment, you can use a tool such as SelfCert.exe, or generate a code signing certificate using Microsoft Certificate Services.
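
As a rough sketch of the OpenSSL route (file names, key size and validity period are illustrative):

    # Generate a 2048-bit RSA private key
    openssl genrsa -out macro-signing.key 2048

    # Create a certificate signing request (you are prompted for organisation details)
    openssl req -new -key macro-signing.key -out macro-signing.csr

    # Or skip the CSR and self-sign directly, valid for one year
    openssl req -x509 -new -key macro-signing.key -days 365 -out macro-signing.crt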

In some implementations the end-user will still get a warning and have to accept the certificate. Some argue this can promote bad habits if end-users become blasé about accepting SSL certificates because “they were told to”. However, in the internal enterprise model we are addressing, the way around this is to pre-install the SSL certificate on every machine. That way, the trust question is never asked. A means to achieve this is for IT departments to push the certificate out as a trusted publisher to client PCs using group policies; Microsoft’s TechNet documentation covers this in more detail.
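
To sanity-check on a single client before a wider group policy rollout, something like the following should work from an elevated prompt (re-using the illustrative certificate file from the sketch above):

    REM Add the certificate to the local machine's Trusted Publishers store
    certutil -addstore TrustedPublisher macro-signing.crt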

June 17, 2014 / cognidox

Improve Email Management with CogniDox DMS Integration

Sorry to state the obvious, but you receive a lot of emails and your number of unread messages only ever seems to go up.

It’s not just you. The statistics [1] say yours is one of more than 3 billion active email accounts worldwide, and you are in line to receive your share of the roughly 150 billion emails sent worldwide each day. On average, corporate users send or receive around 120 emails a day – roughly 80 received and 40 sent. Other studies suggest that the average knowledge worker spends 14.5 hours per week reading and answering emails.

You can tweet and update your social media statuses as much as you like, but your email inbox will still contain the same number of messages. And, that will likely increase by around 5% in the next 12 months.

There is a lot of criticism of email along the lines that it shreds our attention for other tasks and kills our productivity and time management. Yet, the email client continues to be our business “command and control centre” where work is received and tasks are delegated.

Perhaps the better strategy is to improve email, not replace it.

Researchers identified the problems that people experience with task management in email as far back as 2003 [2], but the changes required to solve these are only slowly appearing in email clients. For example:

  • Most email clients support threading of messages via Find Related or Open Message in Conversation.
  • Most email clients allow you to establish rules that sort email into different folders as it arrives.
  • It is usually possible to create a to-do list from the email message.
  • Tools such as Google Priority Inbox or SaneBox use machine learning to decide which emails appear in your ‘important’ list and which get moved to a folder for later reading.
  • A number of email client add-on / plug-in tools combine social information about contacts with emails from/to those contacts.

But some problems with email should probably not be solved in the email client.

Email is at its best as a notification engine, and email used as file storage is not playing to its strengths (ask any Exchange Server administrator :-)). Other business applications are better at managing content. For example, using email attachments to forward documents for review is not a good idea. If anything changes that requires a new version of the document, the reviewers have to sort and search their email messages to make sure they respond to the correct version. Links in email messages to a document repository are a much better idea. Generating those email notifications as part of a document review workflow in the document control system is even better.

Another example is receiving emails (with or without attachments) from external sources that need to be shared with a wider team. An example might be a bid / tender process where the Sales Account Manager receives a set of documents that require a response or completion. Rather than forward these in email, better to store them directly in the document repository where they can be version controlled, reviewed, and edited until the content is final and approved.

To follow on with another example: at some point the approved documents need to be sent back to the original sender. It is much better if that can be done by directly referencing the document in the repository (rather than a copy saved to a hard drive or sent as an email attachment). It removes the opportunity for error.

One ‘problem’ in achieving this is that we use so many different email client applications to read our emails. At present, around 50% of email reading is done on mobile devices, with the rest split between desktop and webmail (around 30% and 20% respectively). But in the business office environment the typical usage is desktop-based, with Microsoft Outlook as the most widely used email client.

This is very similar to the fact that the majority of documents that end up in a document repository are produced using the Microsoft Office desktop tools – Word, PowerPoint, and Excel. In a previous software release we dealt with that by providing an add-in for those applications. The aim was to encourage good practice (storing documents in a controlled manner) without having to leave the tools in which they were created.

The solution for better email management therefore is to extend our Microsoft Office Add-in to include support for Outlook.

There was an important shift with the introduction of Outlook 2007, which brought in a new UI and event interface. It’s different enough that we decided not to support Outlook 2000, 2003 or Outlook Express. The Outlook add-in is compatible with Outlook 2007, 2010 and 2013 running under Microsoft Windows XP, Vista, 7 and 8.

We also had a constraint that our internal CogniDox API had to be extended to support email integration. We made the required changes in CogniDox 8.8.0, and so using that version (or later) is mandatory to make use of the latest Add-in version.

The new Add-in provides features for Outlook such as:

  • Save an Outlook .msg file as a document in CogniDox (attachments can be included)
  • Save one or more attachments from an email message as individual documents in CogniDox
  • Attach a CogniDox document to an email as a link during composition (for internal recipients)
  • Attach a CogniDox document to an email as a file during composition (for external recipients)

The Outlook Add-in appears as a sidebar in the same style as the Word, Excel and PowerPoint add-ins. One extra feature is support for drag and drop: for example, select a message in Outlook and drag it onto either a category or a document title in the Browse View. It will then create either a new document or a new version, as either a draft or an issue.

The new Add-in and User Guide are available from our support site. You will need a user account to access the site; the software is free to download for existing customers.

Notes:

[1] Based on various reports from The Radicati Group’s Email Statistics Reports, http://www.radicati.com/

[2] Bellotti, V., Ducheneaut, N., Howard, M. and Smith, I., “Taking Email to Task: The Design and Evaluation of a Task Management Centred Email Tool”, 2003 [PDF]

June 11, 2014 / cognidox

The Technology Behind LinkedIn Publish

LinkedIn has opened its publishing platform, called LinkedIn Publish, to the rest of us who are not “Influencers”. You know you have this feature if there is a pencil icon in the “Share an update” field on your LinkedIn homepage. If you don’t, you can ask for access here: http://specialedition.linkedin.com/publishing/.

It’s been promoted as a way to publish “long form posts” (as opposed to the character-limited status update). It’s not exactly clear why this isn’t simply called a blog; maybe that’s to avoid comparison with the other blogging platforms.

The social media commentators have been active in discussing it, and their advice on whether to use it seems to be: why not? It’s another way to get engagement. And it reaches a more focussed and targeted audience than other platforms.

But it isn’t a ‘silver bullet’. You need a large number of connections or followers to be effective and your content needs to be read, liked, and shared to be promoted. There’s also the assumption that your connections are interested in what you have to say – many of us have a mixed bag when it comes to LinkedIn connections. When I joined (in 2004, according to my Account info) the main rationale was to stay in touch with former work colleagues. They are now doing all manner of things, and not necessarily interested in what I am doing today.

I have no insights into whether this or Facebook, Google+, or something else is the future of social media. So I did the obvious geeky thing and looked instead at the technology. The rich text editor they’re using is TinyMCE (the main alternative is CKEditor). It’s been themed in the LinkedIn style but otherwise looks like an ‘out of the box’ TinyMCE toolbar. You can do the expected things like embedding images and other media, but you can’t insert raw HTML. You can still (for example) embed a video via its sharing code, so the restriction may not be all that important to you. But you can use HTML in WordPress.

If you follow the advice I’ve seen on the web and use Microsoft Word to edit the post, then copy and paste it directly into TinyMCE, I think you will encounter formatting issues sooner rather than later.

One major difference / deficiency compared to WordPress is the lack of categories / tags that you can assign to a post. That will severely hamper search for your future readers when you’ve amassed a decent number of posts. If I understand correctly, tagging your content to suitable channels is something that LinkedIn Publish does by algorithm. You can’t control it.

Also, WordPress is more transparent when it comes to where your posts are stored. It’s my guess this post will be stored at one of the two LinkedIn data centres in either Virginia or Texas. But it’s under their control, not mine.

It raises two thoughts for me. The first is that I’d prefer to have my content stored in a document control repository (for version control, review, approval) and then upload it automatically to the LinkedIn Publish site. The second is that marketing folk will want to publish content to many sites (content syndication) and it might be a good feature for us to consider adding LinkedIn Publish to our existing WordPress publishing plug-in. One for the roadmap.

May 28, 2014 / cognidox

Document Control, ISO 9001 and CogniDox DMS

ISO 9001:2008 is not prescriptive – it provides a framework and good advice but generally leaves it up to the company to do what they consider best, and that includes adopting software tools or methodologies. There are, for example, only six documented procedures included as mandatory.

This isn’t going to change. The draft version of ISO 9001:2015 looks like it will merge documents and records under the term “documented information”, and there will be no mandatory quality manual, procedures or quality records. That won’t mean these documents have no value, but rather that there will be more flexibility in how documentation is managed.

The problem with flexibility is that it can leave a newcomer to ISO 9001 in a confused state. Where do I start? What do I need to do? How do I know when we’re ready for audit? This is why ISO 9001 is often compared with other continuous improvement approaches such as Lean Six Sigma (LSS). Some have said that ISO 9001 provides the “what” and LSS provides the “how”. In truth, that’s an overstatement, because the “tools” associated with LSS tend to be problem-solving techniques rather than tools as such, and they are not coordinated in any particular way.

There are blogs out there that can help with ISO 9001. One useful post this week came from The ISO 9001 Blog and offered seven tips for a documented procedure to control your documents.

Take time to read the post in full, but the tips are:

  1. Approve for Adequacy (who is responsible for approving this)
  2. Review/Update and Re-Approve
  3. Changes and Revision Status Identified
  4. Relevant Versions at Point of Use
  5. Legible and Identifiable
  6. Control of External Documents
  7. Prevent Use of Obsolete Documents

These are very good tips, but the post could be more prescriptive about exactly how to carry them out. There’s a mention that “this is often easier with electronic versions than with paper copies”, but in my opinion that advice stops far short. There is a massive advantage in using an electronic DMS to implement these tips.

To rattle through a quick mapping of tips to CogniDox features, we would find that the ability to create workflows with mandatory approvers delivers #1. The review and notification process takes care of #2. Version history and the event log provides #3. A clear link to latest and approved-latest versions solves #4 (as does the ability to hide any version other than the approved-latest one). Tip #5 is supported by embedded metadata in the documents, so readers can see what they are using. We’d look to limited partner access and/or the extranet portal functionality for #6. Finally, tip #7 can be achieved by marking the document as obsolete.

The electronic DMS approach also allows you to add extra tips. For example, using a graphical and interactive version of a procedure (such as a flowchart) makes it far easier to use than a printed page. Using email notification links as an alternative to attachments is another example.

But the absolute stand-out argument in favour of an electronic DMS is the ability it provides to integrate with other line-of-business systems. The technology we use has a major influence on service innovation: by linking (for example) the DMS with Help Desk systems, we can get better visibility of where our customers are reporting difficulties and which document assets (including software, user guides, etc.) might be affected.

The acceleration in the generation of data (aka big data) puts even more pressure on quality compliance. Without systems to help, keeping up may prove impossible.

May 12, 2014 / cognidox

Managing WordPress Blogs from the CogniDox DMS

This latest addition to our theme “projects that can change the way your company works” looks at the topic of blogging in small to medium companies with a B2B business model.

There is an ocean of words out there advising us that inbound marketing is the future and that the traditional sales funnel concept is obsolete. Now, it’s all about customer success management (CSM) and how you use your content strategy to guide the customer experience. Evidence seems to support this: a HubSpot study in 2013 showed that publishing blog posts daily (as opposed to once per month) brings 70% more organic search traffic and 12% more referral traffic to the website. Since the primary goal of an inbound marketing strategy is to attract visitors to your site, this sounds very appealing.

Regular blogging is key to search engine optimisation too. Anyone who has read even an introduction to SEO knows how important it is to maintain website ‘freshness’ and encourage good-quality backlinks from other websites. Again, one of the best ways to do that is to write new blog posts on a regular basis.

Sadly, just creating a blog page and then ignoring it has no positive effect on web traffic whatsoever.

This becomes a challenge for companies that are just not used to this style of marketing. I have in mind tech companies who are at least one level removed from the end-consumer product, and who traditionally got by on datasheets and maybe the occasional brochure. One basic problem is: what to write about? There is good advice out there that may help; in summary, don’t just think of a blog post as an opinion piece, but also consider other content types such as how-to posts, interviews, trade show reviews, top-ten lists, and so on.

Choosing the blogging platform appears to be the easy part. WordPress is the most popular and is in use at more than 60 million websites with over 44 million blog posts published each month. According to BuiltWith, WP has over 92% market share of high-traffic blog sites. Rivals to WP such as Blogger and Tumblr pale by comparison where usage is concerned.

But there are tactical problems when using a blogging platform in a typical business. Compared to the simple case of the single-author blog, the following issues are common:

  • Multi-author blogging is the norm
  • Communication between contributors is key
  • Publishing approval authority is unclear
  • Editorial calendars are hard to manage
  • Limited access to WP admin accounts

The reality for many companies is that too few people (usually in Marketing) have more than their preferred share of responsibility for producing content and ensuring it follows the correct company message. They need help from colleagues to produce the flow of content and they need timely approvals from senior management so they can publish with confidence.

It would be a fantastic scenario for any blog editor to have a backlog of articles that are at various stages of review, and a simple approval workflow to mark articles as ready-to-go.

To facilitate this, we added features in CogniDox to help the internal management of blog posts and their publication to the WordPress platform.

CogniDox allows a blog post to be created in-house using tools such as Microsoft Word or the built-in online rich text editor, and then sent to colleagues for review. Once it has been reviewed and is ready, it goes for approval. Once approved, a CogniDox plug-in allows the post to be published directly on a WordPress.com or WordPress.org site. The plug-in shows you how the post will appear on the WP site, and lets you add categories and tags.

It could also be integrated into a Joomla-based website to appear alongside other web pages and tools, by using an open-source tool we’ve built called WordBridge.

Once published, you get the other benefits of WordPress – a vast array of themes and plugins that will enable you to extend your blogging functionality into areas such as adding social media buttons, photo galleries, mailing list forms, e-commerce or membership management.

If you would like more information about this and other CogniDox features, contact us for a demo.

April 30, 2014 / cognidox

Enterprise Search is critical for Information Management

Continuing the theme of projects that can change the way your company works, in this post we’ll look at the topic of Information Findability.

Information Findability is determined primarily by two factors:

  1. The quality of information layout in categories, folders, cabinets, and similar structures; and
  2. The capability of the search engine.

The quality of the user interface could be a third factor, or you can see it as part of the information layout. What’s behind the first factor is the intuitiveness of navigation – more simply stated, how obvious is it that the information you seek will be here rather than there? The problem with information layout is that it is virtually impossible for one structure to suit all needs. Take a trivial example, such as a purchase order (PO) to procure a piece of test equipment for a project. Where does the PO document belong? In Finance or in R&D? In the specific project folder? The correct answer is: in all of them.

Increasingly, modern document management systems are realising the value of the “virtual folder”. This presents a list of documents that is relevant to where the user is now in their navigation. The list is produced dynamically (from tags) and is most easily displayed with web-based technology where pages are commonly dynamic in nature anyway. This is what end-users expect from Web 2.0 systems and don’t get from network file shares. There is a trade-off here because users don’t want *too much* dynamism, and will expect repeatable results when they navigate to the same place. It has to be both familiar and contain the relevant documents. A case of “don’t make me think” in action.

The capability of the search engine is down to three attributes: Faster, Deeper, and Wider.

Let’s start with fastness. Search engines work by converting all the file formats they can read into a single format called an index. The index gathers together unstructured data from many diverse file formats and provides very rapid retrieval of search results compared to, for example, a lookup in an SQL database. To make the index more efficient, common ‘stop’ words are trimmed out and words are reduced to their stems.

The index is created by crawling the content, usually automated to start at specified times or intervals. It can also be event-driven, i.e. the index is updated every time new content is added. The methods differ in the system resources they require, and the ‘right’ answer depends on how real-time the data needs to be in order to be valuable. Most document systems incrementally update the index every 15 minutes or so, because that balances a reasonable number of newly added documents against the performance impact of the indexing process.
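
As a concrete (and hypothetical) example of the scheduled approach, a 15-minute incremental crawl could be driven by a cron entry along these lines, assuming the DMS provides an indexer script:

    # Re-index documents added or changed since the last run, every 15 minutes
    */15 * * * * /opt/dms/bin/incremental-index --since-last-run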

Let’s move on to deepness. A deep search is one where the most statistically relevant result is obtained. Imagine a system that only supported search over a limited data model (such as a fixed taxonomy or ontology): it’s unlikely to be useful. At a minimum, we should be looking for full-text search capabilities. It must not be the case that an end-user has to manually tag content with metadata in order for it to be found. If a document contains the relevant words, it should be found. Whether it is displayed to a user may depend on the user’s right to see the document (“security trimming”), but it should be found. Another important factor for depth is flexibility in what file types can be indexed. The more file formats that can be indexed, the better, and it’s also essential to be able to add custom formats.

Finally, there is wideness. A wide search is one where you are not limited to a single source (or information silo); instead, content is indexed from a set of sources such as other content repositories, business systems and network file shares. This is often called federated search. The value is fairly obvious: as well as retrieving product part numbers from one system, it may be beneficial to combine that with order information from an ERP system, or fault reports from a Help Desk system. The problem with combining data from heterogeneous sources is that success depends on data formats and the feasibility of a search API; it doesn’t help that every company has a different set of business systems. Typically, your mileage with the wideness factor depends on the quality of the systems integration / search consultants that you use, and on the openness (or otherwise) of the tools they use.

A software product such as CogniDox can significantly impact fastness and deepness, and have some impact on wideness.

One approach is to treat Search as an add-on, and integrate with leading proprietary software search platforms. But, leading proprietary products such as FAST, Autonomy, and Endeca have been acquired and merged into product lines, making the situation uncertain for their standalone customers. And, as the influence of these proprietary solutions diminished, the Apache Solr® open source solution has grown in strength as a powerful, scalable, cross-platform search engine (https://lucene.apache.org/solr/).

Therefore, CogniDox provides built-in search powered by the Solr engine. Solr has a rich set of features such as faceted search, full text search, rich document handling and dynamic clustering. Out-of-the-box, it provides indexed search for CogniDox documents (including full text search). The Apache Tika project, which is commonly used alongside Solr, has an extensive list of supported Document Formats (http://tika.apache.org/1.5/formats.html).
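
To give a flavour of the Solr HTTP API, a full-text query with a facet on document type might look like this (the core name and field names here are illustrative, not the actual CogniDox schema):

    # Query for documents mentioning "release process", faceted by document type
    curl 'http://localhost:8983/solr/cognidox/select?q=text:"release+process"&facet=true&facet.field=doctype&rows=10'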

It basically provides us with fastness and deepness; and by virtue of the fact that it is open source, it leaves the way clear for any wideness initiative.

April 10, 2014 / cognidox

Building a Product Release Engine with CogniDox

In this blog I want to start a new theme – projects that can change the way your company works.

All companies are unique to a degree, but there are many common issues that the majority are trying to solve. In the technology world we talk so much about features that it can be difficult to relate these back to the problems they are meant to address. I want to approach it from the other direction: what is the problem and how can software (CogniDox in particular) address that problem?

In the first of this series I’m going to consider a biggie: how do we make an efficient process for actually releasing a product? We read scholarly articles about innovation and the overall process / development methodologies that might help, but what about the mechanics of actually making a product release in the leanest manner possible?

If we succeed in this, we can say we have an efficient Product Release Engine.

Most high-tech companies have multiple products. Most products combine multiple project deliverables from different hardware and software teams, as well as from technical writers and training. Most product releases are complex and require a specific configuration of elements to work reliably and properly.

You can have great teams using the best tools, but still suffer from information silos in your product development. The hardware team may have files created in AutoCAD or SolidWorks for CAD design. The software team may use Git or Perforce for the version control of software programs. The technical authors may use a DITA-compliant XML tool for the user guides. And so it continues across Training, Technical Support, and other groups.

There are two other common problems.

The first is that the task of making a product release can be hard to pin on any one job role. It could be the Product Manager, but they may see their job as managing user requirements, prioritising product features and building roadmaps. Project Managers may only be concerned with milestones and finishing on time, rather than what happens after. It could be the Software team – after all, they’ll likely have a software configuration tool in place and will be familiar with the language of branches, builds and releases. But software is only one stream in the overall product, so this is necessary but not sufficient. The solution is to make this an explicit job title: the Release Manager. It doesn’t always have to be a full-time role or person, but it should be clear who is responsible. Their responsibility is to validate that all release components have been approved for release by the technical, product and executive teams.

The second problem is that there is often a gap between the product deliverables and the entitlements of the customers receiving the product. Even if someone is responsible for the release, they lack the tools to help them manage a matrix of products and customers. Even when it’s managed using the ubiquitous spreadsheet, a manual decision step is still required before a customer receives anything.

So what can be done to link together the different teams that contribute to a product development and prevent ‘silos of information’ forming in the company?

CogniDox is a ‘silo-linker’ that solves this problem and gives the Release Manager a useful set of tools. Here’s how:

  • It provides a common repository for all the individual specialist tool outputs. Infeeds from mechanical drawing, software development and technical authoring (to name a few) all end up in the same place
  • It enables a common review and approval workflow across teams and job functions. This is augmented by dashboard tools to instantly see the ‘health’ of a product category; to judge whether it is ready for a Gate Review meeting, for example
  • Document Holder files can be used to describe what components make up the product. A Document Holder is a special CogniDox document type (DH) that is used to link documents together. DH documents are XML files that are automatically formatted into a web page with links and information
  • The Product Manager user role in CogniDox includes features useful for Release Managers. For example, they can build a release configuration where content is tagged with a license and customers are assigned licenses for access entitlement. Licenses can be linked to anything that binds a set of customers: product type, geography, service level agreement, generation of product, early adopters, and so on
  • At product release time, content is automatically copied to an extranet web portal where it is available for customer download. No more panics at 6pm Friday (always the designated release day) to “get everything out the door”. Not to mention the work on Saturday and Sunday to correct the errors
  • Download activity by customers is automatically tracked at the individual document level. Analytics like this can give insight into how individual content (such as a new release of software drivers or a quick start guide) is actually being consumed

If you’d like to know more about the tools that CogniDox provides for Product Managers, feel free to contact us for more information or a demo: http://www.cognidox.com/about-us/contact-us

March 30, 2014 / cognidox

Document Management and Document Control: Is there a difference?

It’s been a while since I’ve blogged so I thought it would be good to get ‘back in the saddle’ by getting back to the basics of what I believe to be important in file sharing, document management and document control.

Let’s start with the most obvious benefit of document management and document control.

In a busy company where lots of unstructured information abounds, the quality of data is vital. If data quality is poor due to lack of version control or duplication, time is lost as staff work out what information they can trust.

A model for this is a pyramid of information management maturity – the nearer the top of the pyramid, the more capable a company is in information governance, quality management and lifecycle management. The levels, from the bottom up, are as follows:

File Sharing

At the lowest level, a company with a need for collaboration support can solve its problem with a file sharing solution such as Dropbox or Google Drive. These are often cheap, but the emphasis is very much on storage. These collaborative tools offer very basic version control, by just rolling back or forward in a limited number of steps. More like ‘undo’ than proper version control.

Document Management

Moving up a level, a company may achieve Document Management maturity status by using a repository-based tool such as SharePoint. It is often said that SharePoint is now synonymous with document management. However, this is defined as being able to share a document by saving it to a document management server. Document management is reduced to providing lists of documents stored on a server, admittedly with version control, but often not much more than that.

Having now seen a number of companies start and then abandon SharePoint projects, the outcome seems to come down to three critical variables:

  1. The project is run by IT for Marketing, with poor buy-in from other users. This leads to the solution being imposed on end-users, and poor adoption (~30%) is the result. The company is now in a bad place: it has a ‘solution’ but nobody is using it.
  2. SharePoint requires extensive customisation and is not much of an out-of-the-box application, so consultants are brought in to help. The project gets diverted into a set of IT challenges rather than business challenges, and even integration with other Microsoft products such as InfoPath becomes a “worst experience ever”.
  3. SharePoint shows its roots in FrontPage Designer and shared network drives, and just becomes a cluster of intranets for individual teams. Each team has a list of documents they like to access, and if one team wants a document from another it gets cloned into their site. More often than not, it doesn’t replace the file share either, so the problem of document duplication gets worse, not better.

Commenting on SharePoint is like taking a photo of a moving car, because it has gone through so many changes even from SP2007 to SP2013. It’s gone beyond the simple collaboration tool that it was in SP2007, for example, and it is still very much in flux. It will be fascinating to see how Microsoft packages its solutions over the next three years as it reconciles Office 365, Azure, OneDrive and SharePoint. My instinct tells me that SharePoint will be the one that gets assimilated under the OneDrive brand. It will be intriguing to see what that means for on-premise SharePoint. The cost for on-premise SharePoint 2013 increased by 40%; the costs for OneDrive, Azure and hosted Office 365 continue to fall.

Document Control

There is a maturity level above this; and in line with standards such as ISO 9001 we should call this Document Control. The key extra capability is that there is a document lifecycle model and there will be support for workflows such as review and approval processes. There needs to be a document control procedure, with only one master version of each document. There needs to be an audit trail and a full activity history. This goes beyond event logs; it should be possible to easily view the activity history around a document when it was in a previous version.

It’s well known that ISO places no explicit requirements on the DMS software itself, so you can search for “is SharePoint 2010 ISO compliant” to your heart’s content and you won’t get a definitive answer. What you might find, however, is that the case studies and SlideShare presentations on “how we did ISO with SharePoint” usually involve SP plus extra software from the third-party SP solution partner ecosystem.

Ease of use and installation is a major factor. If you can get a document control solution up and running quickly you can start to engage the business and encourage them to customise it. Therein lies the path to 100% adoption.

Search and information ‘findability’ is a major factor. Partly, this problem is inherited from the hierarchical nature of folders and subfolders common in a file share. Users have to navigate through a maze of folders to locate the document that is of interest. Documents should be unique; categories should be dynamic and virtual. The web page that a user sees should be constructed from documents that are tagged as relevant to that page. It should not be as I read in an RFI from a SharePoint user: “Because of the hierarchical nature of this structure, duplicate documents often exist on the system as staff are unsure what exact folder to upload the document to.”

This leads to the second part of the findability problem: the quality of the integrated enterprise search engine. The same user also complains: “Even though Fast Search has been installed, it does not easily locate the relevant documents as no ECM functionality such as metadata etc. has been applied to these documents.” The search engine should be able to index every text element in the repository (including metadata and text in image files, as well as the text-heavy Word and PDF files) and present results according to their relevance to the search string.

Information security is a major factor. Access control systems are still rooted in simple Read or Read-Write permission rights. Much more is needed. Once the user’s right to access a document is established, it then becomes necessary to determine what the user can do with the document. Can they approve it, for example? And then there is access at the document repository level. The DMS should be able to support collaboration with partially-trusted users (contractor, freelancers, JV partners, etc.), allowing them DMS features as needed but without disclosure of other categories such as HR and Finance that are outside the ‘need to know’ boundary. It’s essential that the search engine understands the information security controls that are in place and performs security trimming on any results before they are displayed to the search user.

Conclusions

It’s always the case that the right software solution depends on your requirements. If you are kicking around a few ideas for a startup, there is no reason why file sharing won’t meet your needs. It’s when you have to justify the wasted time of tens or hundreds of employees, or meet QA / FDA regulatory compliance and other supply chain governance requirements, that you begin to understand the difference between file sharing and document management / document control.
