Estimating the Value of Technical Documentation

This post was inspired by Melissa Rach’s post on “The Value of Content” (good stuff there, so go read the original post for the details).

I thought I’d take a stab at converting her steps to my own technical documentation work, and here’s what I came up with:

  • Define your content – That would be user guides, references, release notes, and the like. What is it all doing? It guides customers in using our products to their fullest potential. If we get into things like troubleshooting guides, we add the benefit of reducing support costs to the company by solving customer problems with our content. Desired characteristics? Accurate (free of technical errors), professional quality, easy to use and read, and easy to localize (that last one comes from our editors, who tell us when we’ve phrased something the translation folks can’t deal with).
  • Assign values to your content – This is where educated guessing comes in. Some things can be clearly defined: $500 is the cost of the average Technical Assistance call to the company, so each topic in that troubleshooting guide could save the company $500. If we have 10 topics and assume that 10% of the 1,000 visitors to that content last month found a solution (and thus avoided a support call), we get around $50,000 per month in cost savings from that troubleshooting guide. We can extend this model to configuration guides, because we know our own support folks point customers to our guides to solve their issues. How about quality and accuracy? If we assume a certain cost to the company for fixing documentation errors (say, $100 per doc bug), multiplied by 5 customer-found bugs per month, we’re at $6,000 a year. But that doesn’t account for customer frustration, whether the documentation error ended up in a customer support call, or that support person tracking down the correct writer to fix the problem. If we assume 1 in 5 customer-found bugs has an ‘extended’ cost (support call, customer frustration, and potential lost revenue), we can say improved content quality saves the company $12,000 a year, per product. (For a company with over 1,000 products, that number gets very big!) The last way we can quantify content is to estimate lost revenue. With a deep dive into web analytics, we can determine whether our customers come to our documentation from marketing collateral. If we then assume a small percentage of those customers are evaluating the product, good vs. bad technical documentation can mean lost sales, and that is also a sizable number. If the average product purchase for a high-tech networking company is $10,000, and poor content leads to just 1% of 1,000 potential customers going someplace else, the lost revenue is upwards of $100,000 per product!
 Again, the numbers get quite big, and then you compare them to what it would cost the company to hire additional top-quality writers to address the issues. The average cost of a full-time employee runs around $255,000 a year (full benefits included). At $50,000 a month in avoided support calls, that troubleshooting guide is worth $600,000 a year, so one full-time writer dedicated to it would still net the company roughly $345,000 annually.
  • Measure every way you can – Web analytics can tell you how popular some content is (and some content isn’t!). Beyond that, we can use support call logs (if available) to determine how often our documentation solves customer issues (and thus shortens the length of a support call). Internal email aliases point to our content as employees help each other out, and customer-facing forums use our content too. Customer interviews can add to the value equation: What are customers looking for, and do we have that information? What are their top tasks when using our product? Do we cover those tasks clearly in our technical documentation?
  • Baseline it all – Assign value today, then run the same analysis after we make changes to determine whether we see measurable improvements. Do we get more hits on certain content? Fewer technical errors in our documentation because of improved SME review processes? Measure at the baseline, then remeasure six months or more after you’ve implemented a change to determine its effectiveness.
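The back-of-the-envelope model in the bullets above is easy to play with in a few lines of code. Every figure below is one of this post’s assumed placeholders (call cost, traffic, deflection rate, loaded employee cost), not real data:

```python
# Illustrative ROI sketch for a troubleshooting guide.
# All inputs are the assumed figures from the post, not real numbers.
AVG_SUPPORT_CALL_COST = 500    # assumed cost per Technical Assistance call ($)
MONTHLY_VISITORS = 1000        # assumed visitors to the guide last month
SELF_SERVE_RATE = 0.10         # assumed share who solve their issue from the docs

avoided_calls = MONTHLY_VISITORS * SELF_SERVE_RATE
monthly_savings = avoided_calls * AVG_SUPPORT_CALL_COST

FTE_COST = 255_000             # assumed fully loaded annual cost of one writer ($)
annual_savings = monthly_savings * 12
net_value = annual_savings - FTE_COST

print(f"Avoided calls per month: {avoided_calls:.0f}")
print(f"Support-call savings: ${monthly_savings:,.0f}/month, ${annual_savings:,.0f}/year")
print(f"Net annual value of one dedicated writer: ${net_value:,.0f}")
```

Nudging SELF_SERVE_RATE up or down is a quick way to show stakeholders how sensitive the whole estimate is to that one deflection assumption.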


Posted in content strategy | Tagged , | 1 Comment

Gathering Info on Content Strategy Conventions

This is what I have so far… and please refer to the URLs for the real info – this is just my very quick-n-dirty take on each of them as I consider which to try to attend in the coming year.


  • Confab – content strategy, workshops, case studies. Minneapolis, MN. ~$2K. May 9–11, 2011.
  • Intelligent Content – Rockley Group; best practices on content findability, usability, adaptability, and delivery. Palm Springs, CA. ~$1K. Feb 16–19, 2011.
  • Content Strategy Forum – content governance, user experience design, content metrics/analytics, localization, CMS, etc. London, UK. ~$2K. Sept 5–7, 2011.
  • Gilbane – CMS, XML, globalization, mobile content, search engine optimization. Boston, MA. $1,300 ($400 for workshops only). Nov 29–Dec 1, 2011.
Posted in content strategy | Tagged , | Leave a comment

Getting more into the DITA swing of things

The ol’ day job is taking a twist wherein I need to focus more heavily on DITA requirements for our group (tools, rendering, DITA 1.2, yadda yadda). So if this blog gets a little DITA-heavy over the next couple of months, bear with me, eh?

To kick things off, go check out Sarah O’Keefe’s Scriptorium post on the use and abuse of DITA (aka babyDITA). Interesting discussions going on in the comments, and can I say it again – BEST. CHART. EVER.

Posted in XML | Tagged | Leave a comment

Does Content Suffer from the 3-Click Rule?

Stop Counting Clicks


Back in the dim dawn of website design, there was an unwritten rule: don’t bury information more than three mouse-clicks away. That 3-click rule has long since been disputed (here, here, here, here, and here).

The number of clicks isn’t what’s important to users, but whether or not they’re successful at finding what they’re seeking. (here)

The guiding principle instead should be: make all clicks easy and obvious! If the user knows she’s clicking the right option on a given navigation page, and that option was obvious (not cluttered among 30 other clickable options), she gains confidence that she’s on the right track and will keep clicking (provided subsequent options are also obvious).

What’s This Have To Do With Content?

Content can be separated into bite-sized bits that a customer navigates to on the website, or it can be grouped into larger documents (multi-chapter user guides, etc).  We can group content because it’s all related information, which is a good thing, but when the presentation of that grouped content becomes too difficult to scan and navigate through, we are no longer helping our readers.

So when we decide what goes into a user guide, are we basing that decision on how much the content really needs to co-exist? Or are we grouping it together under a vague title (user guide vs. administration guide) that forces our readers to make a vague choice?


And how much of that choice is driven by the old rule that we don’t want extra clicks to get to our content? Once they open the guide, that’s it, right?

Fast, Easy Clicks Trump the 3-Click Rule

As content creators, we need to understand how user experience guidelines have changed and adapt our content accordingly. As part of that, we need to look not only at our grouping, but at the clarity of what is hidden under that grouping. Will a reader know to look in the vague ‘user guide’ vs. the equally vague ‘operations guide’? Would they be better served by a few extra mouse clicks that navigate deeper, letting us ungroup some of that content and raise the visibility of what was hidden? These are our decisions to make, in association with the UE design folks, to optimize our readers’ choices and fast scanning decisions as they look for the content they need.

Posted in content strategy, links | Tagged | Leave a comment

Sustainable Content and Why It’s Important

Today’s content strategy thought of the day brought to you by mbloomstien and her recent presentation on Creation, Curation, and the Ethics of Content Strategy (slides) .

She quotes from  Erin Kissane’s new book, The Elements of Content Strategy :

What more can I say? Sustainable content – quality content that your writers can create and maintain without going insane.  Might buy this book just for the quote!

Posted in content strategy | Tagged | Leave a comment

Crafting Good Document Titles

Document titles have to cover a lot – from the features and/or products covered, to optimizing search relevancy, and, if the document is downloadable, helping the user find it on their desktop three months from now.

Issues to Consider in Titles

Primary issues to consider when crafting your doc title:

  • Quick scannability – On a busy website with many documents, you need titles with some level of uniqueness, so customers can scan down a long list and find which of those documents they need.
  • Limited length – Studies show readers remember 5–8 words. Anything longer, and they’ve forgotten key elements. Keep doc titles short.
  • Avoid search truncation – Google truncates search results after 65 characters. Search your existing titles on Google and see what they look like before redesigning a title.
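A quick way to audit a pile of existing titles against that cutoff is to script the truncation. This is a rough sketch – the 65-character limit is the figure discussed above, and the helper name is my own invention:

```python
TRUNCATE_AT = 65  # approximate search-result title cutoff discussed above

def preview_title(title: str) -> str:
    """Show roughly how a title would appear once truncated in search results."""
    if len(title) <= TRUNCATE_AT:
        return title
    # Cut near the limit, back up to the last whole word, and mark the cut.
    return title[:TRUNCATE_AT - 3].rsplit(" ", 1)[0] + "..."

long_title = ("Grandma's Secret Recipe Passed Down From Generation "
              "to Generation for Hot Dog Surprise")
print(preview_title(long_title))
```

Run every title on a doc landing page through something like this and you see immediately which ones lose their trigger words to the ellipsis.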

Recommendations for Document Titles

  • Start titles with the most important trigger words. What those are will vary by product/project. If you have 10 related documents on a website and they all start with the same phrase – “Grandma’s Secret Recipe – Noodle Soup”, “Grandma’s Secret Recipe – Hot Dog Surprise” – it’s harder for your readers to scan for the recipe they want. Start instead with the recipe name… keep it simple.
  • Keep titles short – “Grandma’s Secret Recipe Passed Down From Generation to Generation for Hot Dog Surprise” is very long and gets truncated in Google searches.
  • Review your existing titles on your website and in Google searches, and make the best decision for your documents. There is likely not ONE set of rules we can apply across all our varied documents. Look at what you have today and figure out what makes each document or link unique – what is your reader using to choose one document over another? Make that the start of your document title.
Posted in Uncategorized | Tagged , | 2 Comments

XML and the Proliferation of Bad Writing


DITA XML reuse is a godsend to busy writers trying to keep ahead of the information deluge they are responsible for.  If Jane wrote a topic to cover the benefits of SpiffyNewProduct in her datasheet, then Dave can reuse that benefits topic in his design guide, and Jo can reuse it in her deployment guide, etc. etc.
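For readers newer to DITA, here’s roughly what that reuse looks like in markup. This is a minimal sketch – the file name, ids, and benefit text are all made up for illustration; only the conref mechanism itself is standard DITA:

```xml
<!-- benefits.dita: Jane's source topic, with an id on the reusable element -->
<topic id="spiffy_benefits">
  <title>Benefits of SpiffyNewProduct</title>
  <body>
    <p id="benefits_summary">SpiffyNewProduct cuts deployment time and
    simplifies ongoing management.</p>
  </body>
</topic>

<!-- In Dave's design guide (or Jo's deployment guide), the element is
     pulled in by content reference, so later edits to Jane's source
     flow into every document that reuses it: -->
<p conref="benefits.dita#spiffy_benefits/benefits_summary"/>
```

And that’s the catch: conref pulls in the source element exactly as written, warts and all, so the quality of Jane’s topic travels with it.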

Great stuff!  But what happens if Jane’s topic wasn’t written to the latest corporate style? Or if Jane was a junior writer in a separate department from Jo?  Jo is left with the options of:

  1. Writing her own topic to the latest corporate style.
  2. Negotiating with Jane and Dave to edit the existing topic to improve it.
  3. Just grabbing the existing topic, warts and all, and getting on with her own project.

How many writers do you think choose option 3? In a fast-paced environment where project schedules already assume reuse and not rewriting, more writers will pick the easy option of grabbing what’s there instead of improving on it.

Then there’s the other source of bad writing – legacy content that has been run through a script to convert it to XML. There again, we get large volumes of XML topics not necessarily written to today’s standard.

Bad writing can get reused just as easily as good writing. In a perfect world, we would each work toward improving that bad writing, to the benefit of new content and republished old content. XML projects that include reuse must budget time for improving and adapting those older topics to today’s writing standards. Reuse is not a drive-thru window for grab-n-go authoring.

Posted in XML | Tagged , | Leave a comment

Content Strategy Linkfest

These are some of the great articles posted recently on content strategy. While some are focused on marketing content strategy, much of the information is also applicable to technical communicators. Read and enjoy!

Gallery | Tagged , | Leave a comment

Content Evangelistas!

Welcome to our Content Evangelists blog. We’re a dedicated group of information developers, content strategists, and writer malcontents looking to share our thoughts, experiences, and content strategy lessons learned as they relate to complex technical products and support information.

Posted in Uncategorized | Leave a comment