Should you test all the Capabilities of your CMS?

We’ve all done something with a piece of software that had adverse effects, like a blue screen or a silly popup that says nothing. More recently, we learned that if you hold the new iPhone with your finger in a certain area, you lose your wireless signal. That would be a bummer if you were answering a serious question. There are plenty of other examples of sad software bugs. One of many possible reasons these bugs appear is inadequate test coverage of all product capabilities.

A product capability is synonymous with a product feature, and product features are comprised of many product functions. When implementing Commercial Off-the-Shelf (COTS) software, there is a school of thought that you do not need to test all the functions of the COTS product because 1) it was already tested by the software manufacturer and 2) typical project constraints (time, money and scope) don’t allow it. I used to think similarly. However, after implementing Content Management Systems (CMS) for a decade, I have found that not testing more of the functions in a system can be harmful to the end solution. In testing terms, we would say we want to increase the function/test coverage ratio.

Typical testing on a project is called functional testing. Functional testing exercises documented business features, e.g. creating content, that are written into some sort of system documentation. During the content-creation scenario, we test a variety of functions, e.g. menu -> file -> new, the WYSIWYG editor, and a save function. We probably also test some immediately ancillary functions, like a cancel link; as a side note, these ancillary tests are called alternative scenarios. In my experience, this is usually where testing stops. Given a positive test result, the testers and the business believe that the feature they tested acts as expected. However, we did not test the other ways the system allows for content creation.
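To make this concrete, here is a minimal sketch of the functional test just described — the happy path plus the cancel alternative scenario. The CMS client here is a hypothetical in-memory stand-in, not any real product’s API:

```python
# A hypothetical stand-in for a CMS editing session, used only to
# illustrate the shape of a functional test script.

class FakeCMS:
    """Minimal in-memory model of a CMS editing session (hypothetical)."""
    def __init__(self):
        self.repository = []   # saved content items
        self.draft = None      # content currently being edited

    def new_content(self, title):
        # menu -> file -> new: open a blank draft
        self.draft = {"title": title, "body": ""}

    def edit_body(self, text):
        # WYSIWYG editor: modify the draft body
        self.draft["body"] = text

    def save(self):
        # save function: persist the draft to the repository
        self.repository.append(self.draft)
        self.draft = None

    def cancel(self):
        # alternative scenario: discard the draft without saving
        self.draft = None


def test_create_content_happy_path():
    cms = FakeCMS()
    cms.new_content("Press Release")
    cms.edit_body("Hello, world.")
    cms.save()
    assert len(cms.repository) == 1
    assert cms.repository[0]["title"] == "Press Release"

def test_create_content_cancel():
    cms = FakeCMS()
    cms.new_content("Draft")
    cms.cancel()
    assert cms.repository == []   # nothing was persisted
```

Notice that both tests drive content creation through a single path; that narrow focus is exactly the gap discussed next.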

This is very relevant to content management systems, where many product functions perform similar actions against content. For example, there can be several ways to reach the content-creation function, e.g. “My Work Area,” “My Workflows,” “My Content” and “Watched Content.” Each navigation path to the content-creation function may act differently; e.g. “My Workflows” may produce a new-content form with limited form fields, while the “My Work Area” path may display workflow options. Stakeholders in the content management process will expect that if the software allows content to be created through these multiple paths, each content-creation function will perform similarly. Testing all these “extra” functions is called capability testing. Capabilities are functions available to users that were not explicitly written into the system specifications. There are some testing techniques that can help increase your testing coverage, e.g. combinatorial and pairwise testing, but that is the start of a different post.
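One way to script capability testing is to drive the same action through every entry point and compare the results against a reference path. The sketch below does this for the content-creation form; the entry-point names and the form model are hypothetical, and the stub deliberately reproduces the kind of inconsistency described above:

```python
# Sketch of a capability test: launch "create content" from each
# navigation entry point and check the resulting form is consistent.
# Entry points and form fields are hypothetical examples.

ENTRY_POINTS = ["My Work Area", "My Workflows", "My Content", "Watched Content"]

def open_create_form(entry_point):
    """Hypothetical: return the set of form fields presented when content
    creation is launched from the given entry point. In a real suite this
    would drive the CMS UI or API; here it models the inconsistency
    described above, where one path shows a limited form."""
    full_form = {"title", "body", "workflow", "publish_date"}
    if entry_point == "My Workflows":
        return full_form - {"workflow", "publish_date"}  # limited form
    return full_form

def find_inconsistent_entry_points():
    """Compare every entry point's form against a reference path."""
    reference = open_create_form("My Work Area")
    return [ep for ep in ENTRY_POINTS if open_create_form(ep) != reference]
```

Here `find_inconsistent_entry_points()` flags “My Workflows,” surfacing exactly the sort of divergence stakeholders would expect the test team to catch before users do.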

Even though I used a simple example, there can be extreme consequences to not testing all capabilities. For example, content could be deployed to a production website without being reviewed by the correct people. Such a deployment could post a 10-K without CxO approval.

Capability testing is difficult because testers need to hunt for these special forms of product features. It is important that test scripts are written to cover many, if not all, capabilities of the system, not just the ones included in functional testing per the system specification. This thoroughness has implications for both the test process and the entire project. Project teams need to work with stakeholders to make sure that the capabilities in the system are appropriate. It is not uncommon for the project team to discover these “extra” capabilities and then turn them off. The trick is to make sure you find them all, because if you don’t, the users will.
