Usability has become a well-established discipline, with defined processes, best practices, and several ISO standards. According to the ISO 9241-11 standard, usability is “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” For software, ISO/IEC 9126 defines usability as “the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions.”
Content is a critical part of understanding, learning, and using any product. To understand how the user is likely to interact with the product, we first need to understand the user’s goals and the tasks the user will carry out to reach those goals.
One example, at the complex end of the spectrum, would be enterprise accounting software. One of the goals might be to use accounting procedures to successfully close out month end. It takes a lot of content to support a user in understanding, learning, and using the system to reach that goal: training material, a user guide, user support content, and content that lives in the interface. An example at the simpler end of the spectrum would be a consumer product such as a smartphone. The amount of content is smaller, but it is there nevertheless. Both of these products get usability tested along the way, and even the marketing websites for each of them get tested, as effective marketing is considered important.

But what about the testing of the content itself? Usability testing rarely takes place on the content. The testing goes only as far as checking that the user can get to the page where the content resides, and then it ends. The usability testers can tick the box to indicate that their test is complete: the content has been found, and it is assumed to be effective. If that sounds like a fatal flaw in the test, you’re right.
An example that comes to mind is testing the effectiveness of content on a city’s website. The task put to the user during the usability test was to pay a parking ticket. The users easily found the page and indicated that they could complete the task. However, when they actually engaged with the content, we discovered that they found the text too long and didn’t read it; as a result, they overlooked the command button (clearly labelled “Pay Your Parking Ticket”) and instead tried to click on a static graphic.

This analysis gave us the information we needed to rework the content on the page to make it more usable. Without taking that extra time during testing, we would not have discovered that it was the content, rather than the navigation, that could have hindered users from completing the task at hand.
Because content is generally viewed by user experience professionals as outside their professional domain, it’s up to content professionals to ensure that the content itself gets tested. To ensure that the content meets user needs, it helps if a content professional is involved in four key aspects of testing.
Creating the test plan
The usability test plan describes how the tests will be carried out, who will take part, how long each test will take, and so on. Depending on the content and the intended audiences, it may be prudent to recruit participants across a range of ages, genders, literacy levels, and so on, to ensure that the content works for the full range of users. Testing the content may extend the length of each user’s test, or trade-offs may be needed to cover the key pages; either way, this needs to be negotiated with the usability professionals.
Creating the test cases
Test cases are often written in a way that has a bias toward navigation – in other words, when a user reaches a particular page, the test is considered complete and successful. It is important that the user not stop short of engaging with the content. The test cases should incorporate a think-aloud protocol that lets users voice their thoughts while they use the content to complete a task.
Watching the tests
Because content is so nuanced, it is almost impossible to get a good sense of how well it works for users without being present to observe their interactions. In a case study presented at a content strategy conference, the presenters demonstrated how key instructions on a page about diabetes were unusable, and after testing, they decided to add a video to the page. During the second round of tests, they discovered that this actually made the page less usable: when users got to the video, they clicked to watch it and forgot to return to the content below. It was direct observation that uncovered those flaws in the presentation of the content.
Analyzing the results
The results of user tests often get rolled up into aggregated results. Sometimes this is helpful for understanding general user behaviour – for example, which keywords people use to search for content. However, aggregation can also mask problems with individual pieces of content, so it is important to walk through the final analysis prepared by the user experience professionals to really understand the results.
Content testing is not inexpensive, and to get good results, the testing cannot be automated. As a result, this is an area that sometimes gets put aside as a “nice to have” rather than a core requirement. Yet, as content is an essential part of purchasing, understanding, and learning to use a product or service, organizations owe it to their customers to consider content as part of a successful user experience.
Rahel Anne Bailie is Chief Knowledge Officer at Scroll in London. Rahel also teaches in the Content Strategy Master’s Program at FH-Joanneum, runs the “Content, Seriously” meetup, is organising the Content Strategy Applied conference, and is working on her third book, on writing structured content. She is a Fellow of the Society for Technical Communication, the co-author of Content Strategy: Connecting the dots between business, brand, and benefits, and co-editor of The Language of Content Strategy.